Question 1
Which of the following deep generative models explicitly models the data distribution using a series of invertible transformations?
Question 2
In the context of a Variational Autoencoder (VAE), what is the primary purpose of the 'reparameterization trick'?
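For reference while answering, the reparameterization trick named in this question can be sketched as follows. This is an illustrative NumPy sketch, not part of the quiz: the idea is to move the sampling noise into an auxiliary variable so the path through the distribution's parameters stays deterministic and differentiable.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Draw z = mu + sigma * eps with eps ~ N(0, I).
    # The randomness lives only in eps, so gradients can flow
    # through mu and log_var (the deterministic path).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

mu = np.zeros(3)
log_var = np.zeros(3)  # log_var = 0 means sigma = 1
z = reparameterize(mu, log_var)
```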
Question 3
Which of the following statements best describes the concept of 'mode collapse' in Generative Adversarial Networks (GANs)?
Question 4
Which component of a Variational Autoencoder (VAE) is responsible for mapping the input data to a latent space representation?
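The component this question asks about can be sketched minimally as below. This is an illustrative sketch with placeholder random weights, not the quiz's answer key: a VAE encoder maps an input to the parameters (mean and log-variance) of a diagonal Gaussian over the latent space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder linear weights: input dimension 4, latent dimension 2.
W_mu = rng.standard_normal((2, 4))
W_lv = rng.standard_normal((2, 4))

def encode(x):
    # Map input x to (mu, log_var) of q(z | x); a real encoder
    # would be a neural network rather than a single linear map.
    return W_mu @ x, W_lv @ x

x = rng.standard_normal(4)
mu, log_var = encode(x)
```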
Question 5
Flow-based generative models are particularly advantageous for tasks requiring exact likelihood computation because:
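The exact-likelihood property this question refers to comes from the change-of-variables formula. The sketch below, using a deliberately simple element-wise affine transformation (an assumption for illustration, not a model from the quiz), shows how an invertible map with a tractable log-determinant yields an exact log-density.

```python
import numpy as np

def log_prob_base(z):
    # Log-density of a standard normal, summed over dimensions.
    return -0.5 * np.sum(z**2 + np.log(2 * np.pi))

def affine_flow_log_prob(x, scale, shift):
    # Invert x = scale * z + shift, then apply change of variables:
    #   log p(x) = log p_base(z) - sum(log |scale|)
    # For this element-wise map the Jacobian is diagonal, so the
    # log-determinant is just the sum of log |scale|.
    z = (x - shift) / scale
    log_det = np.sum(np.log(np.abs(scale)))
    return log_prob_base(z) - log_det

x = np.array([0.5, -1.0])
lp = affine_flow_log_prob(x, scale=np.array([2.0, 2.0]), shift=np.zeros(2))
```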