Generative models are an important class of unsupervised learning methods that have received a great deal of attention in recent years. These models learn the distribution of the training data and model it so that new instances can be generated that appear to come from the same dataset; in other words, the model learns to estimate the probability density function of the data. Generative modelling has been applied in several areas: examples include Google DeepMind's speech generator WaveNet, OpenAI's GPT-2 network, which can generate coherent text, and the image generator BigGAN.
These models can be implemented with classic machine learning algorithms, such as Gaussian Mixture Models (GMMs), or with deep learning algorithms, known as deep generative models. Deep generative models can in turn be divided into different categories depending on how they perform the density estimation task:
- Variational autoencoders (VAEs), which use latent-variable models to explicitly model the probability density function of the training set. By encoding the input into a compressed latent space and then decoding it to reconstruct the input, a VAE optimizes the log-likelihood of the data by maximizing the evidence lower bound (ELBO).
- Autoregressive models and flow-based models, which build an explicit density model. Using the chain rule of probability (autoregressive models) or the change-of-variables formula (flow-based models), they can tractably maximize the likelihood of the training data.
- Generative Adversarial Networks (GANs), composed of a generator network coupled to a discriminator network; the discriminator forces the generator to produce images realistic enough to be mistaken for ground-truth ones (implicit density estimation).
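As a concrete instance of the classic approach mentioned above, a GMM can be fitted to data and then both sampled from and evaluated as a density. A minimal sketch with scikit-learn, using made-up two-cluster toy data:

```python
# Sketch: fitting a Gaussian Mixture Model and generating new samples.
# The two-cluster toy data below is invented purely for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy training data drawn from two 2-D clusters
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2)),
    rng.normal(loc=[4.0, 4.0], scale=0.5, size=(200, 2)),
])

# Fit a 2-component GMM: this estimates the density p(x) of the data
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)

# Generate new instances that appear to come from the same dataset
new_samples, _ = gmm.sample(5)
print(new_samples.shape)                 # (5, 2)

# score_samples returns log p(x), the estimated density at each point
print(gmm.score_samples(data[:3]))
```

`sample` draws from the learned mixture, while `score_samples` exposes the explicit density estimate, the two capabilities that define a generative model.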
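The ELBO that a VAE maximizes can be sketched numerically. Assuming a Gaussian encoder q(z|x) = N(mu, diag(sigma^2)) and a standard-normal prior, the KL term has a closed form; the encoder outputs and the toy decoder below are assumed values, not a trained network:

```python
# Sketch of the ELBO computation at the heart of a VAE.
# mu, log_var, and the "decoder" are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Encoder outputs for one input x (assumed values for illustration)
mu = np.array([0.3, -0.1])
log_var = np.array([-0.5, -0.2])

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Toy decoder and reconstruction log-likelihood log p(x|z) (unit-variance Gaussian)
x = np.array([0.5, 0.0])
x_hat = np.tanh(z)                        # stand-in for the decoder network
recon_log_lik = -0.5 * np.sum((x - x_hat) ** 2)

# ELBO = reconstruction term - KL term; a lower bound on log p(x)
elbo = recon_log_lik - kl_to_standard_normal(mu, log_var)
print(elbo)
```

Because the KL term is non-negative, the ELBO never exceeds the reconstruction log-likelihood, which is what makes it a valid lower bound to maximize.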
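The autoregressive factorization p(x) = prod_i p(x_i | x_1, ..., x_{i-1}) can be illustrated with the simplest possible case: a character-level bigram model, where the history is truncated to one symbol. The tiny corpus below is made up for the demo:

```python
# Sketch of autoregressive density estimation with a bigram model.
# The corpus is illustrative only.
import numpy as np

corpus = "abababbaabab"
symbols = sorted(set(corpus))
idx = {s: i for i, s in enumerate(symbols)}

# Count transitions and normalize rows into conditional distributions
counts = np.ones((len(symbols), len(symbols)))     # Laplace smoothing
for a, b in zip(corpus, corpus[1:]):
    counts[idx[a], idx[b]] += 1
cond = counts / counts.sum(axis=1, keepdims=True)  # p(x_i | x_{i-1})

def log_likelihood(seq):
    # Chain rule: sum of log conditionals (uniform start for the first symbol)
    ll = np.log(1.0 / len(symbols))
    for a, b in zip(seq, seq[1:]):
        ll += np.log(cond[idx[a], idx[b]])
    return ll

print(log_likelihood("abab"))   # in-distribution string scores higher
print(log_likelihood("aaaa"))   # than an unlikely one
```

Deep autoregressive models replace the count table with a neural network conditioned on the full history, but the likelihood they maximize decomposes in exactly this way.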
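The adversarial objective of a GAN, V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))], can be sketched without any training loop. The hand-crafted "discriminator" below, which simply trusts samples near 2.0, is an assumption for illustration, not a learned network:

```python
# Sketch of the two terms in the GAN value function.
# The discriminator and both sample sets are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x):
    # Toy discriminator: outputs in (0, 1), high for samples near 2.0
    return 1.0 / (1.0 + np.exp(4.0 * ((x - 2.0) ** 2 - 1.0)))

real = rng.normal(2.0, 0.1, size=1000)   # "ground truth" samples
fake = rng.normal(0.0, 0.1, size=1000)   # generator output, far off target

# The discriminator is trained to maximize this value...
v = np.mean(np.log(discriminator(real))) + np.mean(np.log(1.0 - discriminator(fake)))

# ...while the generator is trained to fool it, pushing D(fake) toward 1
print(np.mean(discriminator(real)))      # near 1: D trusts real samples
print(np.mean(discriminator(fake)))      # near 0: D rejects the fakes
```

Training alternates between the two players; at equilibrium the generator's samples match the data distribution, even though no density is ever written down, hence "implicit" density estimation.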
In medical imaging, generative models can be used for data augmentation, a critical point given the usual unavailability of large datasets in the field; for anomaly detection (detecting out-of-distribution examples), important for triaging and unsupervised segmentation; for image super-resolution; and for domain adaptation.
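The anomaly-detection use case follows directly from density estimation: a model fitted on in-distribution data assigns low log p(x) to out-of-distribution examples, which can then be flagged by thresholding. A minimal sketch, where the GMM, the toy data, and the percentile threshold are all illustrative assumptions:

```python
# Sketch of likelihood-based out-of-distribution detection.
# Model choice, data, and threshold are assumptions for the demo.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 2))              # "normal" data
gmm = GaussianMixture(n_components=1, random_state=0).fit(train)

# Threshold set at a low percentile of the training log-likelihoods
threshold = np.percentile(gmm.score_samples(train), 1)

inlier = np.array([[0.1, -0.2]])
outlier = np.array([[8.0, 8.0]])
print(gmm.score_samples(inlier) > threshold)     # in-distribution: passes
print(gmm.score_samples(outlier) > threshold)    # out-of-distribution: flagged
```

The same recipe applies with any of the explicit-density deep models above in place of the GMM; for images, the log-likelihood would come from a VAE's ELBO or an autoregressive model's chain-rule factorization.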