Deep Generative Models

Probabilistic foundations and learning algorithms.

About Us

Generative models are a key paradigm for probabilistic reasoning within graphical models and probabilistic programming languages. They form one of the most exciting and rapidly evolving fields of statistical machine learning and artificial intelligence.


In this course, we will study the probabilistic foundations and learning algorithms for deep generative models and discuss application areas that have benefited from them.



Instructors

Volodymyr Kuleshov

Assistant Professor
Cornell Tech


Volodymyr Kuleshov focuses on machine learning and its applications in scientific discovery, health, and sustainability. He is also the co-founder of Afresh, a startup that uses AI to significantly reduce food waste.

 

Aditya Grover

Assistant Professor
UCLA


Aditya Grover's research centers on machine learning with limited supervision; he is currently focusing on probabilistic generative modeling and sequential decision making under uncertainty.

 

Yang Song

Assistant Professor
Caltech


Yang Song (宋飏) focuses on developing scalable methods for modeling, analyzing and generating complex, high-dimensional data.

 

Stefano Ermon

Associate Professor
Stanford


Stefano Ermon enables innovative solutions to problems of broad societal relevance through advances in probabilistic modeling, learning and inference.

 

Hongjun Wu

Producer
Cornell Tech


Hongjun Wu (吴泓骏) is interested in applying machine learning techniques to optimize 3D animation and video games. He is working on automated procedural mesh and shader generation with artificial intelligence.

 


What's Inside

Machine Learning Algorithms

A broad overview of the field of ML.

Introduction to algorithms from a broad range of areas across machine learning: generative models, support vector machines, tree-based algorithms, neural networks, gradient boosting, and more.

 

Mathematical Foundations

A rigorous definition of key concepts.

Algorithms derived from first principles using mathematical notation. Rigorous introduction to key concepts in machine learning.

 

Algorithm Implementations

Every algorithm is implemented in Python.

Executable lecture notes: Jupyter notebooks that display algorithm definitions and their implementations side by side. Over 20 algorithms are implemented from scratch in Python.

 


General Information

What you will learn

Key elements of this course include:

  • Foundational probabilistic principles of deep generative models.
  • DGM learning algorithms, and popular model families.
  • Applications in domains such as computer vision, NLP, and biomedicine.

Prerequisites

This PhD-level course requires a background in mathematics and programming at the level of a Master's course.

  • Basic knowledge of machine learning from at least one of CS4780, CS4701, or CS5785.
  • Basic knowledge of probability and calculus: students will work with computational and mathematical models.
  • Basic knowledge of deep neural networks (CNNs, RNNs; CS5787). Extensive experience implementing deep neural networks is not required but will be helpful for the class project.
  • Proficiency in a programming language (preferably Python) will be helpful if your class project involves an implementation.

 


Lecture 1: Introduction.

Natural agents excel at discovering patterns, extracting knowledge, and performing complex reasoning based on the data they observe. How can we build artificial learning systems to do the same?

Supervised Learning, Unsupervised Learning, Reinforcement Learning

Lecture 2: Introduction to Probabilistic Modeling.

Probabilistic Models, Discriminative Models, Generative Models

In this lecture, we define probabilistic models of data, compare discriminative models with generative models, and give our audience a first glimpse of deep generative models.
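
To make the distinction concrete, here is a toy sketch (illustrative only, not course code) of a generative model over a 1-D input x and a label y. Because it models the joint distribution p(x, y), it can both classify via Bayes' rule and sample fresh data; all parameters below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model of (x, y): class prior p(y) and
# class-conditional Gaussians p(x | y) with unit variance.
priors = np.array([0.5, 0.5])   # p(y = 0), p(y = 1)
means = np.array([-2.0, 2.0])   # mean of x under each class

def sample(n):
    """Generative models can synthesize data: y ~ p(y), then x ~ p(x | y)."""
    y = rng.choice(2, size=n, p=priors)
    x = rng.normal(loc=means[y], scale=1.0)
    return x, y

def predict_proba(x):
    """Classification via Bayes' rule: p(y | x) is proportional to p(x | y) p(y)."""
    joint = priors * np.exp(-0.5 * (x[:, None] - means) ** 2) / np.sqrt(2 * np.pi)
    return joint / joint.sum(axis=1, keepdims=True)

x, y = sample(5)
print(predict_proba(x))  # p(y | x), recovered from the generative model
```

A discriminative model would parameterize p(y | x) directly: it could classify, but not sample x.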


Lecture 3: Autoregressive Models.

Autoregressive Models, Recurrent Neural Networks

We will discuss basic and modern autoregressive models. We will also discuss how recurrent neural networks can work as autoregressive models.
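
A minimal sketch of the autoregressive idea, in the spirit of a fully-visible sigmoid belief network with made-up parameters: the chain rule factorizes p(x) into per-dimension conditionals, giving exact likelihoods and ancestral sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # number of binary dimensions in x

# p(x) = prod_i p(x_i | x_<i), each conditional a logistic regression
# on the preceding dimensions (weights here are random placeholders).
W = 0.1 * rng.normal(size=(D, D))
b = np.zeros(D)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_prob(x):
    """Exact log-likelihood via the chain rule of probability."""
    logp = 0.0
    for i in range(D):
        p_i = sigmoid(W[i, :i] @ x[:i] + b[i])  # p(x_i = 1 | x_<i)
        logp += np.log(p_i if x[i] == 1 else 1.0 - p_i)
    return logp

def sample():
    """Ancestral sampling: draw x_1, then x_2 | x_1, and so on."""
    x = np.zeros(D)
    for i in range(D):
        p_i = sigmoid(W[i, :i] @ x[:i] + b[i])
        x[i] = float(rng.random() < p_i)
    return x

x = sample()
print(x, log_prob(x))
```

An RNN plays the same role by summarizing x_<i in its hidden state rather than keeping separate weights per position.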


Lecture 4: Maximum Likelihood Learning.

Maximum Likelihood, KL-Divergence, Monte Carlo Estimation, Gradient Descent, Bias/Variance Tradeoff

This lecture is all about maximizing the likelihood of the data. We will discuss topics such as KL divergence and Monte Carlo estimation, and look at statistical issues such as the bias/variance tradeoff in maximum likelihood learning.
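
As a hedged illustration (the model family and step size are arbitrary choices), the sketch below fits a unit-variance Gaussian by stochastic gradient descent on a Monte Carlo estimate of the expected negative log-likelihood; minimizing this objective equals minimizing KL(p_data || p_theta) up to the data entropy, which does not depend on theta.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=1000)  # samples from an unknown p_data

def nll(theta, x):
    """Negative log-likelihood of a unit-variance Gaussian with mean theta."""
    return 0.5 * np.mean((x - theta) ** 2) + 0.5 * np.log(2 * np.pi)

theta, lr = 0.0, 0.1
for step in range(200):
    batch = rng.choice(data, size=64)  # Monte Carlo estimate over a minibatch
    grad = np.mean(theta - batch)      # gradient of the minibatch NLL
    theta -= lr * grad                 # stochastic gradient descent
print(theta, nll(theta, data))         # theta approaches the MLE: the sample mean
```

Each minibatch gradient is an unbiased but noisy Monte Carlo estimate of the full gradient.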


Lecture 5: Latent Variable Models.

Gaussian Mixture Models, Deep Latent Gaussian Models, Variational Inference, Maximum Marginal Likelihood Learning

Latent variable models are a very useful tool in our generative modeling toolbox. We will compare and give examples of shallow and deep latent variable models, and take a look at how to approximate the marginal likelihood using variational inference.
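
Below is a minimal sketch, assuming a toy one-dimensional latent Gaussian model (parameters invented for illustration), of the evidence lower bound (ELBO) that variational inference maximizes. Because this linear-Gaussian case has a tractable marginal likelihood, we can verify the bound numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
w = 2.0  # toy model: z ~ N(0, 1), x | z ~ N(w * z, 1)

def log_joint(x, z):
    return (-0.5 * z**2 - 0.5 * np.log(2 * np.pi)              # log p(z)
            - 0.5 * (x - w * z)**2 - 0.5 * np.log(2 * np.pi))  # log p(x | z)

def elbo(x, mu, sigma, n=2000):
    """Monte Carlo ELBO with q(z | x) = N(mu, sigma^2):
    E_q[log p(x, z) - log q(z)] <= log p(x)."""
    z = mu + sigma * rng.normal(size=n)  # reparameterized samples from q
    log_q = -0.5 * ((z - mu) / sigma)**2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)
    return np.mean(log_joint(x, z) - log_q)

# Exact marginal: x ~ N(0, w^2 + 1), so the bound can be checked.
x = 1.5
exact = -0.5 * x**2 / (w**2 + 1) - 0.5 * np.log(2 * np.pi * (w**2 + 1))
print(elbo(x, mu=0.5, sigma=1.0), "<=", exact)
```

The gap between the two numbers is KL(q || p(z | x)); it closes as q approaches the true posterior.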




Lecture 8: Advanced Flow Models.

Triangular Jacobians, Autoregressive Flows, Probability Distillation

Now that we have some knowledge of normalizing flows, we can begin discussing advanced flow models built on triangular Jacobians, autoregressive flows, probability distillation, and Parallel WaveNet.
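
The sketch below (weights are random placeholders) shows why triangular Jacobians matter: in an affine autoregressive flow, the Jacobian of x with respect to z is triangular, so the log-determinant in the change-of-variables formula collapses to a sum of per-dimension log-scales.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4

# x_i = mu_i(x_<i) + exp(alpha_i(x_<i)) * z_i, with strictly lower-triangular
# weights so that mu and alpha depend only on earlier dimensions.
A = np.tril(0.1 * rng.normal(size=(D, D)), k=-1)

def mu(x):    return A @ x
def alpha(x): return 0.1 * (A @ x)  # per-dimension log-scales

def forward(z):
    """Sampling is sequential: fill in one dimension at a time."""
    x = np.zeros(D)
    for i in range(D):
        x[i] = mu(x)[i] + np.exp(alpha(x)[i]) * z[i]
    return x

def log_prob(x):
    """Density evaluation is parallel: invert all dimensions at once,
    then apply change of variables with a triangular Jacobian."""
    z = (x - mu(x)) / np.exp(alpha(x))
    log_base = -0.5 * z @ z - 0.5 * D * np.log(2 * np.pi)  # log N(z; 0, I)
    return log_base - alpha(x).sum()                       # minus log|det dx/dz|

x = forward(rng.normal(size=D))
print(log_prob(x))
```

This asymmetry between slow sequential sampling and fast density evaluation is what probability distillation in Parallel WaveNet addresses: a fast-sampling student is trained to match a teacher whose densities are cheap.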


Lecture 9: Generative Adversarial Networks.

Likelihood-Free Learning, Generative Adversarial Networks

We will now turn toward likelihood-free learning and discuss the hot topic of generative adversarial networks (GANs).
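
Below is a minimal, illustrative GAN training loop on 1-D data (assuming PyTorch; the architectures and hyperparameters are arbitrary choices). Note that no likelihood appears anywhere: the generator learns purely from the discriminator's feedback.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator (logits)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = 3.0 + torch.randn(64, 1)    # data from N(3, 1)
    fake = G(torch.randn(64, 1))       # generator pushes noise through a network

    # Discriminator step: tell real samples apart from generated ones.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the discriminator (non-saturating objective).
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())  # drifts toward the data mean, 3
```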



Lecture 11: Energy-Based Models.

Energy-Based Models, Representation Learning

In this lecture, we explain energy-based models, how to train them, and how they can be used for representation learning.
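
As a sketch with an invented double-well energy function, the snippet below illustrates the defining property of energy-based models: p(x) is proportional to exp(-E(x)), and Langevin dynamics can draw approximate samples using only the gradient of E, never the intractable normalizing constant Z.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    return 0.25 * x**4 - 2.0 * x**2  # double well with modes near x = +/-2

def grad_energy(x):
    return x**3 - 4.0 * x

# Langevin dynamics: gradient descent on the energy plus Gaussian noise.
x = rng.normal(size=500)
eps = 0.01
for _ in range(1000):
    x = x - eps * grad_energy(x) + np.sqrt(2 * eps) * rng.normal(size=x.shape)

print(np.mean(x > 0), np.mean(np.abs(x)))  # samples split between the two wells
```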


Lecture 12: Score-Based Generative Models.

Score Functions, Score Matching, Sample Generation, Diffusion Models

In this lecture, we will explore score-based generative models and their connection to diffusion models.
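
A minimal sketch of denoising score matching (assuming PyTorch; the data, noise level, and architecture are toy choices): a small network is regressed onto the score of the noise-perturbed data distribution, whose regression target has a closed form for Gaussian noise.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
sigma = 0.5  # noise level of the perturbation kernel

# s_theta(x) approximates the score function: the gradient of log p_sigma(x).
score = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(score.parameters(), lr=1e-3)

for step in range(3000):
    x = 3.0 + torch.randn(128, 1)  # data from N(3, 1)
    noise = torch.randn_like(x)
    x_tilde = x + sigma * noise
    # Denoising score matching: the target -noise / sigma is the score of
    # the Gaussian perturbation kernel q(x_tilde | x).
    loss = ((score(x_tilde) + noise / sigma) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Perturbed data is N(3, 1 + sigma^2), so the true score is (3 - x) / (1 + sigma^2).
x_test = torch.tensor([[2.0], [3.0], [4.0]])
print(score(x_test).squeeze().tolist())  # roughly [0.8, 0.0, -0.8]
```

Diffusion models chain many such noise levels and use the learned scores to run the reverse, denoising process.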


Lecture 13: Probabilistic Reasoning.

Probabilistic Reasoning, Probabilistic Inference, Probabilistic Programming

We will explore probabilistic reasoning, which is an approach to machine learning in which we work with structured probabilistic models that encode our understanding of the problem.
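
For a concrete flavor, here is a tiny hand-rolled structured model (the classic rain/sprinkler/wet-grass example, with made-up probability tables) and exact posterior inference by enumerating the joint distribution.

```python
import itertools

# Structure encodes our understanding: rain and sprinkler are independent
# causes; wet grass depends on both.
p_rain, p_sprinkler = 0.2, 0.1
p_wet = {(0, 0): 0.01, (0, 1): 0.85, (1, 0): 0.80, (1, 1): 0.95}  # p(wet | rain, sprinkler)

def joint(rain, sprinkler, wet):
    pr = p_rain if rain else 1 - p_rain
    ps = p_sprinkler if sprinkler else 1 - p_sprinkler
    pw = p_wet[(rain, sprinkler)] if wet else 1 - p_wet[(rain, sprinkler)]
    return pr * ps * pw

# Probabilistic inference by enumeration: p(rain = 1 | wet = 1).
num = sum(joint(1, s, 1) for s in (0, 1))
den = sum(joint(r, s, 1) for r, s in itertools.product((0, 1), repeat=2))
print(num / den)  # posterior belief that it rained, given wet grass
```

Probabilistic programming languages automate exactly this: you write the model, and the language supplies inference.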


Lecture 14: Combining Generative Model Families.

Autoregressive Models, Latent Variable Models, Flow Models, GANs

We have covered several useful building blocks: autoregressive models, latent variable models, flow models, and GANs. How can we combine them to achieve different tradeoffs?


Lecture 15: Evaluating Generative Models.

Density Estimation, Sample Quality, Latent Variables

Quantitative evaluation of generative models is a challenging task: how should we compare the performance of different models?
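
For models that support density estimation, one standard protocol is average held-out log-likelihood; the sketch below (toy Gaussian models on synthetic data, purely illustrative) compares two candidates on a test set.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(loc=1.0, scale=2.0, size=1000)
test = rng.normal(loc=1.0, scale=2.0, size=1000)

def gaussian_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

models = {
    "fitted Gaussian": (train.mean(), train.std()),  # fit on training data
    "unit Gaussian": (0.0, 1.0),                     # a deliberately poor baseline
}

# Higher average held-out log-likelihood means a better density estimate.
for name, (mu, sigma) in models.items():
    print(name, gaussian_logpdf(test, mu, sigma).mean())
```

Log-likelihood alone does not capture sample quality, which is why metrics based on samples and latent representations are also needed.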