Generative Adversarial Training for Gaussian Mixture Models

Wednesday, February 17, 2021 - 4:00pm to 4:30pm

Event Calendar Category

LIDS & Stats Tea

Speaker Name

Farzan Farnia



Zoom meeting id

972 9350 1600

Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning the complex distributions of image and text data, they perform suboptimally on multi-modal benchmarks such as Gaussian mixture models (GMMs). In this talk, we propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs. Motivated by optimal transport theory, we design the zero-sum game in GAT-GMM using a random linear generator and a softmax-based quadratic discriminator architecture, which leads to a nonconvex-concave minimax optimization problem. We show that a Gradient Descent Ascent (GDA) method converges to an approximate stationary minimax point of the GAT-GMM optimization problem. In the benchmark case of a mixture of two symmetric Gaussians, we further show that this stationary point recovers the true parameters of the underlying GMM. We numerically demonstrate that GAT-GMM can perform as well as the expectation-maximization algorithm in learning mixtures of Gaussians.
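To make the Gradient Descent Ascent procedure concrete, the sketch below runs simultaneous GDA on a crude toy version of this setup, not the actual GAT-GMM objective from the talk: data from a mixture of two symmetric Gaussians, a linear generator with parameter `mu`, and a discriminator reduced to a single quadratic feature direction `w` kept in the unit ball. All variable names, the objective, and the step sizes are illustrative assumptions; the talk's softmax-based quadratic discriminator and its analysis are what make a GDA stationary point actually recover the true parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: mixture of two symmetric Gaussians, N(+mu_star, I) and N(-mu_star, I).
mu_star = np.array([2.0, 0.0])
n = 4000
s_real = rng.choice([-1.0, 1.0], size=n)
X = s_real[:, None] * mu_star + rng.standard_normal((n, 2))

# Illustrative players (a simplification of the talk's architecture):
# generator G(z, s) = s * mu + z with parameter mu, and a discriminator that
# scores a sample x by the quadratic feature (w . x)^2, with w projected onto
# the unit ball so the inner maximization stays bounded.
mu = np.array([0.5, 0.1])
w = np.array([0.5, 0.1])

def project_unit_ball(v):
    norm = np.linalg.norm(v)
    return v if norm <= 1.0 else v / norm

eta = 0.05
for t in range(300):
    # Fresh fake batch from the current generator.
    s = rng.choice([-1.0, 1.0], size=512)
    Xf = s[:, None] * mu + rng.standard_normal((512, 2))

    # Toy objective V(mu, w) = E_real[(w.x)^2] - E_fake[(w.x)^2]:
    # the discriminator ascends in w, the generator descends in mu.
    grad_w = 2 * X.T @ (X @ w) / len(X) - 2 * Xf.T @ (Xf @ w) / len(Xf)
    grad_mu = -2 * np.mean((Xf @ w) * s) * w  # gradient of the -E_fake term

    # Simultaneous GDA step with projection on the discriminator side.
    w = project_unit_ball(w + eta * grad_w)
    mu = mu - eta * grad_mu
```

On this stripped-down objective, plain GDA can overshoot or stall rather than land exactly on `mu_star`; that gap is precisely why the talk's carefully designed discriminator and its stationary-point analysis matter for the symmetric two-Gaussian benchmark.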



Farzan Farnia is a postdoctoral research associate at the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, where he is co-supervised by Asu Ozdaglar and Ali Jadbabaie. Prior to joining MIT, Farzan received his master’s and PhD degrees in electrical engineering from Stanford University and his bachelor’s degrees in electrical engineering and mathematics from Sharif University of Technology. At Stanford, he was a graduate research assistant at the Information Systems Laboratory, advised by David Tse. Farzan’s research interests include information theory, statistical learning theory, and convex optimization. He received the Stanford Graduate Fellowship (Sequoia Capital Fellowship) from 2013 to 2016 and the Numerical Technology Founders Prize as the second top performer in Stanford’s electrical engineering PhD qualifying exams in 2014.