New Perspectives on Generative Adversarial Networks
Generative Adversarial Networks (GANs) are a popular deep-learning approach to generative modeling, known for producing appealing samples; however, their theoretical properties are not yet fully understood, and they are notoriously difficult to train. In the first part of this talk, I will offer some insight into why GANs are a more meaningful framework for modeling high-dimensional data such as images than the more traditional maximum-likelihood approach, interpreting them as "parametric adversarial divergences" and grounding the analysis in statistical decision theory.

In the second part of the talk, I will address the difficulty of training GANs from an optimization perspective by importing tools from the mathematical-programming literature. I will survey the "variational inequality" framework, which encompasses most GAN formulations introduced so far, and present theoretical and empirical results on adapting standard methods from that literature, such as the extragradient method, to the training of GANs.
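For concreteness, here is one standard way these objects are written; the notation below is mine and may differ from the talk's. A parametric adversarial divergence restricts the discriminators to a parametric family $\mathcal{F}$ (e.g., neural networks):

    \operatorname{div}_{\mathcal{F}}(p \,\|\, q_\theta) = \sup_{f \in \mathcal{F}} \; \mathbb{E}_{x \sim p}[f(x)] - \mathbb{E}_{x \sim q_\theta}[f(x)]

Minimizing such a divergence over $\theta$ yields a saddle-point problem. Writing the joint parameters of a game $\min_\theta \max_\varphi \mathcal{L}(\theta, \varphi)$ as $w = (\theta, \varphi)$, with the game's vector field $F(w) = (\nabla_\theta \mathcal{L}(w), \, -\nabla_\varphi \mathcal{L}(w))$, the associated variational inequality asks for $w^*$ with $\langle F(w^*), w - w^* \rangle \ge 0$ for all feasible $w$, and the extragradient method reads:

    w_{t+1/2} = w_t - \eta\, F(w_t), \qquad w_{t+1} = w_t - \eta\, F(w_{t+1/2})

That is, gradients are evaluated at an extrapolated "lookahead" point and then applied from the original iterate, which damps the oscillations that plain simultaneous gradient descent exhibits on games.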
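Below is a minimal runnable sketch of that update for a toy GAN, assuming PyTorch; the toy data, architectures, losses, and step size are illustrative placeholders, not the talk's experimental setup.

    # Extragradient for a toy GAN in PyTorch: evaluate the game's gradients at a
    # lookahead point, then apply them from the original iterate.
    # (Illustrative sketch; models, data, and step size are placeholder choices.)
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator (logits)
    params = list(G.parameters()) + list(D.parameters())
    bce = nn.BCEWithLogitsLoss()
    eta, batch = 1e-3, 128

    def game_gradients():
        """Simultaneous gradients of both players at the current parameters."""
        x_real = 2.0 + 0.5 * torch.randn(batch, 1)   # toy target: a shifted Gaussian
        x_fake = G(torch.randn(batch, 1))
        d_loss = (bce(D(x_real), torch.ones(batch, 1))
                  + bce(D(x_fake.detach()), torch.zeros(batch, 1)))
        g_loss = bce(D(x_fake), torch.ones(batch, 1))  # non-saturating generator loss
        g_grads = torch.autograd.grad(g_loss, list(G.parameters()))
        d_grads = torch.autograd.grad(d_loss, list(D.parameters()))
        return list(g_grads) + list(d_grads)

    for step in range(2000):
        snapshot = [p.detach().clone() for p in params]       # w_t
        grads = game_gradients()                              # F(w_t)
        with torch.no_grad():                                 # extrapolate to w_{t+1/2}
            for p, g in zip(params, grads):
                p -= eta * g
        grads_half = game_gradients()                         # F(w_{t+1/2})
        with torch.no_grad():                                 # update from w_t: w_{t+1}
            for p, w, g in zip(params, snapshot, grads_half):
                p.copy_(w - eta * g)

Note the design choice: both players are updated simultaneously from the same snapshot, rather than alternating generator and discriminator steps as in standard GAN training loops.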