Gradient descent GAN optimization is locally stable

Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze the "gradient descent" form of GAN optimization, i.e., the natural setting where we simultaneously take small gradient steps in both generator and discriminator parameters. We show that even though GAN optimization does not correspond to a convex-concave game (even for simple parameterizations), under proper conditions, equilibrium points of this optimization procedure are still *locally asymptotically stable* for the traditional GAN formulation. On the other hand, we show that the recently proposed Wasserstein GAN can have non-convergent limit cycles near equilibrium. Motivated by this stability analysis, we propose an additional regularization term for gradient descent GAN updates, which *is* able to guarantee local stability for both the WGAN and the traditional GAN, and also shows practical promise in speeding up convergence and addressing mode collapse.
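The dynamics the abstract describes can be illustrated on a toy bilinear game V(d, g) = d·g (a standard "Dirac-GAN"-style example, not the paper's exact setting): simultaneous gradient steps cycle and spiral away from equilibrium, while adding a gradient-norm penalty in the spirit of the paper's regularizer restores local convergence. The step size `lr`, penalty weight `eta`, and initial point are illustrative choices, not values from the paper.

```python
# Toy bilinear two-player game V(d, g) = d * g.
# The discriminator ascends V, the generator descends V, and both take
# simultaneous small gradient steps. With eta > 0, the generator also
# descends the penalty eta * ||dV/dd||^2 = eta * g^2 (d(g^2)/dg = 2g),
# a simplified stand-in for the paper's regularizer.

def simulate(steps, lr=0.1, eta=0.0):
    d, g = 1.0, 1.0  # parameters, initialized away from the equilibrium (0, 0)
    for _ in range(steps):
        grad_d = g  # dV/dd
        grad_g = d  # dV/dg
        # Simultaneous updates: both use the parameters from the same step.
        d_new = d + lr * grad_d
        g_new = g - lr * (grad_g + eta * 2.0 * g)
        d, g = d_new, g_new
    return (d * d + g * g) ** 0.5  # distance from the equilibrium (0, 0)

plain = simulate(200)          # unregularized: spirals outward from equilibrium
reg = simulate(200, eta=0.5)   # regularized: contracts toward (0, 0)
```

Each unregularized step multiplies the squared distance from equilibrium by (1 + lr²), so the iterates slowly spiral outward; the penalty term damps the generator update enough to make the equilibrium locally attracting in this toy game.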
by Vaishnavh Nagarajan, J. Zico Kolter
https://arxiv.org/pdf/1706.04156v3.pdf