On the convergence properties of GAN training

Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this note we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is generally not convergent. Furthermore, we discuss recent regularization strategies that were proposed to stabilize GAN training. Our analysis shows that while GAN training with instance noise or gradient penalties converges, Wasserstein GANs and WGAN-GP with a finite number of discriminator updates per generator update do not, in general, converge to the equilibrium point. We explain these results and show that both instance noise and gradient penalties constitute solutions to the problem of purely imaginary eigenvalues of the Jacobian of the gradient vector field. Based on our analysis, we also propose a simplified gradient penalty with the same effects on local convergence as more complicated penalties.
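To make the abstract's central claim concrete, here is a minimal Python sketch in the spirit of the counterexample it describes: the data distribution is a Dirac at 0, the generator produces a Dirac at theta, and the discriminator is linear, D(x) = psi * x, with a WGAN-style loss. At the equilibrium (0, 0) the Jacobian of the gradient vector field is [[0, 1], [-1, 0]], whose eigenvalues are the purely imaginary pair +/- i, so simultaneous gradient updates spiral around the equilibrium instead of converging. Adding a zero-centered gradient penalty on real data, which in this setup reduces to (gamma/2) * psi**2, shifts the eigenvalues into the left half-plane. The learning rate, initial point, and penalty weight below are illustrative choices, not values taken from the paper.

import numpy as np

def simulate(gamma, steps=500, lr=0.1):
    # Dirac-style GAN: data = Dirac at 0, generator = Dirac at theta,
    # discriminator D(x) = psi * x with a WGAN-style loss. The generator
    # minimizes -psi*theta; the discriminator maximizes
    # -psi*theta - (gamma/2)*psi**2, where the penalty term is the
    # zero-centered gradient penalty on real data (grad_x D(x) = psi,
    # and the real data sits at x = 0).
    theta, psi = 1.0, 1.0  # start away from the equilibrium (0, 0)
    for _ in range(steps):
        theta_new = theta + lr * psi                  # descent on -psi*theta
        psi_new = psi + lr * (-theta - gamma * psi)   # ascent on penalized loss
        theta, psi = theta_new, psi_new               # simultaneous update
    return np.hypot(theta, psi)  # distance to the equilibrium

print(simulate(gamma=0.0))  # ~17: unregularized training spirals outward
print(simulate(gamma=1.0))  # ~1e-10: the gradient penalty yields convergence

With gamma = 0 the distance to the equilibrium grows by a factor of sqrt(1 + lr**2) at every step, exactly the behavior predicted by the purely imaginary eigenvalues; any gamma > 0 gives the Jacobian eigenvalues a negative real part and the iterates converge.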
by Lars Mescheder, Andreas Geiger, and Sebastian Nowozin
https://arxiv.org/pdf/1801.04406v1.pdf