Machine Learning

Multi-Generator Generative Adversarial Nets

    We propose in this paper a novel approach to addressing the mode collapse problem in Generative Adversarial Nets (GANs) by training many generators. The training procedure is formulated as a minimax game among many generators, a classifier, and a discriminator. The generators produce data to fool the discriminator while staying within the decision boundaries defined by the classifier as much as possible; the classifier estimates the probability that a sample came from each of the generators; and the discriminator estimates the probability that a sample came from the training data rather than from any of the generators. We develop a theoretical analysis showing that, at the equilibrium of this system, the Jensen-Shannon divergence between the equally weighted mixture of all generators' distributions and the real data distribution is minimal, while the Jensen-Shannon divergence among the generators' distributions is maximal. The generators can be trained efficiently via parameter sharing, adding minimal cost over the basic GAN model. We conduct extensive experiments on synthetic and real-world large-scale datasets (CIFAR-10 and STL-10) to evaluate the effectiveness of the proposed method. Experimental results demonstrate that our approach outperforms the latest state-of-the-art GAN variants in generating diverse and visually appealing samples.
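
    For concreteness, here is one plausible way to write the three-player game the abstract describes. The diversity weight \beta and the notation C_k(x) for the classifier's softmax output for generator k are our assumptions, not necessarily the paper's exact formulation:

        \min_{G_{1:K},\,C}\ \max_{D}\;
        \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)]
        + \mathbb{E}_{x \sim p_{\mathrm{model}}}[\log(1 - D(x))]
        - \beta\, \mathbb{E}_{k \sim \mathcal{U}\{1,\dots,K\},\; x \sim p_{G_k}}[\log C_k(x)]

    where p_{\mathrm{model}} = \frac{1}{K} \sum_k p_{G_k} is the equal-weight mixture of the generators' distributions. The first two terms are the standard GAN game, which at optimum minimizes the Jensen-Shannon divergence between p_{\mathrm{model}} and p_{\mathrm{data}}; the last term rewards generators whose samples the classifier can tell apart, which pushes the Jensen-Shannon divergence among p_{G_1}, \dots, p_{G_K} up, matching the equilibrium claim in the abstract.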

    Multi-Generator Generative Adversarial Nets
    by Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung
    https://arxiv.org/pdf/1708.02556v3.pdf
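
    Below is a minimal PyTorch sketch of the setup described above: K generators that share their hidden layers (one cheap realization of the parameter sharing the abstract mentions), plus a discriminator and classifier sharing a feature extractor. All names and sizes here (K, BETA, the per-generator input heads, the layer widths) are illustrative assumptions, not the authors' reference implementation.

        import torch
        import torch.nn as nn

        K, Z_DIM, X_DIM, BETA = 4, 100, 784, 1.0  # illustrative hyperparameters

        class SharedGenerators(nn.Module):
            """K generators: a small per-generator input head feeds layers
            shared by all generators, so extra generators add little cost."""
            def __init__(self):
                super().__init__()
                self.heads = nn.ModuleList([nn.Linear(Z_DIM, 256) for _ in range(K)])
                self.shared = nn.Sequential(
                    nn.ReLU(),
                    nn.Linear(256, 512), nn.ReLU(),
                    nn.Linear(512, X_DIM), nn.Tanh(),
                )
            def forward(self, z, k):
                return self.shared(self.heads[k](z))

        class DiscriminatorClassifier(nn.Module):
            """Discriminator head (real vs. generated) and classifier head
            (which generator) on top of one shared feature extractor."""
            def __init__(self):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Linear(X_DIM, 512), nn.LeakyReLU(0.2),
                    nn.Linear(512, 256), nn.LeakyReLU(0.2),
                )
                self.d_head = nn.Linear(256, 1)  # logit: real vs. generated
                self.c_head = nn.Linear(256, K)  # logits: generator index
            def forward(self, x):
                h = self.body(x)
                return self.d_head(h), self.c_head(h)

        G, DC = SharedGenerators(), DiscriminatorClassifier()
        bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

        def generator_loss(batch_size=64):
            # Sample from the equal-weight mixture: pick a generator uniformly.
            k = torch.randint(K, (1,)).item()
            x_fake = G(torch.randn(batch_size, Z_DIM), k)
            d_logit, c_logit = DC(x_fake)
            fool = bce(d_logit, torch.ones(batch_size, 1))  # fool the discriminator
            # Low cross-entropy means the classifier can identify generator k,
            # which pushes the generators' distributions apart.
            diverse = ce(c_logit, torch.full((batch_size,), k, dtype=torch.long))
            return fool + BETA * diverse

    The discriminator/classifier step (binary cross-entropy on real vs. generated samples plus cross-entropy on generator indices) would alternate with the step above in the usual GAN training loop.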
