Machine Learning

Class-Splitting Generative Adversarial Networks



    Generative Adversarial Networks (GANs) produce systematically better-quality samples when class label information is provided, i.e., in the conditional GAN setup. This is still observed for the recently proposed Wasserstein GAN formulation, which stabilized adversarial training and allows considering high-capacity network architectures such as ResNet. In this work we show how to boost a conditional GAN by augmenting the available class labels. The new classes come from clustering in the representation space learned by the same GAN model. The proposed strategy is also feasible when no class information is available, i.e., in the unsupervised setup. Our generated samples reach state-of-the-art Inception scores on the CIFAR-10 and STL-10 datasets in both the supervised and unsupervised setups.
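    The core label-augmentation step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes feature vectors for each sample have already been extracted from the GAN's learned representation (e.g. discriminator activations), and it splits each original class into sub-classes with a simple k-means clustering (farthest-point initialization) written in plain NumPy. The function name `split_classes` and all parameters are hypothetical.

    ```python
    import numpy as np

    def split_classes(features, labels, splits_per_class=2, iters=20):
        """Augment class labels by clustering each class's samples in a
        learned representation space (a sketch of the class-splitting idea).

        features: (n, d) array of per-sample feature vectors, assumed to come
                  from the GAN's learned representation.
        labels:   (n,) array of original integer class labels.
        Returns a (n,) array of new labels with
        `splits_per_class` sub-classes per original class.
        """
        new_labels = np.empty_like(labels)
        next_id = 0
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            x = features[idx]

            # Farthest-point initialization: deterministic, spreads the
            # initial centers across the class's feature distribution.
            centers = [x[0]]
            for _ in range(splits_per_class - 1):
                d = np.min([((x - ctr) ** 2).sum(1) for ctr in centers], axis=0)
                centers.append(x[d.argmax()])
            centers = np.array(centers)

            # Plain Lloyd-style k-means on this class's features.
            for _ in range(iters):
                d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                assign = d.argmin(1)
                for k in range(splits_per_class):
                    mask = assign == k
                    if mask.any():
                        centers[k] = x[mask].mean(0)

            # Relabel: each original class c maps to a disjoint block of
            # new sub-class ids, usable as conditioning labels for the GAN.
            new_labels[idx] = next_id + assign
            next_id += splits_per_class
        return new_labels
    ```

    The new labels can then be fed back as the conditioning signal for further conditional-GAN training; in the unsupervised setup, one would start from a single class covering all samples and split it the same way.
    
    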

    Class-Splitting Generative Adversarial Networks
    by Guillermo L. Grinblat, Lucas C. Uzal, Pablo M. Granitto
