Machine Learning

Adversarial Dropout for Supervised and Semi-supervised Learning




  • arXiv


    Training with adversarial examples, which are generated by adding a small but worst-case perturbation to input examples, has recently been shown to improve the generalization performance of neural networks. In contrast to these individually perturbed inputs, this paper introduces adversarial dropout: a minimal change to a dropout mask that maximizes the divergence between the output of the network under that mask and the training supervision. The identified adversarial dropout is used to reconfigure the neural network during training, and we demonstrate that training the reconfigured sub-network improves generalization performance on supervised and semi-supervised learning tasks on MNIST and CIFAR-10. Analyzing the trained models to explain the performance improvement, we find that adversarial dropout increases the sparsity of neural networks more than standard dropout does.
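    The abstract describes searching for a small set of mask changes that maximizes the divergence between the masked network's output and the supervision. A minimal NumPy sketch of one common first-order approximation of that search (not the paper's exact algorithm; `h`, `grad_h`, and `budget` are illustrative names): score each unit by how much toggling its mask bit would increase the divergence, then greedily flip the highest-scoring bits within a budget that keeps the mask close to the original.

    ```python
    import numpy as np

    def adversarial_dropout_mask(h, grad_h, base_mask, budget):
        """First-order sketch of an adversarial dropout mask search.

        h         : hidden activations, shape (n_units,)
        grad_h    : gradient of the divergence w.r.t. the masked activations
        base_mask : initial 0/1 dropout mask, shape (n_units,)
        budget    : max number of entries allowed to differ from base_mask
        """
        # Toggling unit i changes the masked activation by +-h[i], so to
        # first order the divergence changes by ~ +-grad_h[i] * h[i]:
        # flipping 0->1 adds h[i], flipping 1->0 removes it.
        gain = np.where(base_mask == 0, grad_h * h, -grad_h * h)
        mask = base_mask.copy()
        # Greedily toggle the entries with the largest positive gain,
        # staying within the budget (the "minimal set" constraint).
        for i in np.argsort(-gain):
            if np.sum(mask != base_mask) >= budget or gain[i] <= 0:
                break
            mask[i] = 1 - mask[i]
        return mask
    ```

    The returned mask defines the reconfigured sub-network; training then minimizes the divergence under this worst-case mask, analogous to training on adversarial input examples.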

    Adversarial Dropout for Supervised and Semi-supervised Learning
    by Sungrae Park, Jun-Keon Park, Su-Jin Shin, Il-Chul Moon
    https://arxiv.org/pdf/1707.03631v2.pdf
