Machine Learning

A Learning Approach to Secure Learning



  • arXiv

    Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples, data points cleverly constructed to fool the classifier. Such attacks can be devastating in practice, especially as DNNs are applied to increasingly critical tasks such as image recognition in autonomous driving. In this paper, we introduce a new perspective on the problem. We do so by first defining the robustness of a classifier to adversarial exploitation. Next, we show that adversarial example generation and defense can both be posed as learning problems, which are duals of each other. We also show formally that our defense aims to increase the robustness of the classifier. We demonstrate the efficacy of our techniques through experiments on the MNIST and CIFAR-10 datasets.
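
    As a rough illustration of the "attack as a learning problem" idea, here is a minimal PyTorch sketch (not the authors' construction; the generator architecture, the epsilon bound, and the training loop are assumptions for demonstration) in which a small generator network is trained to produce bounded perturbations that make a fixed, pretrained classifier misclassify its inputs:

        # Hypothetical sketch: adversarial example generation framed as learning.
        # Assumes a pretrained MNIST-style classifier and a DataLoader of (image, label) pairs.
        import torch
        import torch.nn as nn

        class PerturbationGenerator(nn.Module):
            """Maps an image to an additive perturbation bounded by eps (L-infinity)."""
            def __init__(self, eps=0.3):
                super().__init__()
                self.eps = eps
                self.net = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),  # outputs in [-1, 1]
                )

            def forward(self, x):
                return self.eps * self.net(x)  # perturbation in [-eps, eps]

        def train_attacker(generator, classifier, loader, epochs=1, device="cpu"):
            """Train the generator to maximize the fixed classifier's loss on perturbed inputs."""
            classifier.eval()  # the victim model stays fixed; only the generator is updated
            opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
            ce = nn.CrossEntropyLoss()
            for _ in range(epochs):
                for x, y in loader:
                    x, y = x.to(device), y.to(device)
                    x_adv = (x + generator(x)).clamp(0.0, 1.0)  # keep pixels valid
                    loss = -ce(classifier(x_adv), y)  # negate: the attacker wants high loss
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
            return generator

    Under this framing, the defense side of the duality described in the abstract would symmetrically retrain the classifier on the perturbed inputs produced by such a generator.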

    A Learning Approach to Secure Learning
    by Linh Nguyen, Arunesh Sinha
    https://arxiv.org/pdf/1709.04447v1.pdf
