Machine Learning

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples



  • arXiv

    Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples: a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify. Existing methods for crafting adversarial examples are based on $L_2$ and $L_\infty$ distortion metrics. However, despite the fact that $L_1$ distortion accounts for the total variation and encourages sparsity in the perturbation, few methods have been developed for crafting $L_1$-based adversarial examples. In this paper, we formulate the process of attacking DNNs via adversarial examples as an elastic-net regularized optimization problem. Our Elastic-net Attacks to DNNs (EAD) feature $L_1$-oriented adversarial examples and include the state-of-the-art $L_2$ attack as a special case. Experimental results on MNIST, CIFAR10 and ImageNet show that EAD can yield a distinct set of adversarial examples and attains attack performance comparable to state-of-the-art methods in different attack scenarios. More importantly, EAD leads to improved attack transferability and complements adversarial training for DNNs, offering novel insights into leveraging $L_1$ distortion in adversarial learning and its security implications for DNNs.
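    To make the formulation concrete, the sketch below is an illustrative PyTorch rendering of the elastic-net objective described in the abstract, c * f(x_adv) + beta * ||x_adv - x||_1 + ||x_adv - x||_2^2, minimized with an ISTA-style soft-thresholding step to handle the non-smooth $L_1$ term. The model interface, the targeted margin loss, and all hyperparameter values here are assumptions made for the example; this is not the authors' released implementation.

        # Illustrative sketch (not the authors' code) of an elastic-net regularized attack:
        # minimize  c * f(x_adv) + beta * ||x_adv - x||_1 + ||x_adv - x||_2^2
        # over x_adv in [0, 1]^p, using soft-thresholding (ISTA) for the L1 term.
        import torch
        import torch.nn.functional as F

        def ead_attack(model, x, target, c=1.0, beta=1e-2, lr=1e-2, steps=1000, kappa=0.0):
            """Targeted elastic-net attack sketch.
            x:      input batch with pixel values in [0, 1]
            target: desired labels, shape (B,)
            """
            x = x.detach()
            x_adv = x.clone()
            for _ in range(steps):
                x_adv.requires_grad_(True)
                logits = model(x_adv)
                # C&W-style margin loss: push the target logit above the best non-target logit
                target_logit = logits.gather(1, target.view(-1, 1)).squeeze(1)
                mask = F.one_hot(target, logits.size(1)).bool()
                best_other = logits.masked_fill(mask, float("-inf")).max(dim=1).values
                f_loss = torch.clamp(best_other - target_logit + kappa, min=0).sum()
                l2_loss = ((x_adv - x) ** 2).sum()
                smooth_loss = c * f_loss + l2_loss   # differentiable part of the objective
                grad, = torch.autograd.grad(smooth_loss, x_adv)
                with torch.no_grad():
                    z = x_adv - lr * grad            # gradient step on the smooth part
                    diff = z - x
                    # soft-thresholding (prox of beta * ||.||_1) encourages sparse perturbations
                    diff = torch.sign(diff) * torch.clamp(diff.abs() - beta * lr, min=0)
                    x_adv = (x + diff).clamp(0, 1).detach()
            return x_adv

    Setting beta to 0 drops the $L_1$ term and leaves the $L_2$-regularized objective, which is one way to read the abstract's statement that the state-of-the-art $L_2$ attack arises as a special case.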

    EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
    by Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh
    https://arxiv.org/pdf/1709.04114v1.pdf
