High Dimensional Spaces, Deep Learning and Adversarial Examples

In this paper, we analyze deep learning from a mathematical point of view and derive several novel results. The results are based on intriguing mathematical properties of high dimensional spaces. We first look at perturbation-based adversarial examples and show how they can be understood using topological arguments in high dimensions. We point out a fallacy in an argument presented in a 2015 paper by Goodfellow et al. (see reference) and present a more rigorous, general and correct mathematical result that explains adversarial examples in terms of image manifolds. Second, we look at the optimization landscapes of deep neural networks and examine the number of saddle points relative to the number of local minima. Third, we show how the multiresolution nature of images explains perturbation-based adversarial examples in the form of a stronger result: the expectation of the $L_2$ norm of adversarial perturbations shrinks to 0 as image resolution becomes arbitrarily large. Finally, by incorporating the parts-whole manifold learning hypothesis for natural images, we investigate the workings of deep neural networks and the root causes of adversarial examples, and discuss how future improvements can be made and how adversarial examples can be eliminated.
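The claim that adversarial perturbations shrink as resolution grows can be illustrated with a toy numerical sketch (not the paper's manifold argument): for a random linear classifier sign(w·x) in dimension d, the minimal L2 perturbation moving a random point x across the decision boundary has norm |w·x|/||w||, and its size relative to ||x|| concentrates around sqrt(2/(pi*d)), vanishing as d grows. All names and the linear-classifier setup here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def min_perturbation_ratio(d, trials=200):
    """Average ratio ||delta|| / ||x|| over random (w, x) pairs, where
    delta is the smallest L2 perturbation that crosses the decision
    boundary of the linear classifier sign(w @ x)."""
    ratios = []
    for _ in range(trials):
        w = rng.standard_normal(d)
        x = rng.standard_normal(d)
        # Distance from x to the hyperplane {z : w @ z = 0}.
        delta_norm = abs(w @ x) / np.linalg.norm(w)
        ratios.append(delta_norm / np.linalg.norm(x))
    return float(np.mean(ratios))

# Dimensions standing in for 8x8, 32x32 and 128x128 "images".
for d in (64, 1024, 16384):
    print(d, min_perturbation_ratio(d))
```

The printed ratio decreases roughly like 1/sqrt(d): a fixed-margin misclassification requires a relatively smaller perturbation as the ambient dimension (image resolution) increases, which is the flavor of the high-dimensional effect the abstract describes.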
by Simant Dube
https://arxiv.org/pdf/1801.00634v4.pdf