Machine Learning

Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning




  • arXiv


    In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) the privacy budget consumption is entirely independent of the number of training steps; (2) noise is adaptively injected into features based on each feature's contribution to the model output; and (3) the mechanism can be applied to a variety of deep neural networks. To achieve this, we develop a method to perturb the affine transformations of neurons and the loss functions used in deep neural networks. In addition, our mechanism intentionally adds "more noise" into features which are "less relevant" to the model output, and vice versa. Our theoretical analysis further derives the sensitivities and error bounds of our mechanism. Rigorous experiments conducted on the MNIST and CIFAR-10 datasets show that our mechanism is highly effective and outperforms existing solutions.

    Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning
    by NhatHai Phan, Xintao Wu, Han Hu, Dejing Dou
    https://arxiv.org/pdf/1709.05750v1.pdf
