Machine Learning

Learning Differentially Private Language Models Without Losing Accuracy

  • arXiv

    We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees without sacrificing predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes “large step” updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than decreased utility as in most prior work. We find that private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.
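
    The mechanism the abstract summarizes, clipping each user's update and then adding calibrated Gaussian noise to the federated average, can be sketched in a few lines. The sketch below is an illustration rather than the paper's exact DP-FedAvg procedure: the function name dp_federated_average, the parameters clip_norm and noise_multiplier, and the plain NumPy aggregation are assumptions here, and the full method additionally involves per-round user sampling and privacy accounting.

    import numpy as np

    def dp_federated_average(user_updates, clip_norm, noise_multiplier, rng=None):
        """Aggregate per-user model updates with user-level privacy noise.

        Illustrative sketch: each user's update is clipped to an L2 norm
        of clip_norm, the clipped updates are averaged, and Gaussian noise
        scaled to the per-user contribution bound (clip_norm / n) is added
        to the average.
        """
        if rng is None:
            rng = np.random.default_rng()
        n = len(user_updates)

        clipped = []
        for delta in user_updates:
            norm = np.linalg.norm(delta)
            # Scale down any update whose L2 norm exceeds the clipping bound.
            clipped.append(delta * min(1.0, clip_norm / max(norm, 1e-12)))

        mean_update = np.mean(clipped, axis=0)
        # One user can shift the average by at most clip_norm / n,
        # so the noise standard deviation is scaled to that bound.
        noise_std = noise_multiplier * clip_norm / n
        return mean_update + rng.normal(0.0, noise_std, size=mean_update.shape)

    # Example: 1,000 simulated users, each contributing a 10-dimensional update.
    rng = np.random.default_rng(0)
    updates = [rng.normal(size=10) for _ in range(1000)]
    noisy_avg = dp_federated_average(updates, clip_norm=1.0,
                                     noise_multiplier=1.0, rng=rng)

    Note that the per-user noise contribution clip_norm / n shrinks as the number of users grows, which is the intuition behind the abstract's claim that a sufficiently large user count lets privacy cost computation rather than utility.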

    Learning Differentially Private Language Models Without Losing Accuracy
    by H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang
    https://arxiv.org/pdf/1710.06963v1.pdf
