Machine Learning

Emergence of Invariance and Disentangling in Deep Representations



  • arXiv


    Using established principles from Information Theory and Statistics, we show that in a deep neural network invariance to nuisance factors is equivalent to information minimality of the learned representation, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. We then show that, in order to avoid memorization, we need to limit the quantity of information stored in the weights, which leads to a novel use of the Information Bottleneck Lagrangian on the weights as a learning criterion. This also has an alternative interpretation as minimizing a PAC-Bayesian bound on the test error. Finally, we exploit a duality between weights and activations induced by the architecture to show that the information in the weights bounds the minimality and Total Correlation of the layers, therefore showing that regularizing the weights, explicitly or implicitly using SGD, not only helps avoid overfitting, but also fosters invariance and disentangling of the learned representation. The theory also enables predicting sharp phase transitions between underfitting and overfitting random labels at precise information values, and sheds light on the relation between the geometry of the loss function, in particular so-called “flat minima,” and generalization.
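
    To make the learning criterion concrete, below is a minimal sketch of one way an Information Bottleneck Lagrangian on the weights can be implemented in practice: the term I(w; D) is replaced by its standard variational upper bound KL(q(w|D) || p(w)), using a factorized Gaussian posterior over the weights and a standard normal prior, so the loss becomes cross-entropy plus beta times the KL. This is not the authors' code; the PyTorch framing, the BayesianLinear and ib_weight_loss names, the layer sizes, the prior, the value of beta, and the toy data are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior over its weights.

    Sampling the weights at every forward pass injects the training noise
    discussed in the abstract; the KL term bounds the information they store.
    (Hypothetical helper, not from the paper.)
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_logvar = nn.Parameter(torch.full((out_features, in_features), -6.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_normal_(self.w_mu)

    def forward(self, x):
        # Reparameterized sample: w = mu + sigma * eps.
        sigma = torch.exp(0.5 * self.w_logvar)
        w = self.w_mu + sigma * torch.randn_like(sigma)
        return F.linear(x, w, self.bias)

    def kl_to_standard_normal(self):
        # KL(q(w|D) || N(0, I)): a variational upper bound on I(w; D).
        return 0.5 * torch.sum(
            self.w_mu ** 2 + self.w_logvar.exp() - self.w_logvar - 1.0
        )


def ib_weight_loss(model, x, y, beta):
    """Cross-entropy H(y|x, w) plus beta times the KL bound on I(w; D)."""
    logits = model(x)
    kl = sum(m.kl_to_standard_normal() for m in model.modules()
             if isinstance(m, BayesianLinear))
    # KL is scaled per example so beta has a batch-size-independent meaning.
    return F.cross_entropy(logits, y) + beta * kl / x.shape[0]


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(BayesianLinear(20, 64), nn.ReLU(), BayesianLinear(64, 3))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.randn(256, 20), torch.randint(0, 3, (256,))  # toy data
    for step in range(200):
        opt.zero_grad()
        loss = ib_weight_loss(model, x, y, beta=1e-3)
        loss.backward()
        opt.step()
    print(f"final loss: {loss.item():.3f}")
```

    Sweeping beta in a sketch like this is also how one would probe the underfitting/overfitting phase transition the abstract mentions: large beta starves the weights of information, small beta permits memorization.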

    Emergence of Invariance and Disentangling in Deep Representations
    by Alessandro Achille, Stefano Soatto
    https://arxiv.org/pdf/1706.01350v2.pdf
