Machine Learning

Function space analysis of deep learning representation layers

In this paper we propose a function space approach to representation learning and to the analysis of the representation layers in deep learning architectures. We show how to compute a weak-type Besov smoothness index that quantifies the geometry of the clustering in the feature space. This approach has already been applied successfully to improve the performance of machine learning algorithms such as Random Forests and tree-based gradient boosting. Our experiments demonstrate that in well-known, well-performing trained networks, the Besov smoothness of the training set, measured in the feature-map representation of each hidden layer, increases from layer to layer. We also contribute to the understanding of generalization by showing how the Besov smoothness of the representations decreases as we add more mislabeled examples to the training data. We hope this approach will contribute to the demystification of some aspects of deep learning.
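
The pipeline the abstract describes can be sketched in a few lines: extract the feature map of each hidden layer, fit a tree to the labels in that feature space, and estimate a weak-type smoothness index from the decay rate of the sorted wavelet-coefficient norms, following the tree-wavelet construction in the authors' earlier Random Forest work. The sketch below is an illustration under stated assumptions, not the authors' code: the network is an untrained toy MLP, a single CART tree stands in for a forest, class labels are cast to scalars, and the exact normalization of the index is simplified to a plain log-log decay exponent.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeRegressor

def relu(x):
    return np.maximum(x, 0.0)

def layer_features(X, widths, rng):
    """Forward pass through random ReLU layers, returning each layer's output.
    (Untrained toy network, just to show the mechanics; the paper uses
    trained architectures.)"""
    feats, H = [], X
    for w in widths:
        W = rng.normal(scale=1.0 / np.sqrt(H.shape[1]), size=(H.shape[1], w))
        H = relu(H @ W)
        feats.append(H)
    return feats

def weak_smoothness_proxy(X, y, max_leaf_nodes=64):
    """Decay exponent of sorted tree-wavelet coefficient norms.

    For a child node C of parent P, the coefficient norm is taken as
    sqrt(|C|/N) * |mean_y(C) - mean_y(P)|, as in the tree-wavelet
    construction; a faster decay of the sorted norms indicates a
    geometrically better-clustered labeling in this feature space."""
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes).fit(X, y).tree_
    N = X.shape[0]
    parent = np.full(tree.node_count, -1)
    for p in range(tree.node_count):
        for c in (tree.children_left[p], tree.children_right[p]):
            if c != -1:
                parent[c] = p
    norms = []
    for n in range(1, tree.node_count):   # every node except the root
        diff = tree.value[n, 0, 0] - tree.value[parent[n], 0, 0]
        norms.append(np.sqrt(tree.n_node_samples[n] / N) * abs(diff))
    c = np.sort(np.asarray(norms))[::-1]
    c = c[c > 0]
    k = np.arange(1, len(c) + 1)
    slope = np.polyfit(np.log(k), np.log(c), 1)[0]
    return -slope  # larger value <=> faster decay <=> smoother representation

rng = np.random.default_rng(0)
X, y = make_blobs(n_samples=2000, centers=5, n_features=20, random_state=0)
for i, H in enumerate(layer_features(X, widths=[64, 64, 64], rng=rng), 1):
    print(f"layer {i}: smoothness proxy = {weak_smoothness_proxy(H, y.astype(float)):.3f}")

On a trained network the abstract's claim would show up as this proxy increasing with layer depth, and decreasing as label noise is injected into the training set.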

Function space analysis of deep learning representation layers
Oren Elisha, Shai Dekel
https://arxiv.org/pdf/1710.03263v1.pdf