Function space analysis of deep learning representation layers

In this paper we propose a function space approach to Representation Learning and to the analysis of the representation layers in deep learning architectures. We show how to compute a weak-type Besov smoothness index that quantifies the geometry of the clustering in the feature space. This approach has already been applied successfully to improve the performance of machine learning algorithms such as the Random Forest and tree-based Gradient Boosting. Our experiments demonstrate that in well-known, well-performing trained networks, the Besov smoothness of the training set, measured in the corresponding hidden layer feature map representation, increases from layer to layer. We also contribute to the understanding of generalization by showing how the Besov smoothness of the representations decreases as we add more mislabeling to the training data. We hope this approach will contribute to the demystification of some aspects of deep learning.
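To give a rough idea of how a weak-type Besov smoothness index can be estimated from a layer's feature map, here is a minimal sketch. It is a hypothetical simplification of the tree-wavelet machinery the paper builds on: greedily partition the feature space with axis-aligned splits, record for each node a "wavelet" norm (the jump of the node's label mean from its parent, scaled by the square root of the node size), sort the norms in decreasing order, and read off a smoothness proxy from the decay exponent of that sorted sequence via a log-log fit. The function names, split criterion, and the exponent estimator are all illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def tree_wavelet_norms(X, y, depth=8, min_size=4):
    """Greedily partition the feature space with axis-aligned splits
    (minimizing within-node label variance) and, for each child node,
    record the 'wavelet' norm  |mean(child) - mean(parent)| * sqrt(|child|).
    This is a hypothetical simplification of the tree-wavelet
    decomposition the paper's smoothness analysis is based on."""
    norms = []

    def split(idx, d):
        if d == 0 or len(idx) < 2 * min_size:
            return
        parent_mean = y[idx].mean()
        best = None
        for j in range(X.shape[1]):
            order = idx[np.argsort(X[idx, j])]
            for cut in range(min_size, len(order) - min_size):
                left, right = order[:cut], order[cut:]
                sse = y[left].var() * len(left) + y[right].var() * len(right)
                if best is None or sse < best[0]:
                    best = (sse, left, right)
        if best is None:
            return
        _, left, right = best
        for child in (left, right):
            norms.append(abs(y[child].mean() - parent_mean) * np.sqrt(len(child)))
            split(child, d - 1)

    split(np.arange(len(y)), depth)
    return np.array(norms)

def besov_smoothness_proxy(X, y, **kw):
    """Weak-type smoothness proxy: the decay exponent of the sorted
    wavelet norms, estimated by a log-log least-squares fit.
    Faster decay (larger value) indicates a 'smoother', better
    clustered representation of the labels in this feature space."""
    norms = np.sort(tree_wavelet_norms(X, y, **kw))[::-1]
    norms = norms[norms > 1e-12]          # drop numerically zero coefficients
    k = np.arange(1, len(norms) + 1)
    slope, _ = np.polyfit(np.log(k), np.log(norms), 1)
    return -slope
```

Applied layer by layer, one would feed each hidden layer's activations in as `X` with the training labels as `y`; the paper's experiments report that this kind of smoothness index increases through the layers of well-trained networks and drops as label noise is added.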
by Oren Elisha, Shai Dekel
https://arxiv.org/pdf/1710.03263v1.pdf