Machine Learning

Learning More Universal Representations for Transfer-Learning





    Transfer learning is commonly used to address the prohibitive need for annotated data when classifying visual content with a Convolutional Neural Network (CNN). We address the problem of the universality of CNN-based image representations in this context. The state of the art consists in diversifying the source problem on which the CNN is learned; this reduces the cost of the target problem but still requires a large amount of effort to satisfy the source problem's need for annotated data. We propose a unified framework for methods that improve universality by diversifying the source problem. We also propose two methods that improve universality while paying special attention to limiting the need for annotated data. Finally, we propose a new evaluation protocol to compare the ability of CNN-based representations to tackle the problem of universality. It demonstrates the value of our work on 10 publicly available benchmarks covering a variety of visual classification problems.
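    The transfer-learning regime the abstract describes can be sketched in miniature: a representation trained on a source problem is frozen, and only a lightweight classifier is trained on the few annotated target examples. The sketch below is purely illustrative and is not the authors' method; it stands in a frozen random projection for the pretrained CNN and uses synthetic data, so all names and numbers are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for a pretrained "universal" feature extractor:
    # a frozen random projection followed by a ReLU. In the paper's setting
    # this would be a CNN trained on a diversified source problem.
    W_frozen = rng.normal(size=(64, 16))

    def extract_features(x):
        # Frozen representation: its weights are never updated on the target task.
        return np.maximum(x @ W_frozen, 0.0)

    # Tiny synthetic target task with few annotated examples (the low-annotation
    # regime transfer learning targets): 2 classes, 40 labeled 64-dim inputs.
    centers = rng.normal(size=(2, 64))
    X = np.vstack([centers[c] + 0.5 * rng.normal(size=(20, 64)) for c in (0, 1)])
    y = np.repeat([0, 1], 20)

    # Train only a linear classifier (a "linear probe") on the frozen features.
    F = extract_features(X)
    w = np.zeros(F.shape[1])
    b = 0.0
    for _ in range(200):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid predictions
        grad = p - y                              # logistic-loss gradient
        w -= 0.1 * F.T @ grad / len(y)
        b -= 0.1 * grad.mean()

    acc = ((F @ w + b > 0).astype(int) == y).mean()
    print(f"linear-probe accuracy: {acc:.2f}")
    ```

    The key point the sketch illustrates: because the representation is reused rather than relearned, only a small linear classifier's worth of parameters must be fit from the target annotations.
    
    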

    Learning More Universal Representations for Transfer-Learning
    by Youssef Tamaazousti, Hervé Le Borgne, Céline Hudelot, Mohamed El Amine Seddik, Mohamed Tamaazousti
    https://arxiv.org/pdf/1712.09708v1.pdf
