Machine Learning

Improved Training for Self-Training




  • arXiv

    Improved Training for Self-Training

    Labeled data sets can be hard to gather for some tasks, so we tackle the problem of insufficient training data. We examine methods for learning from unlabeled data after an initial training phase on a limited labeled data set. The suggested approach can also be used as an online learning method on the unlabeled test set. In the general classification task, whenever the model predicts a label with high enough confidence, we treat that prediction as a true label and train on the example accordingly. For the semantic segmentation task, a classic example of an expensive data-labeling process, we do so pixel-wise. Our approaches were applied to the MNIST data set as a proof of concept for a vision classification task, and to the ADE20K data set in order to tackle the semi-supervised semantic segmentation problem.
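    The core loop the abstract describes — predict on unlabeled data, and whenever the confidence clears a threshold, treat the prediction as a true label and add it to the training set — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the nearest-centroid model, the softmax-over-distances confidence proxy, and the `threshold=0.8` value are all assumptions made here to keep the example self-contained (the paper works with neural networks on MNIST and ADE20K).

```python
import numpy as np

def self_train(X_labeled, y_labeled, X_unlabeled, threshold=0.8, n_classes=2):
    """Pseudo-labeling sketch: grow the labeled set with confident predictions."""
    X, y = X_labeled.copy(), y_labeled.copy()
    for x in X_unlabeled:
        # Toy stand-in model: nearest class centroid (assumption, not the
        # paper's model). Recomputed each step, i.e. online self-training.
        centroids = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
        dists = np.linalg.norm(centroids - x, axis=1)
        # Softmax over negative distances as a confidence proxy.
        probs = np.exp(-dists) / np.exp(-dists).sum()
        c = int(probs.argmax())
        if probs[c] >= threshold:
            # Treat the confident prediction as a true label and train on it:
            # the example joins the labeled set for all later iterations.
            X = np.vstack([X, x])
            y = np.append(y, c)
        # Low-confidence examples are simply skipped.
    return X, y
```

    For segmentation the same rule would be applied per pixel: threshold each pixel's class-probability map and back-propagate the loss only on the confident pixels. Ambiguous examples (like a point equidistant from both centroids above) never enter the training set, which is what keeps self-training from reinforcing its own uncertain guesses.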

    Improved Training for Self-Training
    by Gal Hyams, Daniel Greenfeld, Dor Bank
    https://arxiv.org/pdf/1710.00209v1.pdf
