Machine Learning

Non-Parametric Transformation Networks



  • arXiv

    ConvNets have been very effective in many applications where learning invariance to within-class nuisance transformations is required. However, by their architecture, ConvNets enforce invariance only to translation. In this paper, we introduce a new class of convolutional architectures called Non-Parametric Transformation Networks (NPTNs), which can learn general invariances and symmetries directly from data. NPTNs are a direct and natural generalization of ConvNets and can be optimized with standard gradient descent. They make no assumptions about the structure of the invariances present in the data and are in that respect highly flexible and powerful. We also model ConvNets and NPTNs under a unified framework called Transformation Networks, which establishes the natural connection between the two. We demonstrate the efficacy of NPTNs on natural data such as MNIST and CIFAR-10, where they outperform ConvNet baselines with the same number of parameters. We show they are effective at learning invariances unknown a priori directly from data, from scratch. Finally, we apply NPTNs to Capsule Networks and show that this enables them to perform even better.
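    To make the "general invariances" claim concrete: in an NPTN, each input channel is convolved with a set of |G| learned filters (the transformation set), a max over that set provides the invariance, and a mean over input channels forms one output channel. The sketch below is a minimal NumPy illustration of that pooling stage only, assuming the convolution responses are already computed; `nptn_pool` is a hypothetical helper name, not code from the paper.

    ```python
    import numpy as np

    def nptn_pool(responses):
        """Sketch of the NPTN pooling stage for a single output channel.

        responses: array of shape (in_channels, G, H, W) -- the feature
                   maps produced by convolving each input channel with
                   its |G| learned filters (assumed precomputed).
        returns:   array of shape (H, W).
        """
        invariant = responses.max(axis=1)  # max over the |G| transformed filters
        return invariant.mean(axis=0)      # mean across input channels

    # Toy example: 2 input channels, |G| = 3 filters each, 4x4 feature maps.
    rng = np.random.default_rng(0)
    resp = rng.normal(size=(2, 3, 4, 4))
    out = nptn_pool(resp)
    print(out.shape)  # (4, 4)
    ```

    Note the contrast with a plain ConvNet layer, which has |G| = 1 and therefore no max stage: the max over a learned, unconstrained filter set is what lets the network pick up invariances beyond translation from the data itself.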

    Non-Parametric Transformation Networks
    by Dipan K. Pal, Marios Savvides
    https://arxiv.org/pdf/1801.04520v1.pdf
