Machine Learning

Gradient Regularization Improves Accuracy of Discriminative Models


Regularizing the gradient norm of the output of a neural network with respect to its inputs is a powerful technique, first proposed by Drucker & LeCun (1991), who named it Double Backpropagation. The idea has been independently rediscovered several times since then, most often with the goal of making models robust against adversarial sampling. This paper presents evidence that gradient regularization can consistently and significantly improve classification accuracy on vision tasks, especially when the amount of training data is small. We introduce our regularizers as members of a broader class of Jacobian-based regularizers, and compare them theoretically and empirically. A straightforward objection to minimizing the gradient norm at the training points is that a locally optimal solution, where the model has small gradients at the training points, may nevertheless change sharply in other regions of the input space. We demonstrate through experiments on real and synthetic tasks that stochastic gradient descent is unable to find these locally optimal but globally unproductive solutions. Instead, it is forced to find solutions that generalize well.

Gradient Regularization Improves Accuracy of Discriminative Models
by Dániel Varga, Adrián Csiszárik, Zsolt Zombori
https://arxiv.org/pdf/1712.09936v1.pdf
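
As a rough illustration of the technique described in the abstract, here is a minimal sketch of the classical double backpropagation penalty (the squared norm of the loss gradient with respect to the inputs), written in PyTorch. The library choice, the toy two-layer model, the penalty weight lambda_gp, and the data shapes are assumptions made for illustration only; the paper studies a broader family of Jacobian-based regularizers and its exact formulations may differ.

# Minimal sketch of gradient-norm (double backpropagation) regularization.
# All hyperparameters, the model, and the shapes below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
lambda_gp = 0.01  # assumed regularization strength

def training_step(x, y):
    x = x.requires_grad_(True)            # track gradients w.r.t. the inputs
    logits = model(x)
    ce_loss = F.cross_entropy(logits, y)

    # Gradient of the loss w.r.t. the inputs; create_graph=True keeps this
    # gradient differentiable so the penalty itself can be backpropagated
    # (the "double" backward pass).
    input_grads, = torch.autograd.grad(ce_loss, x, create_graph=True)
    grad_penalty = input_grads.pow(2).sum(dim=1).mean()  # mean squared L2 norm over the batch

    loss = ce_loss + lambda_gp * grad_penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data (shapes assumed for illustration):
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(training_step(x, y))

The create_graph=True flag is the key detail: it makes the input gradient part of the computation graph, so the second backward pass can push the model toward small gradient norms at the training points.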
