Understanding Generalization and Stochastic Gradient Descent

This paper tackles two related questions at the heart of machine learning: how can we predict whether a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? The work is inspired by Zhang et al. (2017), who showed that deep networks can easily memorize randomly labeled training data, despite generalizing well when trained on real labels for the same inputs. The authors show that the same phenomenon occurs in small linear models, and explain these observations by evaluating the Bayesian evidence in favor of each model, which penalizes sharp minima. They then explore the "generalization gap" between small- and large-batch training, identifying an optimum batch size that maximizes test-set accuracy: noise in the gradient updates is beneficial, driving the dynamics toward robust minima for which the evidence is large. Interpreting stochastic gradient descent as a stochastic differential equation, they predict that the optimum batch size is proportional to both the learning rate and the size of the training set, and verify these predictions empirically.
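The scaling prediction in the last sentence can be sketched numerically. The snippet below is an illustration (not the authors' code), assuming the approximate gradient-noise scale g = epsilon * (N / B - 1) from the paper's stochastic-differential-equation analysis; the particular values of epsilon, N, and g_opt are made up for the example.

```python
# Illustrative sketch, assuming the SGD noise scale g = epsilon * (N / B - 1),
# where epsilon is the learning rate, N the training-set size, B the batch size.
# Holding g at a fixed optimum g_opt implies B_opt grows with epsilon and N.

def noise_scale(epsilon, N, B):
    """Approximate gradient-noise scale g = epsilon * (N / B - 1)."""
    return epsilon * (N / B - 1)

def optimal_batch_size(epsilon, N, g_opt):
    """Batch size keeping the noise scale at g_opt: B = epsilon * N / (g_opt + epsilon)."""
    return epsilon * N / (g_opt + epsilon)

# Doubling the learning rate roughly doubles the predicted optimum batch size
# (hypothetical numbers: N = 50,000 examples, g_opt = 50).
B1 = optimal_batch_size(0.1, 50_000, g_opt=50.0)
B2 = optimal_batch_size(0.2, 50_000, g_opt=50.0)
print(round(B1), round(B2))
```

For B much smaller than N the noise scale reduces to g ≈ epsilon * N / B, which makes the proportionality B_opt ∝ epsilon * N immediate.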
by Samuel L. Smith, Quoc V. Le
https://arxiv.org/pdf/1710.06451v1.pdf