Machine Learning

Generalization Error Bounds for Noisy, Iterative Algorithms


    In statistical learning theory, generalization error is used to quantify the degree to which a supervised machine learning algorithm may overfit to training data. Recent work [Xu and Raginsky (2017)] has established a bound on the generalization error of empirical risk minimization based on the mutual information $I(S;W)$ between the algorithm input $S$ and the algorithm output $W$, when the loss function is sub-Gaussian. We leverage these results to derive generalization error bounds for a broad class of iterative algorithms that are characterized by bounded, noisy updates with Markovian structure. Our bounds are very general and are applicable to numerous settings of interest, including stochastic gradient Langevin dynamics (SGLD) and variants of the stochastic gradient Hamiltonian Monte Carlo (SGHMC) algorithm. Furthermore, our error bounds hold for any output function computed over the path of iterates, including the last iterate of the algorithm or the average of subsets of iterates, and also allow for non-uniform sampling of data in successive updates of the algorithm.
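
    For reference, the Xu and Raginsky (2017) result that the abstract builds on states that if the loss $\ell(w, Z)$ is $\sigma$-sub-Gaussian for every $w$, then the expected generalization error of an algorithm with input $S$ (a sample of $n$ points) and output $W$ satisfies $\bigl|\mathbb{E}[L_\mu(W) - L_S(W)]\bigr| \le \sqrt{2\sigma^2 I(S;W)/n}$, where $L_\mu$ and $L_S$ denote the population and empirical risks.

    As a minimal sketch of the kind of bounded, noisy, Markovian iteration the abstract describes, here is an SGLD-style update in Python. The names (sgld_step, grad_fn, eta, beta) and the toy quadratic loss are illustrative assumptions, not the paper's notation or experiments.

        import numpy as np

        def sgld_step(w, grad_fn, z, eta, beta, rng):
            # One SGLD-style iterate: a gradient step on a single sampled data
            # point plus isotropic Gaussian noise, so the iterates form a
            # Markov chain driven by noisy updates.
            g = grad_fn(w, z)
            noise = rng.normal(size=w.shape)
            return w - eta * g + np.sqrt(2.0 * eta / beta) * noise

        # Toy usage with the loss f(w, z) = 0.5 * ||w - z||^2, whose gradient in w is (w - z).
        rng = np.random.default_rng(0)
        data = rng.normal(size=(100, 5))       # stand-in training sample S
        w = np.zeros(5)
        for t in range(1000):
            z = data[rng.integers(len(data))]  # data point sampled for this update
            w = sgld_step(w, lambda w, z: w - z, z, eta=0.01, beta=10.0, rng=rng)

    Because the output (the last iterate, or any function of the path such as an average of iterates) depends on the data $S$ only through these noisy updates, the mutual information $I(S;W)$ can be controlled iterate by iterate, which is, at a high level, how bounds of this type are derived.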

    Generalization Error Bounds for Noisy, Iterative Algorithms
    by Ankit Pensia, Varun Jog, Po-Ling Loh
    https://arxiv.org/pdf/1801.04295v1.pdf
