Machine Learning

Generalizing Hamiltonian Monte Carlo with Neural Networks

  • arXiv

    We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge to and mix quickly within their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106x improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. We release an open source TensorFlow implementation of the algorithm.

    Generalizing Hamiltonian Monte Carlo with Neural Networks
    by Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein
    https://arxiv.org/pdf/1711.09268v2.pdf
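
For a concrete picture of what the trained kernel generalizes, below is a minimal NumPy sketch of plain HMC with a leapfrog integrator on a 2-D Gaussian, together with an empirical estimate of expected squared jumped distance (ESJD), the mixing proxy the abstract mentions. This is not the paper's method or its TensorFlow release; the target distribution, step size, and number of leapfrog steps are illustrative assumptions.

# Minimal sketch (not the paper's L2HMC implementation): vanilla HMC with a
# leapfrog integrator on a 2-D standard Gaussian, plus an estimate of expected
# squared jumped distance (ESJD). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Target: standard 2-D Gaussian, so U(x) = 0.5 * ||x||^2 and grad U(x) = x.
def U(x):
    return 0.5 * np.dot(x, x)

def grad_U(x):
    return x

def leapfrog(x, p, step_size, n_steps):
    """Simulate Hamiltonian dynamics with the leapfrog integrator."""
    x, p = x.copy(), p.copy()
    p -= 0.5 * step_size * grad_U(x)      # half step for momentum
    for _ in range(n_steps - 1):
        x += step_size * p                # full step for position
        p -= step_size * grad_U(x)        # full step for momentum
    x += step_size * p
    p -= 0.5 * step_size * grad_U(x)      # final half step for momentum
    return x, -p                          # negate momentum for reversibility

def hmc_step(x, step_size=0.25, n_steps=10):
    p = rng.standard_normal(x.shape)      # resample momentum
    x_new, p_new = leapfrog(x, p, step_size, n_steps)
    # Metropolis-Hastings accept/reject on the joint (position, momentum) energy.
    current_H = U(x) + 0.5 * np.dot(p, p)
    proposed_H = U(x_new) + 0.5 * np.dot(p_new, p_new)
    if rng.random() < np.exp(current_H - proposed_H):
        return x_new, True
    return x, False

def run_chain(n_samples=2000):
    x = rng.standard_normal(2)
    samples, accepts = [], 0
    for _ in range(n_samples):
        x, accepted = hmc_step(x)
        samples.append(x)
        accepts += accepted
    return np.array(samples), accepts / n_samples

samples, accept_rate = run_chain()
# ESJD: mean squared distance between consecutive states of the chain.
esjd = np.mean(np.sum(np.diff(samples, axis=0) ** 2, axis=1))
print(f"acceptance rate ~ {accept_rate:.2f}, ESJD ~ {esjd:.3f}")

In the paper, the fixed leapfrog update above is replaced by update functions parameterized by deep neural networks, and the kernel is trained to maximize an ESJD-style objective rather than relying on hand-tuned step sizes.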
