Generalizing Hamiltonian Monte Carlo with Neural Networks

We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106x improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. We release an open-source TensorFlow implementation of the algorithm.
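The training objective mentioned in the abstract, expected squared jumped distance (ESJD), averages the squared length of each proposed move weighted by its acceptance probability. A minimal NumPy sketch of that quantity is below; this is an illustration of the basic ESJD idea only, not the paper's full objective (which adds further terms), and the function name and toy data are assumptions for the example.

```python
import numpy as np

def expected_squared_jumped_distance(x, x_proposed, accept_prob):
    """Illustrative sketch of expected squared jumped distance (ESJD).

    Averages the squared Euclidean move length over a batch of chain
    states, weighted by each proposal's acceptance probability. ESJD
    serves as a proxy for mixing speed: larger accepted moves mean
    faster exploration of the target distribution.
    """
    sq_jump = np.sum((x_proposed - x) ** 2, axis=-1)  # per-chain squared move length
    return np.mean(accept_prob * sq_jump)             # acceptance-weighted average

# Toy example (hypothetical data): a batch of 4 two-dimensional chain states.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))
x_prop = x + rng.normal(size=(4, 2))
p_accept = rng.uniform(size=4)
esjd = expected_squared_jumped_distance(x, x_prop, p_accept)
```

In the paper's setting this scalar (suitably extended) is maximized with respect to the neural-network parameters of the kernel, so larger accepted jumps are rewarded during training.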
by Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein
https://arxiv.org/pdf/1711.09268v2.pdf