
Deep Learning for Sampling from Arbitrary Probability Distributions





    This paper proposes a fully connected neural network model that maps samples from a uniform distribution to samples of any explicitly known probability density function. During training, the Jensen-Shannon divergence between the distribution of the model’s output and the target distribution is minimized. We experimentally demonstrate that our model converges towards the desired state. It provides an alternative to existing sampling methods such as inversion sampling, rejection sampling, Gaussian mixture models, and Markov chain Monte Carlo. Our model has high sampling efficiency and is easily applied to any probability distribution without the need for further analytical or numerical calculations. It can produce correlated samples, such that the output distribution converges faster towards the target than it would for independent samples. It can also produce independent samples if single values are fed into the network and the input values are themselves independent. We focus on one-dimensional sampling, but additionally illustrate a two-dimensional example with a target distribution of dependent variables.
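The core idea described in the abstract — a fully connected net that transforms uniform inputs into samples of a known density, trained by minimizing Jensen-Shannon divergence between the output distribution and the target — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the architecture, bandwidth, and the kernel-density-based differentiable JS estimate are choices made here for the example, and the target is an arbitrary standard normal.

```python
import torch

torch.manual_seed(0)

def target_pdf(x):
    # Example target: standard normal density (any explicitly known pdf works).
    return torch.exp(-0.5 * x ** 2) / (2 * torch.pi) ** 0.5

# Small fully connected net: uniform samples in -> candidate samples out.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

grid = torch.linspace(-4.0, 4.0, 200)   # evaluation grid for both densities
dx = grid[1] - grid[0]

def js_divergence(samples, bandwidth=0.25):
    # Differentiable Gaussian kernel density estimate of the model output,
    # so the JS divergence can be backpropagated through the samples.
    diff = grid[None, :] - samples                  # (batch, grid)
    q = torch.exp(-0.5 * (diff / bandwidth) ** 2).mean(dim=0)
    q = q / (q.sum() * dx)                          # normalize on the grid
    p = target_pdf(grid)
    p = p / (p.sum() * dx)
    m = 0.5 * (p + q)
    eps = 1e-12
    kl = lambda a, b: (a * torch.log((a + eps) / (b + eps))).sum() * dx
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    z = torch.rand(256, 1)              # uniform input samples
    loss = js_divergence(net(z))        # JS divergence of output vs. target
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, `net(torch.rand(n, 1))` yields approximate samples from the target density; feeding independent single inputs gives independent outputs, matching the abstract's remark about independence.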

    Deep Learning for Sampling from Arbitrary Probability Distributions
    by Felix Horger, Tobias Würfl, Vincent Christlein, Andreas Maier
    https://arxiv.org/pdf/1801.04211v1.pdf
