Machine Learning

Bayesian Alignments of Warped Multi-Output Gaussian Processes




We present a Bayesian extension to convolution processes which defines a representation between multiple functions by an embedding in a shared latent space. The proposed model allows for both arbitrary alignments of the inputs and also non-parametric output warpings to transform the observations. This gives rise to multiple deep Gaussian process models connected via latent generating processes. We derive an efficient variational approximation based on nested variational compression and show how the model can be used to extract shared information between dependent time series, recovering an interpretable functional decomposition of the learning problem.
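The abstract describes a compositional generative structure: each output is a warped view g_d of a shared latent process f, evaluated at aligned inputs a_d(x). Below is a minimal NumPy sketch that samples from such a model to illustrate the decomposition; it is not the paper's implementation. The helper names (rbf_kernel, sample_gp), kernel choices, lengthscales, and the tanh warping (standing in for the paper's non-parametric GP output warping) are all assumptions for illustration.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def sample_gp(x, lengthscale, variance, rng, jitter=1e-8):
    """Draw one function sample from a zero-mean GP prior at inputs x."""
    K = rbf_kernel(x, x, lengthscale, variance) + jitter * np.eye(len(x))
    return rng.multivariate_normal(np.zeros(len(x)), K)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 100)

# Shared latent generating process f, sampled on a dense grid so it can be
# interpolated at the aligned inputs of each output.
grid = np.linspace(-2, 7, 400)
f_grid = sample_gp(grid, lengthscale=1.0, variance=1.0, rng=rng)

outputs = []
for d in range(2):  # two dependent outputs sharing the latent process f
    # Alignment a_d: identity plus a smooth GP-distributed input warping.
    a = x + sample_gp(x, lengthscale=2.0, variance=0.05, rng=rng)
    f_at_a = np.interp(a, grid, f_grid)  # shared signal at aligned inputs
    # Output warping g_d: a fixed nonlinearity here; the paper uses a
    # non-parametric (GP-based) warping instead.
    y = np.tanh(f_at_a) + 0.05 * rng.standard_normal(len(x))
    outputs.append(y)
```

Sampling the alignments and the shared function from GP priors is what makes each output a (shallow) deep GP: a GP composed with a GP, tied to the other outputs through the common f.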

Bayesian Alignments of Warped Multi-Output Gaussian Processes
by Markus Kaiser, Clemens Otte, Thomas Runkler, Carl Henrik Ek
https://arxiv.org/pdf/1710.02766v1.pdf
