Machine Learning

Learning Latent Representations for Speech Generation and Transformation

  • arXiv

    The ability to model a generative process and learn a latent representation for speech in an unsupervised fashion will be crucial for processing vast quantities of unlabelled speech data. Recently, deep probabilistic generative models such as Variational Autoencoders (VAEs) have achieved tremendous success in modeling natural images. In this paper, we apply a convolutional VAE to model the generative process of natural speech. We derive latent space arithmetic operations to disentangle learned latent representations. We demonstrate the capability of our model to modify the phonetic content or the speaker identity of speech segments using the derived operations, without the need for parallel supervisory data.
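
    The abstract does not spell out the latent space arithmetic, but a common way to realize such operations with a VAE is attribute-vector arithmetic: average the latent codes of segments sharing an attribute (e.g. a given speaker), take the difference between two such means, and add that difference to a segment's latent code before decoding. The sketch below illustrates only this general idea; encode and decode are hypothetical placeholders standing in for the paper's convolutional VAE networks, not its actual implementation.

        import numpy as np

        # Hypothetical latent dimensionality; placeholder encoder/decoder so the
        # sketch runs standalone (the paper would use the convolutional VAE here).
        LATENT_DIM = 64
        rng = np.random.default_rng(0)

        def encode(segment):
            # Placeholder for the VAE inference network: returns a latent code.
            return rng.standard_normal(LATENT_DIM)

        def decode(z):
            # Placeholder for the VAE generation network: returns dummy speech features.
            return np.tanh(z)

        def attribute_vector(segments_a, segments_b):
            # Difference of mean latent codes between two groups of segments,
            # e.g. segments from speaker A vs. segments from speaker B.
            mu_a = np.mean([encode(s) for s in segments_a], axis=0)
            mu_b = np.mean([encode(s) for s in segments_b], axis=0)
            return mu_b - mu_a

        # Toy stand-ins for two speakers' segments (contents irrelevant to the stubs).
        speaker_a_segments = [None] * 10
        speaker_b_segments = [None] * 10

        # Latent space arithmetic: shift one segment's latent code along the
        # speaker direction to change speaker identity while leaving the rest
        # of the code (e.g. phonetic content) untouched.
        v_speaker = attribute_vector(speaker_a_segments, speaker_b_segments)
        z = encode(speaker_a_segments[0])
        converted = decode(z + v_speaker)
        print(converted.shape)  # same shape as the latent placeholder output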

    Learning Latent Representations for Speech Generation and Transformation
    by Wei-Ning Hsu, Yu Zhang, James Glass
    https://arxiv.org/pdf/1704.04222v2.pdf
