Machine Learning

Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models



  • arXiv


    Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By learning latent constraints post hoc (value functions that identify regions in latent space whose samples decode to outputs with desired attributes), we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space needed to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function. Code, with a dedicated cloud instance, has been made publicly available at https://goo.gl/STGMGx.
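    The gradient-based sampling idea above can be sketched in a few lines. This is a toy stand-in, not the paper's implementation: the learned attribute critic is replaced by a hypothetical quadratic score peaked at a chosen target latent, and the realism constraint by the log-density of the standard-normal VAE prior. Starting from a prior sample, we ascend the combined objective to land in a latent region satisfying both constraints.

```python
import numpy as np

# Hypothetical target latent; stands in for the region a trained attribute
# critic would assign high value (an assumption for illustration only).
target = np.array([2.0, -1.0])

def attribute_value(z):
    # Toy attribute critic: higher when z is near the target region.
    return -np.sum((z - target) ** 2)

def realism_value(z):
    # Stand-in realism constraint: log-density of the N(0, I) prior (up to a constant).
    return -0.5 * np.sum(z ** 2)

def grad_attribute(z):
    # Analytic gradient of the toy attribute critic.
    return -2.0 * (z - target)

def grad_realism(z):
    # Analytic gradient of the prior log-density.
    return -z

def constrained_sample(steps=200, lr=0.05, realism_weight=0.5, seed=0):
    """Start from a prior sample and take gradient-ascent steps on the
    combined attribute + realism objective, mimicking the paper's
    gradient-based (non-amortized) sampling variant."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(2)
    for _ in range(steps):
        z = z + lr * (grad_attribute(z) + realism_weight * grad_realism(z))
    return z
```

    With these quadratic stand-ins the fixed point is available in closed form (z* = 2·target / (2 + realism_weight)), so the optimizer's output is easy to check; with real critics, z would instead be decoded by the VAE after optimization.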

    Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models
    by Jesse Engel, Matthew Hoffman, Adam Roberts
    https://arxiv.org/pdf/1711.05772v2.pdf
