Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models

Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions that identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal "realism" constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function. Code with dedicated cloud instance has been made publicly available (https://goo.gl/STGMGx).
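The core idea in the abstract — gradient-based sampling from latent regions identified by a learned value function, regularized by a realism term — can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the attribute critic here is a hand-set logistic function over a toy 2-D latent space, and the realism constraint is approximated by the log-density of the standard-normal prior; the weights `w`, `b`, the step size, and the realism weight are all illustrative assumptions.

```python
import numpy as np

# Hypothetical attribute critic g(z) = sigmoid(w.z + b): high values mark
# the latent region that (in the real method) decodes to the desired attribute.
w = np.array([2.0, -1.0])
b = -0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attribute_grad(z):
    # Gradient of log g(z) with respect to z: (1 - g(z)) * w
    return (1.0 - sigmoid(w @ z + b)) * w

def realism_grad(z, weight=0.1):
    # Crude stand-in for the paper's realism constraint: gradient of
    # weight * log N(z; 0, I), which pulls z back toward the prior.
    return -weight * z

# Gradient ascent on log g(z) + weight * log p(z), starting from the prior mean.
z = np.zeros(2)
lr = 0.5
for _ in range(200):
    z = z + lr * (attribute_grad(z) + realism_grad(z))

attribute_score = sigmoid(w @ z + b)  # should be close to 1 after optimization
```

In the actual method the critic is a neural network trained post hoc on latent codes of a fixed VAE, and an amortized actor can replace this per-sample optimization loop; the sketch only shows the optimization view.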
by Jesse Engel, Matthew Hoffman, Adam Roberts
https://arxiv.org/pdf/1711.05772v2.pdf