Topic Tag: Generative Adversarial Network


 Unsupervised Cipher Cracking Using Discrete GANs

 

This work details CipherGAN, an architecture inspired by CycleGAN used for inferring the underlying cipher mapping given banks of unpaired ciphertext and plaintext. We demonstrate that CipherGAN is capable of cracking language data enciphered using shift and Vigenere ciphers to a high degree of fid…
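
For concreteness, the two cipher families mentioned can be written out directly. The minimal Python sketch below implements the shift and Vigenère mappings themselves, i.e. the fixed character-level functions CipherGAN must recover from unpaired text banks, not the CipherGAN model; the keys and example strings are illustrative.

```python
# Illustrative only: the shift and Vigenere cipher mappings themselves,
# i.e. the fixed character-level functions CipherGAN is trained to invert
# from unpaired banks of ciphertext and plaintext (not the model).
import string

ALPHABET = string.ascii_uppercase

def shift_encipher(plaintext: str, shift: int) -> str:
    """Shift (Caesar) cipher: rotate every letter by a fixed offset."""
    return "".join(
        ALPHABET[(ALPHABET.index(c) + shift) % 26] if c in ALPHABET else c
        for c in plaintext.upper()
    )

def vigenere_encipher(plaintext: str, key: str) -> str:
    """Vigenere cipher: per-letter shift given by a repeating key."""
    key = key.upper()
    out, k = [], 0
    for c in plaintext.upper():
        if c in ALPHABET:
            shift = ALPHABET.index(key[k % len(key)])
            out.append(ALPHABET[(ALPHABET.index(c) + shift) % 26])
            k += 1
        else:
            out.append(c)
    return "".join(out)

print(shift_encipher("ATTACK AT DAWN", 3))              # DWWDFN DW GDZQ
print(vigenere_encipher("ATTACK AT DAWN", "LEMON"))     # LXFOPV EF RNHR
```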


 Gradient descent GAN optimization is locally stable

  

Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze the “gradient descent” form of GAN optimization i.e., the natural setting where we simultaneously take small gradient steps in bot…
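
To make the setting concrete, the sketch below runs simultaneous small gradient steps on a toy two-player objective f(x, y) = x * y, with one player descending and the other ascending; the toy game and step size are assumptions chosen for illustration, not the paper's analysis.

```python
# A toy illustration of simultaneous ("gradient descent") GAN-style updates:
# both players take a small gradient step at the current point on
# f(x, y) = x * y, with x descending and y ascending. The game and step size
# are illustrative assumptions, not the paper's setting.
from math import hypot

x, y = 1.0, 1.0     # stand-ins for generator and discriminator parameters
eta = 0.1           # shared step size

for _ in range(100):
    grad_x = y      # d f / d x
    grad_y = x      # d f / d y
    # Simultaneous updates: both gradients are evaluated before either
    # parameter moves.
    x, y = x - eta * grad_x, y + eta * grad_y

print("final point:", (round(x, 3), round(y, 3)))
print("distance from the equilibrium (0, 0):", round(hypot(x, y), 3))
```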


 On the convergence properties of GAN training

 

Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this note we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distribu…
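
This style of counterexample is often illustrated with a Dirac-GAN-type setup: a point-mass generator against a linear discriminator, so that neither distribution is absolutely continuous. The simulation below is an illustrative sketch of such dynamics under assumed losses and step sizes, not necessarily the exact construction in the note.

```python
# Illustrative Dirac-GAN-style dynamics: the generator is a point mass at
# theta, the data is a point mass at 0, and the discriminator is linear,
# D(x) = psi * x, so neither distribution is absolutely continuous.
# Objective: L(theta, psi) = f(psi * theta) + f(0), f(t) = -log(1 + exp(-t));
# theta descends on L, psi ascends. Details are assumed for illustration.
import math

def fprime(t):
    # derivative of f(t) = -log(1 + exp(-t))
    return 1.0 / (1.0 + math.exp(t))

theta, psi = 1.0, 1.0
h = 0.1
for _ in range(200):
    g = fprime(psi * theta)
    theta, psi = theta - h * g * psi, psi + h * g * theta

# The iterates circle the equilibrium (0, 0) instead of approaching it.
print("after 200 steps: theta=%.3f, psi=%.3f, radius=%.3f"
      % (theta, psi, math.hypot(theta, psi)))
```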


 Demystifying MMD GANs

 

We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators …
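
The quantity at the heart of an MMD GAN critic is the squared Maximum Mean Discrepancy estimated from samples. The numpy sketch below computes the standard unbiased MMD^2 estimator with a Gaussian kernel; the kernel choice, bandwidth, and toy data are assumptions for illustration, not the paper's configuration.

```python
# A small numpy sketch of the unbiased squared-MMD estimator with a Gaussian
# kernel: the statistic an MMD GAN critic is built around. Kernel choice and
# bandwidth here are illustrative assumptions.
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * bandwidth^2))."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased estimate of MMD^2 between samples x ~ P and y ~ Q."""
    m, n = len(x), len(y)
    kxx = gaussian_kernel(x, x, bandwidth)
    kyy = gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    # Drop diagonal terms so the within-sample averages are unbiased.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(2.0, 1.0, size=(200, 2)))
print(f"MMD^2 same distribution: {same:.4f}, shifted distribution: {diff:.4f}")
```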


 GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

   

Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient des…
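
The core of TTUR is simply that the discriminator and generator are updated with different learning rates. Below is a minimal PyTorch sketch of that idea; the tiny models, random "real" batch, and the particular learning rates are illustrative assumptions, not values from the paper.

```python
# Minimal PyTorch sketch of a two time-scale update rule: separate optimizers
# with different learning rates for the discriminator and the generator.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 2, 128
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

# Two time scales: the discriminator steps faster than the generator.
opt_D = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.5, 0.999))
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

real = torch.randn(batch, data_dim)          # stand-in for a real data batch
ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

for step in range(3):
    # Discriminator update (fast time scale).
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real), ones) + bce(D(fake), zeros)
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator update (slow time scale).
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), ones)
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```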


 Comparative Study on Generative Adversarial Networks

 

In recent years, there have been tremendous advancements in the field of machine learning. These advancements have come from both academic and industrial research. Lately, a fair amount of research has been dedicated to the usage of generative models in the field of computer vision a…


 Generating Multi-label Discrete Patient Records using Generative Adversarial Networks

  

Access to electronic health record (EHR) data has motivated computational advances in medical research. However, various concerns, particularly over privacy, can limit access to and collaborative use of EHR data. Sharing synthetic EHR data could mitigate risk. In this paper, we propose a new approa…


 TFGAN: A Lightweight Library for Generative Adversarial Networks

     

Posted by Joel Shor, Senior Software Engineer, Machine Perception (Crossposted on the Google Open Source Blog) Training a neural network usually involves defining a loss function, which tells the network how close or far it is from its objective. For example, image classification networks are often…
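
The post's opening point (a loss function scores how far a network's output is from its target) can be shown in a few lines. The numpy cross-entropy below is a generic classification example for illustration only; it is not the TFGAN API.

```python
# A loss function in miniature: classification cross-entropy in plain numpy.
# Generic illustration only, not TFGAN-specific code.
import numpy as np

def softmax(logits):
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    """Mean negative log-probability assigned to the true class."""
    probs = softmax(logits)
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 3.0]])
labels = np.array([0, 2])
print(f"loss: {cross_entropy(logits, labels):.4f}")  # small when predictions match labels
```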


 Topic Compositional Neural Language Model

  

We propose a Topic Compositional Neural Language Model (TCNLM), a novel method designed to simultaneously capture both the global semantic meaning and the local word ordering structure in a document. The TCNLM learns the global semantic coherence of a document via a neural topic model, and the prob…


 Wasserstein Distributional Robustness and Regularization in Statistical Learning

 

A central question in statistical learning is to design algorithms that not only perform well on training data, but also generalize to new and unseen data. In this paper, we tackle this question by formulating a distributionally robust stochastic optimization (DRSO) problem, which seeks a solution …
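
In a commonly used Wasserstein-ambiguity-set form, the DRSO problem described here can be written as below; the notation (radius rho, loss ell, empirical distribution P-hat) follows the standard formulation and is assumed rather than quoted from the paper.

```latex
% A standard Wasserstein-DRSO formulation (notation assumed, not quoted from
% the paper): minimize the worst-case expected loss over all distributions
% within Wasserstein radius \rho of the empirical distribution \widehat{P}_n.
\[
\min_{\theta \in \Theta} \;
\sup_{P \,:\, W(P,\, \widehat{P}_n) \le \rho} \;
\mathbb{E}_{\xi \sim P}\bigl[\, \ell(\theta; \xi) \,\bigr]
\]
```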


 Online Monotone Games

  

Algorithmic game theory (AGT) focuses on the design and analysis of algorithms for interacting agents, with interactions rigorously formalized within the framework of games. Results from AGT find applications in domains such as online bidding auctions for web advertisements and network routing prot…


 Replacement AutoEncoder: A Privacy-Preserving Algorithm for Sensory Data Analysis

  

An increasing number of sensors on mobile, Internet of things (IoT), and wearable devices generate time-series measurements of physical activities. Though access to the sensory data is critical to the success of many beneficial applications such as health monitoring or activity recognition, a wide …


 Mixed Precision Training

  

Deep neural networks have enabled progress in a wide variety of applications. Growing the size of the neural network typically results in improved accuracy. As model sizes grow, the memory and compute requirements for training these models also increase. We introduce a technique to train deep neur…
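
The key ingredients, running most of the forward and backward pass in half precision while applying loss scaling so small gradients do not underflow, can be sketched with PyTorch's AMP utilities. This illustrates the general technique using PyTorch's API, not the paper's exact recipe; it assumes a CUDA GPU for the float16 path and falls back to full precision otherwise.

```python
# Hedged sketch of mixed precision training with PyTorch AMP: the forward and
# backward passes run in float16 where numerically safe, and a gradient scaler
# applies loss scaling to keep small gradients from underflowing.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)   # loss scaling
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 512, device=device)
y = torch.randint(0, 10, (64,), device=device)

for step in range(3):
    optimizer.zero_grad()
    # Cast eligible ops to float16 during the forward pass (no-op on CPU here).
    with torch.autocast(device_type=device, enabled=use_amp):
        loss = loss_fn(model(x), y)
    # Scale the loss before backprop; the scaler unscales before the step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```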


 DeepMasterPrint: Generating Fingerprints for Presentation Attacks

  

We present two related methods for creating MasterPrints, synthetic fingerprints that are capable of spoofing multiple people’s fingerprints. These methods achieve results that advance the state-of-the-art for single MasterPrint attack accuracy while being the first methods capable of creatin…


 Unsupervised Image-to-Image Translation Networks

 

Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can yield the given marginal distributions, one could i…


 Improving image generative models with human interactions

 

GANs provide a framework for training generative models which mimic a data distribution. However, in many cases we wish to train these generative models to optimize some auxiliary objective function within the data it generates, such as making more aesthetically pleasing images. In some cases, thes…


 Generative Adversarial Mapping Networks

   

Generative Adversarial Networks (GANs) have shown impressive performance in generating photo-realistic images. They fit generative models by minimizing a certain distance measure between the real image distribution and the generated data distribution. Several distance measures have been used, such as…
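
One commonly cited example of such a distance measure is the Jensen-Shannon divergence, which the original GAN objective implicitly minimizes. The numpy sketch below computes it for two small discrete distributions, purely for intuition; the abstract's full list of measures is truncated above.

```python
# Jensen-Shannon divergence between two discrete distributions: one of the
# distance measures GAN-style objectives minimize. Illustrative values only.
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(p, q):
    """JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m the mixture."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
print(f"JS(p, q) = {js_divergence(p, q):.4f}")   # zero only when p == q
```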


 On the regularization of Wasserstein GANs

 

Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data. Convergence problems during training are overcome by Wasserstein GANs which minimize the distance between the model and the empirical distribut…
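
For context, the most widely used regularizer in this line of work is a gradient penalty on the critic, computed at points interpolated between real and generated samples. The PyTorch sketch below shows the standard two-sided (WGAN-GP-style) penalty for illustration; the paper itself analyzes and refines penalties of this kind rather than prescribing exactly this form.

```python
# Standard two-sided gradient penalty for a Wasserstein critic (WGAN-GP style),
# shown only as an illustration of the kind of regularizer the paper studies.
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake, weight=10.0):
    eps = torch.rand(real.size(0), 1)                    # random mixing weights
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=interp, create_graph=True
    )
    grad_norm = grads.norm(2, dim=1)
    return weight * ((grad_norm - 1.0) ** 2).mean()      # push gradient norms toward 1

critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
real = torch.randn(32, 2)
fake = torch.randn(32, 2)
penalty = gradient_penalty(critic, real, fake)
penalty.backward()                                       # differentiable, so it can be added to the critic loss
print(f"gradient penalty: {penalty.item():.4f}")
```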


 Long Text Generation via Adversarial Training with Leaked Information

     

Automatically generating coherent and semantically meaningful text has many applications in machine translation, dialogue systems, image captioning, etc. Recently, by combining with policy gradient, Generative Adversarial Nets (GAN) that use a discriminative model to guide the training of the gener…


 Statistical Parametric Speech Synthesis Incorporating Generative Adversarial Networks

  

A method for statistical parametric speech synthesis incorporating generative adversarial networks (GANs) is proposed. Although powerful deep neural network (DNN) techniques can be applied to artificially synthesize speech waveforms, the synthetic speech quality is low compared with that of natura…


 Hierarchical Detail Enhancing Mesh-Based Shape Generation with 3D Generative Adversarial Network

 

Automatic mesh-based shape generation is of great interest across a wide range of disciplines, from industrial design to gaming, computer graphics and various other forms of digital art. While most traditional methods focus on primitive based model generation, advances in deep learning made it poss…


 MMGAN: Manifold Matching Generative Adversarial Network

 

Generative adversarial networks (GANs) are considered a fundamentally different type of generative model. However, it is well known that GANs are very hard to train, and many different techniques have been proposed to stabilize their training procedures. In this paper, we propose a novel t…


 Class-Splitting Generative Adversarial Networks

  

Generative Adversarial Networks (GANs) produce systematically better quality samples when class label information is provided, i.e., in the conditional GAN setup. This is still observed for the recently proposed Wasserstein GAN formulation, which stabilized adversarial training and allows considerin…


 Triangle Generative Adversarial Networks

 

A Triangle Generative Adversarial Network ($\Delta$-GAN) is developed for semi-supervised cross-domain joint distribution matching, where the training data consists of samples from each domain, and supervision of domain correspondence is provided by only a few paired samples. $\Delta$-GAN consists of…


 Summable Reparameterizations of Wasserstein Critics in the One-Dimensional Setting

Generative adversarial networks (GANs) are an exciting alternative to algorithms for solving density estimation problems—using data to assess how likely samples are to be drawn from the same distribution. Instead of explicitly computing these probabilities, GANs learn a generator that can mat…
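
In the one-dimensional setting of the title, the Wasserstein-1 distance between two equal-size empirical samples has a simple closed form: sort both samples and average the absolute differences of the matched order statistics. The numpy sketch below uses that fact for intuition; it is not the paper's reparameterization.

```python
# Closed-form Wasserstein-1 distance between two equal-size 1-D samples:
# match sorted samples and average the absolute differences. For intuition only.
import numpy as np

def wasserstein1_1d(x, y):
    """W1 between the empirical distributions of equal-size 1-D samples."""
    assert len(x) == len(y)
    return float(np.abs(np.sort(x) - np.sort(y)).mean())

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=5000)
b = rng.normal(1.5, 1.0, size=5000)
print(f"W1 estimate: {wasserstein1_1d(a, b):.3f}  (true W1 for a pure shift of 1.5 is 1.5)")
```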