Topic Tag: language


 Neural Sketch Learning for Conditional Program Generation

We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example a set of API calls or types) carrying a small amount of information about the code that is desired. The generated programs are expected to respect a “realistic”…


 Unsupervised Cipher Cracking Using Discrete GANs

 

This work details CipherGAN, an architecture inspired by CycleGAN used for inferring the underlying cipher mapping given banks of unpaired ciphertext and plaintext. We demonstrate that CipherGAN is capable of cracking language data enciphered using shift and Vigenère ciphers to a high degree of fid…
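For reference, the two classical ciphers named in the abstract are easy to state in code. This is a minimal sketch of the ciphers themselves (the targets CipherGAN learns to invert), not of the CipherGAN model; the function names are my own.

```python
# Illustrative implementations of the shift and Vigenere ciphers.
import string

ALPHABET = string.ascii_lowercase

def shift_encipher(plaintext, shift):
    """Caesar-style shift cipher: rotate every letter by a fixed offset."""
    return "".join(
        ALPHABET[(ALPHABET.index(c) + shift) % 26] if c in ALPHABET else c
        for c in plaintext
    )

def vigenere_encipher(plaintext, key):
    """Vigenere cipher: a per-position shift drawn cyclically from the key."""
    shifts = [ALPHABET.index(k) for k in key]
    out, i = [], 0
    for c in plaintext:
        if c in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(c) + shifts[i % len(shifts)]) % 26])
            i += 1
        else:
            out.append(c)
    return "".join(out)
```

Deciphering a shift cipher is just enciphering with the negated offset; a Vigenère cipher reduces to interleaved shift ciphers, one per key letter, which is what makes unpaired statistical attacks feasible.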


 Sparsity-based Defense against Adversarial Attacks on Linear Classifiers

    

Deep neural networks represent the state of the art in machine learning in a growing number of fields, including vision, speech and natural language processing. However, recent work raises important questions about the robustness of such architectures, by showing that it is possible to induce class…
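The threat model for linear classifiers can be made concrete in a few lines. This is my own toy sketch of the general setting (a worst-case bounded perturbation against a linear score, plus a simple sparsifying front end that keeps only the largest input entries), not the paper's specific defense.

```python
# Toy l_inf-bounded attack on a linear classifier, and a sparsifying
# front end that keeps the k largest-magnitude input coordinates.

def score(w, x):
    """Linear classifier score w . x."""
    return sum(wi * xi for wi, xi in zip(w, x))

def linf_attack(w, x, eps):
    """Worst-case perturbation for a linear score: shift every coordinate
    by eps against the sign of the corresponding weight, lowering the
    score by exactly eps * ||w||_1."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

def sparsify(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    keep = set(sorted(range(len(x)), key=lambda i: -abs(x[i]))[:k])
    return [xi if i in keep else 0.0 for i, xi in enumerate(x)]
```

The intuition behind sparsity-based defenses is that a budget-limited perturbation is spread thinly across many coordinates, so projecting the input onto its dominant components discards much of the adversarial energy.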


 EARL: Joint Entity and Relation Linking for Question Answering over Knowledge Graphs

In order to answer natural language questions over knowledge graphs, most processing pipelines involve entity and relation linking. Traditionally, entity linking and relation linking have been performed either as dependent sequential tasks or as independent parallel tasks. In this paper, we propose a f…


 Conversational AI: The Science Behind the Alexa Prize

  

Conversational agents are exploding in popularity. However, much work remains in the area of social conversation as well as free-form conversation over a broad range of domains and topics. To advance the state of the art in conversational AI, Amazon launched the Alexa Prize, a 2.5-million-dollar un…


 Neural Program Synthesis with Priority Queue Training

   

We consider the task of program synthesis in the presence of a reward function over the output of programs, where the goal is to find programs with maximal rewards. We employ an iterative optimization scheme, where we train an RNN on a dataset of K best programs from a priority queue of the generat…
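The core loop described above (sample candidates, score them, retain the K best in a priority queue) can be sketched in a few lines. The random "programs" and reward below are stand-ins for illustration, not the paper's program representation.

```python
# Minimal sketch of priority queue training: keep the K highest-reward
# samples seen so far; in the real method an RNN is trained on the
# queue's contents each round and used to bias the next samples.
import heapq
import random

def priority_queue_training(sample_fn, reward_fn, K=5, steps=100, seed=0):
    rng = random.Random(seed)
    pq = []  # min-heap of (reward, program); lowest reward at the root
    for _ in range(steps):
        prog = sample_fn(rng)
        r = reward_fn(prog)
        if len(pq) < K:
            heapq.heappush(pq, (r, prog))
        elif r > pq[0][0]:
            heapq.heapreplace(pq, (r, prog))  # evict the current worst
    return sorted(pq, reverse=True)

# Toy example: "programs" are integer tuples, reward is their sum.
best = priority_queue_training(
    sample_fn=lambda rng: tuple(rng.randrange(10) for _ in range(3)),
    reward_fn=sum,
)
```

Using a min-heap keyed on reward makes the eviction test (is this sample better than the worst retained one?) an O(1) peek followed by an O(log K) replace.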


 Sentence Object Notation: Multilingual sentence notation based on Wordnet

 

Representing sentences is an important task; among other uses, a sentence notation can serve as a way to exchange data between applications. One main characteristic that a notation must have is minimal size combined with a representative form. This can reduce the transfer time, and hopefully the processing time as well. Usual…


 Exploring Asymmetric Encoder-Decoder Structure for Context-based Sentence Representation Learning

  

Context information plays an important role in human language understanding, and it is also useful for machines to learn vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. As a result…


 Learning Rapid-Temporal Adaptations

 

A hallmark of human intelligence and cognition is its flexibility. One of the long-standing goals in AI research is to replicate this flexibility in a learning machine. In this work we describe a mechanism by which artificial neural networks can learn rapid-temporal adaptation – the ability t…


 Topic Compositional Neural Language Model

  

We propose a Topic Compositional Neural Language Model (TCNLM), a novel method designed to simultaneously capture both the global semantic meaning and the local word ordering structure in a document. The TCNLM learns the global semantic coherence of a document via a neural topic model, and the prob…


 Combining Representation Learning with Logic for Language Processing

The current state-of-the-art in many natural language processing and automated knowledge base completion tasks is held by representation learning methods which learn distributed vector representations of symbols via gradient-based optimization. They require little or no hand-crafted features, thus …


 Letter-Based Speech Recognition with Gated ConvNets

  

In this paper we introduce a new speech recognition system, leveraging a simple letter-based ConvNet acoustic model. The acoustic model requires only audio transcriptions for training: no alignment annotations are needed, nor any forced alignment step. At inference, our decoder takes o…


 Evaluation of Speech for the Google Assistant

     

Posted by Enrique Alfonseca, Staff Research Scientist, Google Assistant. Voice interactions with technology are becoming a key part of our lives — from asking your phone for traffic conditions to work to using a smart device at home to turn on the lights or play music. The Google Assistant is desi…


 Cognitive Database: A Step towards Endowing Relational Databases with Artificial Intelligence Capabilities

 

We propose Cognitive Databases, an approach for transparently enabling Artificial Intelligence (AI) capabilities in relational databases. A novel aspect of our design is to first view the structured data source as meaningful unstructured text, and then use the text to build an unsupervised neural n…


 Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

  

This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (…


 Tacotron 2: Generating Human-like Speech from Text

     

Posted by Jonathan Shen and Ruoming Pang, Software Engineers, on behalf of the Google Brain and Machine Perception Teams. Generating very natural-sounding speech from text (text-to-speech, TTS) has been a research goal for decades. There has been great progress in TTS research over the last few year…


 The NarrativeQA Reading Comprehension Challenge

Reading comprehension (RC)—in contrast to information retrieval—requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learn…


 DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer

   

We have witnessed a rapid evolution of deep neural network architecture design in the past years. This progress has greatly facilitated developments in various areas such as computer vision and natural language processing. However, along with the extraordinary performance, these state-of-the…


 Reading a neural network’s mind

Technique illuminates the inner workings of artificial-intelligence systems that process language. By Larry Hardesty | MIT News Office


 Visual reasoning and dialog: Towards natural language conversations about visual data

The broad objective of visual dialog research is to teach machines to have natural language conversations with humans about visual […] By Kelly Berschauer


 Spoken Language Biomarkers for Detecting Cognitive Impairment

   

In this study we developed an automated system that evaluates speech and language features from audio recordings of neuropsychological examinations of 92 subjects in the Framingham Heart Study. A total of 265 features were used in an elastic-net regularized binomial logistic regression model to cla…
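The model class named in the abstract, elastic-net regularized binomial logistic regression, combines an L1 and an L2 penalty on the weights. This is a minimal pure-Python sketch of that objective trained by gradient descent; the tiny one-feature dataset in the test is synthetic, not the study's speech features.

```python
# Binomial logistic regression with an elastic-net penalty:
#   loss = logistic NLL + lam * (alpha*|w|_1 + (1-alpha)/2 * |w|_2^2)
# Plain full-batch gradient descent with a subgradient for the L1 term.
import math

def fit_elastic_net_logreg(X, y, lam=0.1, alpha=0.5, lr=0.1, steps=500):
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n  # logistic-loss gradient
        for j in range(d):
            # elastic-net penalty gradient: L1 subgradient + L2 term
            grad[j] += lam * (alpha * (1 if w[j] > 0 else -1 if w[j] < 0 else 0)
                              + (1 - alpha) * w[j])
            w[j] -= lr * grad[j]
    return w
```

The mixing weight `alpha` interpolates between pure ridge (`alpha=0`) and pure lasso (`alpha=1`); the lasso end drives uninformative feature weights exactly to zero, which is why elastic net is a common choice when screening hundreds of candidate features, as in this study.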


 Progressive Joint Modeling in Unsupervised Single-channel Overlapped Speech Recognition

 

Unsupervised single-channel overlapped speech recognition is one of the hardest problems in automatic speech recognition (ASR). Permutation invariant training (PIT) is a state-of-the-art model-based approach, which applies a single neural network to solve this single-input, multiple-output modeling…
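The PIT criterion mentioned above resolves the label-permutation ambiguity of overlapped speech by scoring every assignment of network outputs to reference speakers and training on the cheapest one. A minimal sketch, using simple squared errors over toy vectors in place of real acoustic losses:

```python
# Permutation-invariant training loss: minimum total loss over all
# assignments of output streams to reference streams.
from itertools import permutations

def pairwise_loss(output, reference):
    """Squared-error loss between one output and one reference stream."""
    return sum((o - r) ** 2 for o, r in zip(output, reference))

def pit_loss(outputs, references):
    """Best (minimum) total loss over all output-to-reference assignments."""
    return min(
        sum(pairwise_loss(outputs[i], ref) for i, ref in zip(perm, references))
        for perm in permutations(range(len(outputs)))
    )
```

Enumerating permutations costs S! for S speakers, which is cheap for the two- or three-speaker mixtures typical of this task.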


 Recent Advances in Convolutional Neural Networks

    

In the last few years, deep learning has led to very good performance on a variety of problems, such as visual recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks have been most extensively studied. Leveraging…


 ProLanGO: Protein Function Prediction Using Neural Machine Translation Based on a Recurrent Neural Network

 

With the development of next generation sequencing techniques, it is fast and cheap to determine protein sequences but relatively slow and expensive to extract useful information from protein sequences because of limitations of traditional biological experimental techniques. Protein function predic…


 Learning Differentially Private Language Models Without Losing Accuracy

   

We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees without sacrificing predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gra…
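Two mechanisms underlie user-level differentially private training of this kind: clip each user's model update to a fixed norm bound (limiting any one user's influence), then add calibrated Gaussian noise to the aggregate. This is an illustrative sketch of those two steps only; the bounds, noise scale, and aggregation scheme of the actual paper are not reproduced here.

```python
# Sketch of user-level DP aggregation: per-user norm clipping followed
# by Gaussian noise on the averaged update.
import math
import random

def clip_update(update, max_norm):
    """Scale a user's update down so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(u * u for u in update))
    if norm > max_norm:
        scale = max_norm / norm
        return [u * scale for u in update]
    return list(update)

def noisy_average(updates, max_norm, noise_std, seed=0):
    """Average the clipped per-user updates, then add Gaussian noise."""
    rng = random.Random(seed)
    clipped = [clip_update(u, max_norm) for u in updates]
    n, d = len(clipped), len(clipped[0])
    avg = [sum(u[j] for u in clipped) / n for j in range(d)]
    return [a + rng.gauss(0.0, noise_std) for a in avg]
```

Clipping bounds the sensitivity of the average to any single user, which is what lets the added noise translate into a formal user-level privacy guarantee via the moments/privacy accounting the abstract refers to.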