Topic Tag: language


 How neural networks think

General-purpose technique sheds light on inner workings of neural nets trained to process language. By Larry Hardesty | MIT News Office


 Deconvolutional Paragraph Representation Learning

   

Learning latent representations from long text sequences is an important first step in many natural language processing applications. Recurrent Neural Networks (RNNs) have become a cornerstone for this challenging task. However, the quality of sentences during RNN-based decoding (reconstruction) de…
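As a rough illustration of the deconvolutional idea named in the title (the layer sizes, kernel widths, and objective below are assumptions for the sketch, not details from the paper), a convolutional encoder can be paired with a transposed-convolution decoder over token embeddings, avoiding step-by-step RNN decoding:

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a convolutional encoder plus a deconvolutional
# (transposed-convolution) decoder over token embeddings, as an alternative
# to RNN-based reconstruction of long text.
class ConvAutoencoder(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.Conv1d(emb_dim, hidden, kernel_size=5, stride=2, padding=2)
        self.decoder = nn.ConvTranspose1d(hidden, emb_dim, kernel_size=5, stride=2,
                                          padding=2, output_padding=1)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)  # (batch, emb_dim, seq_len)
        z = torch.relu(self.encoder(x))         # latent feature map for the paragraph
        return self.decoder(z)                  # reconstructed embedding sequence

model = ConvAutoencoder()
tokens = torch.randint(0, 1000, (4, 32))        # toy batch of token ids
print(model(tokens).shape)                      # torch.Size([4, 64, 32])
```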


 Training RNNs as Fast as CNNs

   

Recurrent neural networks scale poorly due to the intrinsic difficulty in parallelizing their state computations. For instance, the forward pass computation of $h_t$ is blocked until the entire computation of $h_{t-1}$ finishes, which is a major bottleneck for parallel computing. In this work, we p…
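The sequential dependency described above is easy to see in a plain NumPy sketch of a vanilla RNN forward pass (illustrative only; this is not the parallelization method the paper proposes):

```python
import numpy as np

# Minimal sketch: each hidden state h_t depends on h_{t-1}, so the time loop
# cannot be parallelized across steps -- the bottleneck the abstract describes.
def rnn_forward(x, W_x, W_h, b):
    """x: (seq_len, input_dim); returns hidden states (seq_len, hidden_dim)."""
    seq_len, _ = x.shape
    h = np.zeros(W_h.shape[0])
    states = []
    for t in range(seq_len):                     # strictly sequential loop
        h = np.tanh(x[t] @ W_x + h @ W_h + b)    # h_t is blocked on h_{t-1}
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
x = rng.standard_normal((20, 8))                 # toy input sequence
W_x = rng.standard_normal((8, 16)) * 0.1
W_h = rng.standard_normal((16, 16)) * 0.1
b = np.zeros(16)
print(rnn_forward(x, W_x, W_h, b).shape)         # (20, 16)
```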


 A Deep Reinforcement Learning Chatbot

   

We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ense…


 Self-Normalizing Neural Networks

     

Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow a…


 Towards Neural Machine Translation with Latent Tree Attention

   

Building models that take advantage of the hierarchical structure of language without a priori annotation is a longstanding goal in natural language processing. We introduce such a model for the task of machine translation, pairing a recurrent neural network grammar encoder with a novel attentional…


 The Pragmatics of Indirect Commands in Collaborative Discourse

 

Today’s artificial assistants are typically prompted to perform tasks through direct, imperative commands such as “Set a timer” or “Pick up the box”. However, to progress toward more natural exchanges between humans and these assistants, it is important to understand the way non-imper…


 Enriching Linked Datasets with New Object Properties

 

Although several RDF knowledge bases are available through the LOD initiative, the ontology schema of such linked datasets is not very rich. In particular, they lack object properties. The problem of finding new object properties (and their instances) between any two given classes has not been inve…


 Understanding the Logical and Semantic Structure of Large Documents

 

Current language understanding approaches focus on small documents, such as newswire articles, blog posts, product reviews and discussion forum entries. Understanding and extracting information from large documents like legal briefs, proposals, technical manuals and research articles is still a cha…


 Deep Reinforcement Learning: An Overview

      

We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, includi…


 Grasping the Finer Point: A Supervised Similarity Network for Metaphor Detection

 

The ubiquity of metaphor in our everyday communication makes it an important problem for natural language understanding. Yet, the majority of metaphor processing systems to date rely on hand-engineered features and there is still no consensus in the field as to which features are optimal for this t…


 Modelling Protagonist Goals and Desires in First-Person Narrative

   

Many genres of natural language text are narratively structured, a testament to our predilection for organizing our experiences as narratives. There is broad consensus that understanding a narrative requires identifying and tracking the goals and desires of the characters and their narrative outcom…


 Gradual Learning of Deep Recurrent Neural Networks

  

Deep Recurrent Neural Networks (RNNs) achieve state-of-the-art results in many sequence-to-sequence tasks. However, deep RNNs are difficult to train and suffer from overfitting. We introduce a training method that trains the network gradually, and treats each layer individually, to achieve improved…
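One way such a layer-wise scheme can look in code (a hedged sketch; the paper's exact growth schedule and training objective are not given in this excerpt) is to add LSTM layers one at a time, freezing the earlier ones before each new stage:

```python
import torch
import torch.nn as nn

# Hedged sketch of gradual, layer-by-layer training: grow the stack one LSTM
# layer per stage and freeze previously trained layers.
torch.manual_seed(0)
x = torch.randn(8, 20, 16)                        # toy batch: (batch, time, features)
target = torch.randn(8, 20, 16)

layers = [nn.LSTM(16, 16, batch_first=True) for _ in range(3)]
trained = nn.ModuleList()

for new_layer in layers:
    for p in trained.parameters():
        p.requires_grad = False                   # freeze earlier layers
    trained.append(new_layer)
    opt = torch.optim.Adam(new_layer.parameters(), lr=1e-3)
    for _ in range(5):                            # a few steps per growth stage
        h = x
        for layer in trained:
            h, _ = layer(h)                       # run the partially frozen stack
        loss = nn.functional.mse_loss(h, target)  # toy objective for the sketch
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"{len(trained)} layer(s), loss {loss.item():.4f}")
```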


 Overcoming Language Variation in Sentiment Analysis with Social Attention

Variation in language is ubiquitous, particularly in newer forms of writing such as social media. Fortunately, variation is not random; it is often linked to social properties of the author. In this paper, we show how to exploit social networks to make sentiment analysis more robust to social langu…


 Data Science 101 (Getting started in NLP): Tokenization tutorial

  

One common task in NLP (Natural Language Processing) is tokenization. “Tokens” are usually individual words (at least in languages like English), and “tokenization” is taking a text or set of texts and breaking it up into its individual words. These tokens are then used as the…
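A minimal Python illustration of word-level tokenization (not the tutorial's own code) is a single regular expression applied to lower-cased text:

```python
import re

# Split lower-cased text into word tokens; keeps apostrophes inside words.
def tokenize(text):
    return re.findall(r"[a-z0-9']+", text.lower())

print(tokenize("Tokenization is taking a text and breaking it up into words."))
# ['tokenization', 'is', 'taking', 'a', 'text', 'and', 'breaking', 'it', 'up', 'into', 'words']
```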


 What is the Role of Recurrent Neural Networks (RNNs) in an Image Caption Generator?

   

In neural image captioning systems, a recurrent neural network (RNN) is typically viewed as the primary ‘generation’ component. This view suggests that the image features should be ‘injected’ into the RNN. This is in fact the dominant view in the literature. Alternatively, the RNN can inste…
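A small sketch of the ‘inject’ view described here (the layer sizes and the choice to inject the image features as the initial hidden state are illustrative assumptions, not the paper's exact model):

```python
import torch
import torch.nn as nn

# 'Inject' sketch: image features initialise the RNN state, and the RNN then
# generates caption tokens conditioned on that state.
class InjectCaptioner(nn.Module):
    def __init__(self, vocab_size=5000, img_dim=2048, emb_dim=256, hidden=256):
        super().__init__()
        self.img_to_h0 = nn.Linear(img_dim, hidden)   # inject image into the state
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, image_feats, caption_tokens):
        h0 = torch.tanh(self.img_to_h0(image_feats)).unsqueeze(0)  # (1, batch, hidden)
        emb = self.embed(caption_tokens)                           # (batch, T, emb_dim)
        out, _ = self.rnn(emb, h0)
        return self.out(out)                                       # per-step vocabulary logits

model = InjectCaptioner()
logits = model(torch.randn(2, 2048), torch.randint(0, 5000, (2, 12)))
print(logits.shape)                                                # torch.Size([2, 12, 5000])
```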


 Phonetic Temporal Neural Model for Language Identification

  

Deep neural models, particularly the LSTM-RNN model, have shown great potential for language identification (LID). However, the use of phonetic information has been largely overlooked by most existing neural LID methods, although this information has been used very successfully in conventional phon…


 Neural Networks Compression for Language Modeling

  

In this paper, we consider several compression techniques for the language modeling problem based on recurrent neural networks (RNNs). It is known that conventional RNNs, e.g., LSTM-based networks in language modeling, are characterized by either high space complexity or substantial inference time…
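As one concrete example of such a compression technique (shown purely as an illustration; the specific methods the paper evaluates are not listed in this excerpt), a large language-model weight matrix such as the output projection can be replaced by a low-rank factorization:

```python
import numpy as np

# Low-rank factorization of a large weight matrix: W (vocab x hidden) is
# replaced by A @ B with rank r, cutting the parameter count sharply.
rng = np.random.default_rng(0)
W = rng.standard_normal((10000, 512))            # stand-in for a vocab x hidden projection

rank = 64
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]                       # (10000, 64)
B = Vt[:rank]                                    # (64, 512)

err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {W.size} -> {A.size + B.size}, relative error {err:.3f}")
```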