Topic Tag: RNN


 Predicting Movie Genres Based on Plot Summaries

This project explores several machine learning methods to predict movie genres based on plot summaries. Naive Bayes, Word2Vec+XGBoost and Recurrent Neural Networks are used for text classification, while K-binary transformation, the rank method and probabilistic classification with learned probability …


 Frame-Recurrent Video Super-Resolution

Recent advances in video super-resolution have shown that convolutional neural networks combined with motion compensation are able to merge information from multiple low-resolution (LR) frames to generate high-quality images. Current state-of-the-art methods process a batch of LR frames to generate…


 Long-term Blood Pressure Prediction with Deep Recurrent Neural Networks

Existing methods for arterial blood pressure (BP) estimation directly map the input physiological signals to output BP values without explicitly modeling the underlying temporal dependencies in BP dynamics. As a result, these models suffer from accuracy decay over a long time and thus require frequ…


 A Gentle Tutorial of Recurrent Neural Network with Error Backpropagation

We describe recurrent neural networks (RNNs), which have attracted great attention on sequential tasks such as handwriting recognition, speech recognition and image-to-text. However, compared to general feedforward neural networks, RNNs have feedback loops, which makes it a little hard to understa…


 Multivariate LSTM-FCNs for Time Series Classification

Over the past decade, multivariate time series classification has been receiving a lot of attention. We propose augmenting the existing univariate time series classification models, LSTM-FCN and ALSTM-FCN, with a squeeze-and-excitation block to further improve performance. Our proposed models outper…
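The squeeze-and-excitation mechanism this abstract adds can be sketched in a few lines: squeeze by global pooling, excite through a small bottleneck, then gate each channel. This is a generic numpy illustration of the mechanism, not the paper's exact layer or shapes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squeeze_excite(x, W1, W2):
    """Squeeze-and-excitation over a (timesteps, channels) feature map.

    W1, W2 are hypothetical bottleneck weights of shapes (C, C//r) and
    (C//r, C) for a reduction ratio r.
    """
    s = x.mean(axis=0)                       # squeeze: global average pool -> (C,)
    e = sigmoid(np.maximum(s @ W1, 0) @ W2)  # excite: ReLU bottleneck, gates in (0, 1)
    return x * e                             # channel-wise recalibration
```

Because the gates lie strictly in (0, 1), the block can only attenuate channels, letting the network emphasize informative ones relative to the rest.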


 Characterizing Types of Convolution in Deep Convolutional Recurrent Neural Networks for Robust Speech Emotion Recognition

Deep convolutional neural networks are being actively investigated in a wide range of speech and audio processing applications, including speech recognition, audio event detection and computational paralinguistics, owing to their ability to reduce factors of variation when learning from speech. How…


 Combining Symbolic and Function Evaluation Expressions In Neural Programs

Neural programming involves training neural networks to learn programs from data. Previous works have failed to achieve good generalization performance, especially on programs with high complexity or on large domains. This is because they mostly rely either on black-box function evaluations that do…


 Predicting Future Lane Changes of Other Highway Vehicles using RNN-based Deep Models

In the event of sensor failure, it is necessary for autonomous vehicles to safely execute emergency maneuvers while avoiding other vehicles on the road. In order to accomplish this, the sensor-failed vehicle must predict the future semantic behaviors of other drivers, such as lane changes, as well …


 Arhuaco: Deep Learning and Isolation Based Security for Distributed High-Throughput Computing

Grid computing systems require innovative methods and tools to identify cybersecurity incidents and perform autonomous actions, i.e. without administrator intervention. They also require methods to isolate and trace job payload activity in order to protect users and find evidence of malicious behavi…


 Neural Program Synthesis with Priority Queue Training

We consider the task of program synthesis in the presence of a reward function over the output of programs, where the goal is to find programs with maximal rewards. We employ an iterative optimization scheme, where we train an RNN on a dataset of K best programs from a priority queue of the generat…
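The outer loop described here (sample a program, score it, keep only the K best seen so far in a priority queue, then fit on them) can be sketched as follows. `sample_program` and `reward` are placeholders standing in for the paper's RNN sampler and reward function, and the RNN gradient step itself is elided.

```python
import heapq
import random

def priority_queue_training(sample_program, reward, K=10, iterations=100, seed=0):
    """Sketch of the priority-queue training loop (illustrative, not the
    paper's implementation): maintain a min-heap of the K highest-reward
    programs, evicting the worst whenever the heap overflows."""
    random.seed(seed)
    queue = []  # min-heap of (reward, program): smallest reward sits on top
    for _ in range(iterations):
        p = sample_program()
        heapq.heappush(queue, (reward(p), p))
        if len(queue) > K:
            heapq.heappop(queue)  # drop the worst of the K+1 candidates
    # at this point the real method would take an RNN training step on
    # the programs in `queue`, biasing future samples toward high reward
    return sorted(queue, reverse=True)
```

The min-heap makes the "keep top K" step O(log K) per sample, which matters when the sampler is cheap and iterations are large.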


 Recent Advances in Recurrent Neural Networks

Recurrent neural networks (RNNs) are capable of learning features and long-term dependencies from sequential and time-series data. RNNs have a stack of non-linear units where at least one connection between units forms a directed cycle. A well-trained RNN can model any dynamical system; however…
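The directed cycle mentioned above is simply the hidden state feeding back into the next step. A minimal vanilla (Elman) RNN recurrence in numpy, as a sketch of the general idea:

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, b):
    """One step of a vanilla RNN: the previous hidden state h_prev feeds
    back into the update, forming the directed cycle."""
    return np.tanh(W_h @ h_prev + W_x @ x_t + b)

def rnn_forward(xs, W_h, W_x, b):
    """Unroll the recurrence over a sequence, returning all hidden states."""
    h = np.zeros(W_h.shape[0])
    hs = []
    for x_t in xs:
        h = rnn_step(h, x_t, W_h, W_x, b)
        hs.append(h)
    return np.stack(hs)
```

Each hidden state is a non-linear function of the entire input prefix, which is what lets a well-trained RNN approximate a dynamical system.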


 Exploring Asymmetric Encoder-Decoder Structure for Context-based Sentence Representation Learning

Context information plays an important role in human language understanding, and it is also useful for machines to learn vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. As a result…


 Connecting Software Metrics across Versions to Predict Defects

Accurate software defect prediction could help software practitioners allocate test resources to defect-prone modules effectively and efficiently. In recent decades, much effort has been devoted to building accurate defect prediction models, including developing quality defect predictors and modelin…


 Topic Compositional Neural Language Model

We propose a Topic Compositional Neural Language Model (TCNLM), a novel method designed to simultaneously capture both the global semantic meaning and the local word ordering structure in a document. The TCNLM learns the global semantic coherence of a document via a neural topic model, and the prob…


 Deep Architectures for Automated Seizure Detection in Scalp EEGs

Automated seizure detection using clinical electroencephalograms is a challenging machine learning problem because the multichannel signal often has an extremely low signal to noise ratio. Events of interest such as seizures are easily confused with signal artifacts (e.g., eye movements) or benign v…


 PixelSNAIL: An Improved Autoregressive Generative Model

Autoregressive generative models consistently achieve the best results in density estimation tasks involving high dimensional data, such as images or audio. They pose density estimation as a sequence modeling task, where a recurrent neural network (RNN) models the conditional distribution over the …
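The sequence-modeling view of density estimation is the chain rule log p(x) = Σ_i log p(x_i | x_<i). A model-agnostic sketch, with `cond_log_probs` as a placeholder for whatever conditional model is used (an RNN, PixelSNAIL, etc.):

```python
import numpy as np

def sequence_log_prob(x, cond_log_probs):
    """Autoregressive log-density of a sequence x of symbol indices.

    cond_log_probs(prefix) is a hypothetical model returning a vector of
    log-probabilities over the next symbol given the prefix.
    """
    total = 0.0
    for i, xi in enumerate(x):
        total += cond_log_probs(x[:i])[xi]  # log p(x_i | x_<i)
    return total
```

Any distribution over sequences factorizes this way, so the modeling question is entirely about how expressively the conditional is parameterized.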


 CNN Is All You Need

The Convolutional Neural Network (CNN) has demonstrated unique advantages in audio, image and text learning; recently it has also challenged Recurrent Neural Networks (RNNs) with long short-term memory (LSTM) cells in sequence-to-sequence learning, since the computations involved in CNN are easily…


 DeepIEP: a Peptide Sequence Model of Isoelectric Point (IEP/pI) using Recurrent Neural Networks (RNNs)

The isoelectric point (IEP or pI) is the pH where the net charge on the molecular ensemble of peptides and proteins is zero. This physical-chemical property depends on protonatable/deprotonatable sidechains and their pKa values. Here a pI prediction model is trained from a database of peptide seq…
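For contrast with the learned model, the classical pKa-based calculation can be sketched: sum Henderson-Hasselbalch charges over ionizable groups, then bisect for the pH where the net charge crosses zero. The pKa values below are approximate textbook numbers chosen for illustration, not the paper's data or model.

```python
def net_charge(seq, ph):
    """Net charge of a peptide at a given pH from approximate pKa values."""
    pka_pos = {'K': 10.5, 'R': 12.5, 'H': 6.0, 'nterm': 9.0}   # basic groups
    pka_neg = {'D': 3.9, 'E': 4.1, 'C': 8.3, 'Y': 10.1, 'cterm': 3.1}  # acidic
    groups_pos = ['nterm'] + [a for a in seq if a in pka_pos]
    groups_neg = ['cterm'] + [a for a in seq if a in pka_neg]
    pos = sum(1.0 / (1.0 + 10 ** (ph - pka_pos[g])) for g in groups_pos)
    neg = sum(-1.0 / (1.0 + 10 ** (pka_neg[g] - ph)) for g in groups_neg)
    return pos + neg

def isoelectric_point(seq, lo=0.0, hi=14.0, iters=60):
    """pI is the pH where net_charge crosses zero; the charge decreases
    monotonically with pH, so simple bisection converges."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

This fixed-pKa baseline ignores sequence context, which is exactly the gap a learned sequence model can close.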


 Chaos-guided Input Structuring for Improved Learning in Recurrent Neural Networks

Anatomical studies demonstrate that the brain reformats input information to generate reliable responses for performing computations. However, it remains unclear how neural circuits encode complex spatio-temporal patterns. We show that neural dynamics are strongly influenced by the phase alignment betw…


 CSGNet: Neural Shape Parser for Constructive Solid Geometry

We present a neural architecture that takes as input a 2D or 3D shape and induces a program to generate it. The instructions in our program are based on constructive solid geometry principles, i.e., a set of boolean operations on shape primitives defined recursively. Bottom-up techniques for this…


 Sentence Ordering and Coherence Modeling using Recurrent Neural Networks

Modeling the structure of coherent texts is a key NLP problem. The task of coherently organizing a given set of sentences has been commonly used to build and evaluate models that understand such structure. We propose an end-to-end unsupervised deep learning approach based on the set-to-sequence fra…


 A Deep Policy Inference Q-Network for Multi-Agent Systems

We present DPIQN, a deep policy inference Q-network that targets multi-agent systems composed of controllable agents, collaborators, and opponents that interact with each other. We focus on one challenging issue in such systems—modeling agents with varying strategies—and propose to empl…


 Dataflow Matrix Machines and V-values: a Bridge between Programs and Neural Nets

Dataflow matrix machines generalize neural nets by replacing streams of numbers with streams of vectors (or other kinds of linear streams admitting a notion of linear combination of several streams) and adding a few more changes on top of that, namely arbitrary input and output arities for activati…
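The core primitive here, a linear combination of several vector streams yielding another stream, is easy to sketch with generators. This is an illustrative reading of the abstract, not the authors' formalism:

```python
import numpy as np

def combine_streams(coeffs, streams):
    """Linear combination of vector streams, computed elementwise in time.

    Each stream is an iterable of same-shaped numpy vectors; the result is
    itself a stream, which is what makes such combinations composable.
    """
    for vecs in zip(*streams):
        yield sum(c * v for c, v in zip(coeffs, vecs))
```

Closure under linear combination is the property that lets these machines generalize neural nets: any activation can consume and emit whole streams rather than single numbers.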


 Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm

Learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently. A popular approach to meta-learning is to train a recurrent model to read in a training dataset as input and output the parameters of a learned model, or output predictions for new test…


 Multi-shot Pedestrian Re-identification via Sequential Decision Making

The multi-shot pedestrian re-identification problem is at the core of surveillance video analysis. It matches two tracks of pedestrians from different cameras. In contrast to existing works that aggregate single-frame features with time series models such as recurrent neural networks, in this paper we pr…