Topic Tag: DNN


 Eye-Movement behavior identification for AD diagnosis

 

In the present work, we develop a deep-learning approach for differentiating the eye-movement behavior of people with neurodegenerative diseases from that of healthy control subjects while they read well-defined sentences. We define an information compaction of the eye-tracking data of subjects without and …
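
The excerpt does not describe the compaction itself, but the overall setup, summarising each subject's eye-tracking record into a fixed-length feature vector and training a binary classifier on it, can be sketched as follows. The features and network sizes below are illustrative assumptions, not the paper's definition.

```python
# Illustrative sketch only: hypothetical per-subject fixation statistics fed
# to a small binary classifier (patient vs. control); not the paper's model.
import torch
import torch.nn as nn

# Hypothetical compacted features: mean fixation duration (ms), fixation
# count, mean saccade amplitude (deg), regression (re-reading) rate.
x = torch.tensor([[212.0, 48.0, 2.1, 0.18],
                  [251.0, 63.0, 1.7, 0.31]])   # two subjects
y = torch.tensor([0, 1])                       # 0 = control, 1 = patient

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                                # gradients for one training step
print(loss.item())
```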


 Sparsity-based Defense against Adversarial Attacks on Linear Classifiers

    

Deep neural networks represent the state of the art in machine learning in a growing number of fields, including vision, speech and natural language processing. However, recent work raises important questions about the robustness of such architectures, by showing that it is possible to induce class…
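
For a linear classifier the worst-case L-infinity-bounded perturbation has a closed form (it moves along sign(w)), which makes this setting easy to illustrate. The sketch below shows that attack together with a simple sparsifying front end that keeps only the largest-magnitude input coefficients; it illustrates the idea under assumed data, not the defense construction analyzed in the paper.

```python
# Illustrative numpy sketch (not the paper's exact defense): a linear
# classifier w.x, its worst-case L-infinity attack, and a sparsifying front
# end that keeps only the k largest-magnitude input coefficients.
import numpy as np

rng = np.random.default_rng(0)
d, k, eps = 1000, 50, 0.05
w = rng.normal(size=d)
x = rng.normal(size=d)
x[:k] *= 10.0                         # assume the signal lives in few coordinates
y = np.sign(w @ x)                    # label of this point under the classifier

x_adv = x - y * eps * np.sign(w)      # worst-case L-inf perturbation for a linear model

def sparsify(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

print("margin, clean input      :", y * (w @ x))
print("margin, attacked input   :", y * (w @ x_adv))
print("margin, after sparsifying:", y * (w @ sparsify(x_adv, k)))
```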


 Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks

Machine learning systems based on deep neural networks, being able to produce state-of-the-art results on various perception tasks, have gained mainstream adoption in many applications. However, they have been shown to be vulnerable to adversarial example attacks, which generate malicious outputs by addin…
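
The generic construction behind such attacks is a small, gradient-aligned perturbation bounded in L-infinity norm. The following is a one-step (FGSM-style) sketch on a throwaway network; the attack proposed in the paper is different, and this only shows the basic mechanism.

```python
# One-step gradient-sign (FGSM-style) sketch on a throwaway network; the
# attack in the paper differs, this only illustrates the basic mechanism.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(1, 8, requires_grad=True)   # clean input
y = torch.tensor([2])                       # its label

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

eps = 0.05                                  # assumed imperceptibility budget
x_adv = (x + eps * x.grad.sign()).detach()  # x_adv = x + eps * sign(grad_x loss)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```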


 Faster Deep Q-learning using Neural Episodic Control

  

Research on deep reinforcement learning that estimates Q-values with deep learning has been active in recent years. In deep reinforcement learning, it is important to efficiently learn from the experiences an agent has collected by exploring the environment. In this research, we propose NEC2DQN that i…
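
The excerpt combines Neural Episodic Control with DQN; the sketch below shows only the standard one-step DQN target, y = r + gamma * max_a' Q(s', a'), that both methods build on. The network, data, and omission of terminal-state masking are placeholders, not NEC2DQN itself.

```python
# Placeholder sketch of the one-step DQN target, y = r + gamma * max_a' Q(s', a');
# not NEC2DQN, and terminal-state masking is omitted for brevity.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))  # 2 actions

s  = torch.randn(8, 4)            # batch of states
a  = torch.randint(0, 2, (8,))    # actions taken
r  = torch.randn(8)               # rewards
s2 = torch.randn(8, 4)            # next states
gamma = 0.99

with torch.no_grad():
    target = r + gamma * q_net(s2).max(dim=1).values

q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a) for taken actions
loss = nn.functional.mse_loss(q_sa, target)
loss.backward()
print(float(loss))
```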


 Deep Reinforcement Learning of Cell Movement in the Early Stage of C. elegans Embryogenesis

 

Cell movement in the early phase of C. elegans development is regulated by a highly complex process in which a set of rules and connections are formulated at distinct scales. Previous efforts have demonstrated that agent-based, multi-scale modeling systems can integrate physical and biological rule…


 Deep Learning in Finance

We explore the use of deep learning hierarchical models for problems in financial prediction and classification. Financial prediction problems — such as those presented in designing and pricing securities, constructing portfolios, and risk management — often involve large data sets with…


 Evaluation of Machine Learning Frameworks on Finis Terrae II

 

Machine Learning (ML) and Deep Learning (DL) are two technologies used to extract representations of the data for a specific purpose. ML algorithms take a set of data as input and generate one or several predictions. To define the final version of a model, there is usually an initial step devoted …


 High Dimensional Spaces, Deep Learning and Adversarial Examples

 

In this paper, we analyze deep learning from a mathematical point of view and derive several novel results. The results are based on intriguing mathematical properties of high dimensional spaces. We first look at perturbation based adversarial examples and show how they can be understood using topo…
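
One high-dimensional effect behind such analyses is easy to verify numerically: a perturbation of only eps per coordinate has Euclidean norm eps*sqrt(d), so in high dimension an input can be moved a long L2 distance while every coordinate changes imperceptibly. A small check (illustrative only, not a result from the paper):

```python
# Numerical check: a perturbation of eps per coordinate has L2 norm eps*sqrt(d),
# so small coordinate-wise changes travel far in high dimension.
import numpy as np

eps = 0.01
for d in (10, 1_000, 100_000):
    delta = np.full(d, eps)            # change every coordinate by eps
    print(d, np.linalg.norm(delta), eps * np.sqrt(d))
```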


 Characterizing Types of Convolution in Deep Convolutional Recurrent Neural Networks for Robust Speech Emotion Recognition

    

Deep convolutional neural networks are being actively investigated in a wide range of speech and audio processing applications, including speech recognition, audio event detection and computational paralinguistics, owing to their ability to reduce factors of variation when learning from speech. How…
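
A minimal convolutional-recurrent stack of the kind such studies compare is a 1-D convolution over frame-level features followed by an LSTM. The feature type, layer sizes, and last-step pooling below are placeholders, not the configurations evaluated in the paper.

```python
# Placeholder convolutional-recurrent stack over frame-level speech features
# (e.g. log-mel frames); sizes are illustrative only.
import torch
import torch.nn as nn

class ConvLSTM(nn.Module):
    def __init__(self, n_mels=40, n_emotions=4):
        super().__init__()
        self.conv = nn.Conv1d(n_mels, 64, kernel_size=5, padding=2)  # 1-D conv over time
        self.lstm = nn.LSTM(64, 128, batch_first=True)
        self.out = nn.Linear(128, n_emotions)

    def forward(self, x):                    # x: (batch, n_mels, time)
        h = torch.relu(self.conv(x))         # (batch, 64, time)
        h, _ = self.lstm(h.transpose(1, 2))  # (batch, time, 128)
        return self.out(h[:, -1])            # classify from the last time step

print(ConvLSTM()(torch.randn(2, 40, 100)).shape)   # torch.Size([2, 4])
```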


 Cost-Sensitive Convolution based Neural Networks for Imbalanced Time-Series Classification

 

Some deep convolutional neural networks have been proposed for time-series classification and class-imbalance problems. However, those models show degraded performance and even fail to recognize the minority class of an imbalanced temporal-sequence dataset. Minority samples bring difficulties for tempora…
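
The simplest form of cost-sensitive training is to weight each class in the loss inversely to its frequency; the paper's scheme is more elaborate, but the basic mechanism looks like this (class counts below are made up):

```python
# Basic cost-sensitive loss for an imbalanced two-class problem: weight each
# class inversely to its frequency. Class counts here are made up.
import torch
import torch.nn as nn

counts = torch.tensor([950.0, 50.0])             # majority vs. minority class
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency class weights
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 2)                      # placeholder network outputs
labels = torch.randint(0, 2, (16,))
print(float(criterion(logits, labels)))
```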


 SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

Going deeper and wider in neural architectures improves accuracy, while limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desirable network architectures or must nontrivially dissect a network across multiGPU…
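
SuperNeurons itself is a runtime that manages GPU DRAM transparently. As a point of comparison only, a related memory-for-compute trade available to practitioners today is activation checkpointing, sketched here; it is an analogous idea, not the paper's system.

```python
# Not SuperNeurons: an analogous memory-for-compute trade in PyTorch,
# activation checkpointing, which recomputes a segment's activations during
# the backward pass instead of keeping them in GPU memory.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

segment = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
x = torch.randn(64, 512, requires_grad=True)

y = checkpoint(segment, x, use_reentrant=False)  # activations recomputed on backward
y.sum().backward()
print(x.grad.shape)
```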


 Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers

 

Although various techniques have been proposed to generate adversarial samples for white-box attacks on text, little attention has been paid to a black-box attack, which is a more realistic scenario. In this paper, we present a novel algorithm, DeepWordBug, to effectively generate small text pertur…
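
As a toy illustration of small black-box character-level perturbations, one can flip characters inside the word a model appears most sensitive to. The word-importance scoring is assumed here (a fixed index); DeepWordBug's scoring functions and transformations differ.

```python
# Toy character-swap perturbation in the spirit of black-box text attacks;
# DeepWordBug's word scoring and transformations are different.
import random

def swap_adjacent(word, rng):
    """Swap two adjacent characters, a small edit a human still reads past."""
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

rng = random.Random(0)
sentence = "the service at this restaurant was absolutely wonderful".split()
target_idx = 7                       # assume the model is most sensitive to "wonderful"
sentence[target_idx] = swap_adjacent(sentence[target_idx], rng)
print(" ".join(sentence))
```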


 Conditional Probability Models for Deep Image Compression

 

Deep Neural Networks trained as image auto-encoders have recently emerged as a promising direction for advancing the state of the art in image compression. The key challenge in learning such networks is twofold: to deal with quantization, and to control the trade-off between reconstruction error (d…
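
The trade-off described is usually trained with a rate-distortion objective: distortion plus lambda times an entropy estimate of the quantized code under the learned probability model. A schematic version of that loss, with all tensors and lambda as placeholders:

```python
# Schematic rate-distortion loss for learned image compression:
#   L = distortion(x, x_hat) + lambda * rate,
# where the rate is the estimated bits of the quantized symbols under a
# learned probability model. All tensors and lambda here are placeholders.
import torch

x     = torch.rand(1, 3, 32, 32)      # original image
x_hat = torch.rand(1, 3, 32, 32)      # decoder reconstruction (placeholder)
lam   = 0.01                          # rate-distortion trade-off (placeholder)

symbols = torch.randint(0, 8, (1, 8, 8))                       # quantized latent symbols
probs   = torch.softmax(torch.randn(1, 8, 8, 8), dim=-1)       # learned p(value) per position
p_sym   = probs.gather(-1, symbols.unsqueeze(-1)).squeeze(-1)  # probability of each actual symbol

distortion = torch.mean((x - x_hat) ** 2)   # MSE distortion
rate = -torch.log2(p_sym).sum()             # estimated bits under the probability model
loss = distortion + lam * rate
print(float(distortion), float(rate), float(loss))
```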


 Deep Learning for Sampling from Arbitrary Probability Distributions

This paper proposes a fully connected neural network model to map samples from a uniform distribution to samples of any explicitly known probability density function. During the training, the Jensen-Shannon divergence between the distribution of the model’s output and the target distribution …
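
The training criterion named in the excerpt, the Jensen-Shannon divergence, has a direct form for discretized distributions; a small reference implementation (the paper estimates it from samples during training):

```python
# Jensen-Shannon divergence between two discrete distributions, the
# criterion named in the abstract; the paper estimates it from samples.
import numpy as np

def js_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float); p /= p.sum()
    q = np.asarray(q, dtype=float); q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

uniform = np.ones(10) / 10
target  = np.exp(-0.5 * ((np.arange(10) - 4.5) / 2.0) ** 2)   # discretized Gaussian
print(js_divergence(uniform, target))
```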


 Arhuaco: Deep Learning and Isolation Based Security for Distributed High-Throughput Computing

  

Grid computing systems require innovative methods and tools to identify cybersecurity incidents and perform autonomous actions, i.e. without administrator intervention. They also require methods to isolate and trace job payload activity in order to protect users and find evidence of malicious behavi…


 Deep Episodic Memory: Encoding, Recalling, and Predicting Episodic Experiences for Robot Action Execution

We present a novel deep neural network architecture for representing robot experiences in an episodic-like memory which facilitates encoding, recalling, and predicting action experiences. Our proposed unsupervised deep episodic memory model 1) encodes observed actions in a latent vector space and, …


 Deep learning is a good steganalysis tool when embedding key is reused for different images, even if there is a cover source-mismatch

  

Since the BOSS competition in 2010, most steganalysis approaches have used a learning methodology involving two steps: feature extraction, such as the Rich Models (RM), for the image representation, and use of the Ensemble Classifier (EC) for the learning step. In 2015, Qian et al. showed that the u…


 Generalizing Hamiltonian Monte Carlo with Neural Networks

We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for m…
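
The method generalizes Hamiltonian Monte Carlo, whose core is a leapfrog integrator plus a Metropolis accept/reject step. For reference, a plain (non-neural) HMC step for a standard Gaussian target; step size and trajectory length are arbitrary choices here:

```python
# Plain Hamiltonian Monte Carlo step (leapfrog + Metropolis correction) for a
# standard Gaussian target; the paper replaces parts of this with learned,
# neural-network-parameterized transformations.
import numpy as np

def grad_neg_log_p(x):                # target: standard Gaussian, -log p = x^2/2 + const
    return x

def hmc_step(x, rng, step=0.1, n_leapfrog=20):
    p = rng.normal(size=x.shape)                  # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * step * grad_neg_log_p(x_new)   # half step in momentum
    for _ in range(n_leapfrog):
        x_new += step * p_new                     # full step in position
        p_new -= step * grad_neg_log_p(x_new)     # full step in momentum
    p_new += 0.5 * step * grad_neg_log_p(x_new)   # undo half of the last momentum step
    h_old = 0.5 * (x @ x) + 0.5 * (p @ p)
    h_new = 0.5 * (x_new @ x_new) + 0.5 * (p_new @ p_new)
    return x_new if rng.random() < np.exp(h_old - h_new) else x

rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(1000):
    x = hmc_step(x, rng)
    samples.append(x)
print(np.std(samples, axis=0))   # should approach 1 for a standard Gaussian
```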


 A guide to convolution arithmetic for deep learning

 

We introduce a guide to help deep learning practitioners understand and manipulate convolutional neural network architectures. The guide clarifies the relationship between various properties (input shape, kernel shape, zero padding, strides and output shape) of convolutional, pooling and transposed…
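
The central relationship the guide works through is the output-size formula for a convolutional (or pooling) layer, o = floor((i + 2p - k) / s) + 1, for input size i, kernel size k, zero padding p and stride s:

```python
# Output-size relation for a convolutional or pooling layer:
#   o = floor((i + 2p - k) / s) + 1
# for input size i, kernel size k, zero padding p and stride s.
def conv_output_size(i, k, p=0, s=1):
    return (i + 2 * p - k) // s + 1

print(conv_output_size(32, 3, p=1, s=1))   # 32  ("same" padding)
print(conv_output_size(32, 3, p=0, s=1))   # 30
print(conv_output_size(32, 3, p=1, s=2))   # 16
print(conv_output_size(7, 3, p=0, s=2))    # 3
```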


 Australia’s long-term electricity demand forecasting using deep neural networks

 

Accurate prediction of long-term electricity demand has a significant role in demand side management and electricity network planning and operation. Demand over-estimation results in over-investment in network assets, driving up the electricity prices, while demand under-estimation may lead to unde…


 Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks

In an effort to understand the meaning of the intermediate representations captured by deep networks, recent papers have tried to associate specific semantic concepts to individual neural network filter responses, where interesting correlations are often found, largely by focusing on extremal filte…


 Near Maximum Likelihood Decoding with Deep Learning

A novel and efficient neural decoder algorithm is proposed. The proposed decoder is based on the neural Belief Propagation algorithm and the Automorphism Group. By combining neural belief propagation with permutations from the Automorphism Group we achieve near maximum likelihood performance for Hi…


 Bounding and Counting Linear Regions of Deep Neural Networks

In this paper, we study the representational power of deep neural networks (DNN) that belong to the family of piecewise-linear (PWL) functions, based on PWL activation units such as rectifier or maxout. We investigate the complexity of such networks by studying the number of linear regions of the P…
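
For a single hidden layer of n ReLUs on d inputs, each unit's activation boundary is a hyperplane, so the number of linear regions is at most the number of regions n hyperplanes cut R^d into, i.e. sum over j <= d of C(n, j). A quick empirical comparison of that classical bound with random probing; the bounds the paper develops for deeper networks are tighter and more involved.

```python
# Classical upper bound sum_{j<=d} C(n, j) on the linear regions of a single
# ReLU layer (n units, d inputs), compared with activation patterns found by
# random probing; the paper's bounds for deeper networks are more involved.
import numpy as np
from math import comb

def region_bound(n, d):
    return sum(comb(n, j) for j in range(min(n, d) + 1))

rng = np.random.default_rng(0)
n, d = 6, 2
W, b = rng.normal(size=(n, d)), rng.normal(size=n)

points = rng.normal(scale=5.0, size=(200_000, d))
patterns = np.unique((points @ W.T + b) > 0, axis=0)    # distinct activation patterns

print("regions found by probing:", len(patterns))
print("theoretical upper bound :", region_bound(n, d))  # 1 + 6 + 15 = 22
```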


 Design Exploration of Hybrid CMOS-OxRAM Deep Generative Architectures

   

Deep Learning and its applications have gained tremendous interest recently in both academia and industry. Restricted Boltzmann Machines (RBMs) offer a key methodology to implement deep learning paradigms. This paper presents a novel approach for realizing hybrid CMOS-OxRAM based deep generative mo…
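
Independent of the hardware substrate, the learning rule an RBM implementation has to realize is contrastive divergence. Below is a compact software reference for one CD-1 update, with layer sizes and learning rate as placeholders and biases omitted for brevity; the paper's contribution is the hybrid CMOS-OxRAM realization, not this rule.

```python
# Software reference for one contrastive-divergence (CD-1) update of an RBM,
# the learning rule a hardware RBM implementation has to realize; layer sizes
# and learning rate are placeholders, biases omitted for brevity.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 16, 8, 0.1
W = 0.01 * rng.normal(size=(n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v0 = (rng.random(n_visible) < 0.5).astype(float)   # one binary training vector

# Positive phase: sample hidden units given the data.
h0_prob = sigmoid(v0 @ W)
h0 = (rng.random(n_hidden) < h0_prob).astype(float)

# Negative phase: one Gibbs step back to visibles and up to hiddens.
v1_prob = sigmoid(W @ h0)
h1_prob = sigmoid(v1_prob @ W)

# CD-1 weight update: data correlation minus reconstruction correlation.
W += lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))
print(W.shape, float(np.abs(W).mean()))
```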