Topic Tag: DNN

 HDR image reconstruction from a single exposure using deep CNNs

Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images multiple exposures are typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in or…
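
The core setting can be sketched in a few lines: find the saturated pixels, and let a CNN fill in only those regions. This is an illustrative reduction, not the paper's method; saturation_mask, predict_fn, and the 0.95 threshold are all hypothetical.

import numpy as np

def saturation_mask(img, threshold=0.95):
    # img: float image in [0, 1], shape (H, W, 3); a pixel counts as
    # saturated if any channel is at or above the threshold.
    return img.max(axis=-1) >= threshold

def reconstruct_hdr(ldr, predict_fn, threshold=0.95):
    # Keep well-exposed pixels from the input; use the CNN prediction
    # (predict_fn, a stand-in for the paper's network) only where
    # information was lost to saturation.
    mask = saturation_mask(ldr, threshold)
    prediction = predict_fn(ldr)
    return np.where(mask[..., None], prediction, ldr)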


 Deep Learning for Tumor Classification in Imaging Mass Spectrometry

Motivation: Tumor classification using Imaging Mass Spectrometry (IMS) data has a high potential for future applications in pathology. Due to the complexity and size of the data, automated feature extraction and classification steps are required to fully process the data. Deep learning offers an ap…


 Unified Backpropagation for Multi-Objective Deep Learning

A common practice in most deep convolutional architectures is to employ fully-connected layers followed by a Softmax activation to minimize cross-entropy loss for classification. Recent studies show that substituting the Softmax objective, or adding to it, in the cost functions of …
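
For reference, the objective the abstract starts from, Softmax followed by cross-entropy over the final fully-connected layer's scores, in a minimal numerically stable sketch:

import numpy as np

def softmax_cross_entropy(logits, label):
    # logits: (num_classes,) raw scores from the last fully-connected
    # layer; label: index of the true class.
    shifted = logits - logits.max()              # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

print(softmax_cross_entropy(np.array([2.0, 0.5, -1.0]), 0))  # ≈ 0.24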


 Distributed Deep Transfer Learning by Basic Probability Assignment

Transfer learning is a popular practice in deep neural networks, but fine-tuning a large number of parameters is a hard task due to the complex wiring of neurons between splitting layers and the imbalanced distributions of data in the pretrained and transferred domains. The reconstruction of the original w…


 Building effective deep neural network architectures one feature at a time

Successful training of convolutional neural networks is often associated with sufficiently deep architectures composed of large numbers of features. These networks typically rely on a variety of regularization and pruning techniques to converge to less redundant states. We introduce a novel bottom-u…


 Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition

We propose a method (TT-GP) for approximate inference in Gaussian Process (GP) models. We build on previous scalable GP research including stochastic variational inference based on inducing inputs, kernel interpolation, and structure exploiting algebra. The key idea of our method is to use Tensor T…
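
The Tensor Train format the method builds on represents a d-way tensor by one small core per mode; a toy sketch of evaluating a single entry (the cores here are random, purely for illustration, not the paper's implementation):

import numpy as np

def tt_entry(cores, index):
    # Each core G_k has shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1;
    # an entry is a product of one matrix slice per core.
    result = np.ones((1, 1))
    for core, i in zip(cores, index):
        result = result @ core[:, i, :]
    return result[0, 0]

rng = np.random.default_rng(0)
cores = [rng.normal(size=(1, 4, 2)),    # mode sizes (4, 5, 6),
         rng.normal(size=(2, 5, 3)),    # TT-ranks (1, 2, 3, 1)
         rng.normal(size=(3, 6, 1))]
print(tt_entry(cores, (0, 2, 4)))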


 Quantum-assisted learning of hardware-embedded probabilistic graphical models

Mainstream machine learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up …


 Recent Advances in Convolutional Neural Networks

In the last few years, deep learning has led to very good performance on a variety of problems, such as visual recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks have been most extensively studied. Leveraging…


 Data-Free Knowledge Distillation for Deep Neural Networks

Recent advances in model compression have provided procedures for compressing large neural networks to a fraction of their original size while retaining most if not all of their accuracy. However, all of these approaches rely on access to the original training set, which might not always be possibl…
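
As background, classical knowledge distillation (Hinton et al.) trains the student on the teacher's temperature-softened outputs; the data-free variant described here changes where the inputs come from, not, to a first approximation, this loss. A sketch, with the temperature value chosen arbitrarily:

import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Cross-entropy between the teacher's and student's softened
    # output distributions; T > 1 exposes the teacher's relative
    # class-similarity information.
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(p_teacher * log_p_student).sum()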


 Deep Extreme Multi-label Learning

Extreme multi-label learning (XML) or classification has been a practical and important problem since the boom of big data. The main challenge lies in the exponential label space which involves $2^L$ possible label sets when the label dimension $L$ is very large, e.g., in millions for Wikipedia lab…
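
To make the exponential label space concrete, and to show the standard binary-relevance simplification that treats each of the $L$ labels as an independent sigmoid output (a generic baseline, not the paper's method):

import numpy as np

L = 20
print(2 ** L)    # 1,048,576 possible label subsets already at L = 20

def predict_labels(scores, threshold=0.5):
    # Reduce the 2^L subset problem to L independent binary decisions.
    probs = 1.0 / (1.0 + np.exp(-scores))
    return probs >= threshold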


 Meta-Learning via Feature-Label Memory Network

Deep learning typically requires training a very capable architecture using large datasets. However, many important learning problems demand the ability to draw valid inferences from small datasets, and such problems pose a particular challenge for deep learning. In this regard, various researc…


 Machine Learning as Statistical Data Assimilation

We identify a strong equivalence between neural network based machine learning (ML) methods and the formulation of statistical data assimilation (DA), known to be a problem in statistical physics. DA, as used widely in physical and biological sciences, systematically transfers information in observ…
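
The equivalence is easiest to see from a standard weak-constraint data-assimilation cost (notation assumed here, not necessarily the paper's): the measurement-misfit term plays the role of a training loss and the model-error term that of a regularizer tied to the dynamics $f$,

$$A(x_{0:T}) = \sum_{t=0}^{T} \tfrac{1}{2}\bigl(y_t - h(x_t)\bigr)^{\top} R^{-1} \bigl(y_t - h(x_t)\bigr) + \sum_{t=0}^{T-1} \tfrac{1}{2}\bigl(x_{t+1} - f(x_t)\bigr)^{\top} Q^{-1} \bigl(x_{t+1} - f(x_t)\bigr)$$

where $y_t$ are observations, $h$ the measurement function, and $R$, $Q$ noise covariances.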


 Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data

One of the defining properties of deep learning is that models are chosen to have many more parameters than available training data. In light of this capacity for overfitting, it is remarkable that simple algorithms like SGD reliably return solutions with low test error. One roadblock to explaining…
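
For context, this line of work is known to compute PAC-Bayes generalization bounds; one classical form (notation assumed) states that with probability at least $1-\delta$ over $m$ training samples,

$$\mathrm{kl}\bigl(\hat{e}(Q)\,\|\,e(Q)\bigr) \le \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{m}}{\delta}}{m}$$

where $Q$ is a (stochastic) posterior over network weights, $P$ a prior fixed before seeing the data, $\hat{e}$ and $e$ the empirical and true error rates, and $\mathrm{kl}$ the binary KL divergence.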


 Asynchronous Decentralized Parallel Stochastic Gradient Descent

Recent work shows that decentralized parallel stochastic gradient descent (D-PSGD) can outperform its centralized counterpart both theoretically and practically. While asynchronous parallelism is a powerful technique for improving the efficiency of parallelism in distributed machine learning platforms…
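
A minimal sketch of the synchronous D-PSGD update for one worker helps fix ideas (uniform mixing weights are an arbitrary choice here; the asynchronous variant in the abstract relaxes the need to wait for all neighbors):

import numpy as np

def dpsgd_step(params, neighbor_params, grad, lr=0.01):
    # Average parameters over the local neighborhood on the
    # communication graph, then take a local SGD step.
    everyone = [params] + list(neighbor_params)
    averaged = sum(everyone) / len(everyone)
    return averaged - lr * grad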


 Characterization of Gradient Dominance and Regularity Conditions for Neural Networks

The past decade has witnessed a successful application of deep learning to solving many challenging problems in machine learning and artificial intelligence. However, the loss functions of deep neural networks (especially nonlinear networks) are still far from being well understood from a theoretic…
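
Gradient dominance (the Polyak-Łojasiewicz condition) is the key property referenced in the title; in one common form (constant $\mu$ and notation assumed), a loss $f$ with minimum $f^\star$ satisfies

$$\|\nabla f(x)\|^2 \ge 2\mu\,\bigl(f(x) - f^\star\bigr) \quad \text{for all } x,$$

which forces every stationary point to be a global minimum and yields linear convergence of gradient descent despite nonconvexity.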


 Photo-Guided Exploration of Volume Data Features

In this work, we pose the question of whether, given qualitative information such as a sample target image as input, one can produce a rendered image of scientific data that is similar to the target. The algorithm resulting from our research allows one to ask whether featur…


 DNN and CNN with Weighted and Multi-task Loss Functions for Audio Event Detection

This report presents our audio event detection system submitted for Task 2, “Detection of rare sound events”, of the DCASE 2017 challenge. The proposed system is based on convolutional neural networks (CNNs) and deep neural networks (DNNs) coupled with novel weighted and multi-task loss fun…
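
A weighted loss in this spirit up-weights the rare positive frames so the network cannot trivially predict "no event" everywhere; a generic sketch (the exact weighting in the report may differ, and pos_weight here is arbitrary):

import numpy as np

def weighted_bce(pred, target, pos_weight=10.0, eps=1e-7):
    # Per-frame binary cross-entropy with the positive (event) class
    # up-weighted; pred and target are arrays of frame probabilities
    # and 0/1 labels.
    pred = np.clip(pred, eps, 1 - eps)
    loss = -(pos_weight * target * np.log(pred)
             + (1 - target) * np.log(1 - pred))
    return loss.mean()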


 Stochastic Weighted Function Norm Regularization

Deep neural networks (DNNs) have become increasingly important due to their excellent empirical performance on a wide range of problems. However, regularization is generally achieved by indirect means, largely due to the complex set of functions defined by a network and the difficulty in measuring …
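
One direct way to regularize in function space rather than weight space is a Monte-Carlo estimate of a weighted $L^2$ function norm; a sketch under assumed choices (the sampling measure and weight are illustrative, not the paper's):

import numpy as np

def function_norm_penalty(f, sample_inputs, weight=1e-3):
    # ||f||^2 ≈ mean of f(x)^2 over inputs x drawn from some measure;
    # added to the training loss as a stochastic regularizer.
    outputs = np.array([f(x) for x in sample_inputs])
    return weight * np.mean(outputs ** 2)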


 Calibrated Boosting-Forest

Excellent ranking power along with well-calibrated probability estimates are needed in many classification tasks. In this paper, we introduce a technique, Calibrated Boosting-Forest, that captures both. This novel technique is an ensemble of gradient boosting machines that can support both continuou…
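
A generic "boosting plus calibration" pipeline conveys the idea (the paper's ensemble construction is more elaborate than this scikit-learn sketch):

from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import GradientBoostingClassifier

# Gradient boosting supplies the ranking power; isotonic calibration
# on held-out folds repairs the probability estimates.
base = GradientBoostingClassifier(n_estimators=200)
model = CalibratedClassifierCV(base, method="isotonic", cv=5)
# Usage: model.fit(X_train, y_train); model.predict_proba(X_test)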


 Domain Randomization and Generative Models for Robotic Grasping

Deep learning-based robotic grasping has made significant progress over the past several years thanks to algorithmic improvements and increased data availability. However, state-of-the-art models are often trained on as few as hundreds or thousands of unique object instances, and as a result generalizat…


 Multi-task Domain Adaptation for Deep Learning of Instance Grasping from Simulation

Learning-based approaches to robotic manipulation are limited by the scalability of data collection and accessibility of labels. In this paper, we present a multi-task domain adaptation framework for instance grasping in cluttered scenes by utilizing simulated robot experiments. Our neural network …


 Feature versus Raw Sequence: Deep Learning Comparative Study on Predicting Pre-miRNA

Should we input known genome sequence features or the raw sequence itself into a deep learning framework? As deep learning becomes more popular in various applications, researchers often question whether to engineer features or use raw sequences as input. To answer this question, we study the pr…
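
The "raw sequence" side of the comparison typically means one-hot encoding each nucleotide and letting the network learn its own features; a minimal sketch:

import numpy as np

ALPHABET = "ACGU"   # RNA nucleotides, as in pre-miRNA sequences

def one_hot(seq):
    # (sequence length) x (alphabet size) binary matrix.
    encoding = np.zeros((len(seq), len(ALPHABET)), dtype=np.float32)
    for i, base in enumerate(seq):
        encoding[i, ALPHABET.index(base)] = 1.0
    return encoding

print(one_hot("GAUC"))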


 Spontaneous Symmetry Breaking in Neural Networks

We propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory. Correlations between the weights within the same layer can be described by symmetries in that layer, and networks generalize better if such symmetries are broken to reduce t…


 Intention-Net: Integrating Planning and Deep Learning for Goal-Directed Autonomous Navigation

How can a delivery robot navigate reliably to a destination in a new office building, with minimal prior information? To tackle this challenge, this paper introduces a two-level hierarchical approach, which integrates model-free deep learning and model-based path planning. At the low level, a neura…


 Generalization in Deep Learning

This paper explains why deep learning can generalize well, despite large capacity and possible algorithmic instability, nonrobustness, and sharp minima, effectively addressing an open problem in the literature. Based on our theoretical insight, this paper also proposes a family of new regularizatio…