Topic Tag: IMAGENET

 Swish: a Self-Gated Activation Function

The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely used activation function is the Rectified Linear Unit (ReLU). Although various alternatives to ReLU have been proposed, none have man…
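
The excerpt cuts off before the definition: Swish is f(x) = x · sigmoid(βx), where β is either a constant or a trainable per-layer parameter. A minimal PyTorch sketch:

    import torch

    def swish(x, beta=1.0):
        # Swish: x * sigmoid(beta * x); beta = 1 recovers the SiLU special case.
        return x * torch.sigmoid(beta * x)

    x = torch.linspace(-5.0, 5.0, steps=11)
    print(swish(x))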


A systematic study of the class imbalance problem in convolutional neural networks

In this study, we systematically investigate the impact of class imbalance on classification performance of convolutional neural networks (CNNs) and compare frequently used methods to address the issue. Class imbalance is a common problem that has been comprehensively studied in classical machine l…
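
One of the frequently used remedies such a study compares is re-weighting the loss by inverse class frequency; the class counts below are made up for illustration. A PyTorch sketch:

    import torch
    import torch.nn as nn

    # Hypothetical counts for an imbalanced 3-class problem.
    counts = torch.tensor([1000.0, 100.0, 10.0])
    weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights

    criterion = nn.CrossEntropyLoss(weight=weights)  # rare classes weigh more
    logits, labels = torch.randn(8, 3), torch.randint(0, 3, (8,))
    print(criterion(logits, labels))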


Energy-efficient Amortized Inference with Cascaded Deep Classifiers

Deep neural networks have been remarkably successful in various AI tasks but often incur high computation and energy costs, which is prohibitive for energy-constrained applications such as mobile sensing. We address this problem by proposing a novel framework that optimizes the prediction accuracy and energy cost simultan…
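
A cascade of this kind can be sketched with a confidence rule deciding whether a cheaper classifier's answer suffices; the paper optimizes the accuracy/energy trade-off jointly, so the fixed threshold and energy units below are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def cascaded_predict(models, costs, x, threshold=0.9):
        # Run classifiers from cheapest to most expensive; stop as soon as
        # the softmax confidence clears the threshold. Returns (pred, energy).
        spent = 0.0
        for i, (model, cost) in enumerate(zip(models, costs)):
            spent += cost
            probs = F.softmax(model(x), dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.item() >= threshold or i == len(models) - 1:
                return pred.item(), spent

    cheap = torch.nn.Linear(16, 10)
    big = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 10))
    print(cascaded_predict([cheap, big], costs=[1.0, 10.0], x=torch.randn(1, 16)))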


Projection Based Weight Normalization for Deep Neural Networks

Optimizing deep neural networks (DNNs) often suffers from ill-conditioning. We observe that the scaling-based weight-space symmetry of rectified nonlinear networks causes this negative effect. Therefore, we propose to constrain the incoming weights of each neuron to be unit-n…
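
The projection itself is simple to state: after each gradient step, renormalize each neuron's incoming weight vector to unit length, i.e., project it back onto the unit sphere. A sketch for a linear layer (the paper develops a full optimization scheme around this step):

    import torch

    @torch.no_grad()
    def project_unit_norm(linear):
        # Each row of weight is one neuron's incoming weights; rescale to unit L2 norm.
        w = linear.weight  # (out_features, in_features)
        w.div_(w.norm(dim=1, keepdim=True).clamp_min(1e-12))

    layer = torch.nn.Linear(8, 4)
    # ... optimizer.step() would run here ...
    project_unit_norm(layer)
    print(layer.weight.norm(dim=1))  # all ~1.0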


DeepXplore: Automated Whitebox Testing of Deep Learning Systems

Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains including self-driving cars and malware detection, where the correctness and predictability of a system’s behavior for corner case inputs are of great importance. Existing DL testing depends heavily …
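
DeepXplore's guidance signal is neuron coverage: the fraction of neurons driven above an activation threshold by the test inputs. A simplified version of that measure (the paper applies it to scaled activations and pairs it with differential testing across multiple DNNs):

    import torch

    def neuron_coverage(layer_activations, threshold=0.5):
        # Fraction of neurons that exceed `threshold` on at least one input.
        # `layer_activations` is a list of (batch, neurons) tensors.
        covered, total = 0, 0
        for act in layer_activations:
            fired = (act > threshold).any(dim=0)
            covered += int(fired.sum())
            total += fired.numel()
        return covered / total

    acts = [torch.rand(32, 100), torch.rand(32, 50)]
    print(neuron_coverage(acts))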


Neural Optimizer Search with Reinforcement Learning

We present an approach to automate the process of discovering optimization methods, with a focus on deep learning architectures. We train a Recurrent Neural Network controller to generate a string in a domain-specific language that describes a mathematical update equation based on a list of primiti…
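
One concrete update rule reported for this search is PowerSign, which scales the gradient up when its sign agrees with a running average of past gradients and down when it disagrees. A sketch of that update (alpha = e and the decay beta = 0.9 are assumed defaults here):

    import math
    import torch

    def powersign_step(w, grad, m, lr=0.1, beta=0.9, alpha=math.e):
        # m is an exponential moving average of past gradients.
        m.mul_(beta).add_(grad, alpha=1 - beta)
        # w <- w - lr * alpha^(sign(g) * sign(m)) * g
        w.sub_(lr * alpha ** (torch.sign(grad) * torch.sign(m)) * grad)

    w, m, g = torch.randn(5), torch.zeros(5), torch.randn(5)
    powersign_step(w, g, m)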


Adaptive Neural Networks for Efficient Inference

We present an approach to adaptively utilize deep neural networks in order to reduce the evaluation time on new examples without loss of accuracy. Rather than attempting to redesign or approximate existing networks, we propose two schemes that adaptively utilize networks. We first pose an adaptive …
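
One way to adaptively utilize a network is to attach a cheap auxiliary classifier partway through and exit early when it is confident. An illustrative early-exit module; the paper learns its adaptive policy, whereas a fixed softmax threshold stands in here:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EarlyExitNet(nn.Module):
        # Confident examples stop at the cheap head; hard ones go deeper.
        def __init__(self, d=16, classes=10):
            super().__init__()
            self.block1 = nn.Sequential(nn.Linear(d, 32), nn.ReLU())
            self.exit1 = nn.Linear(32, classes)   # cheap auxiliary head
            self.block2 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
            self.exit2 = nn.Linear(64, classes)   # full-depth head

        def forward(self, x, threshold=0.9):
            h = self.block1(x)
            p1 = F.softmax(self.exit1(h), dim=-1)
            if p1.max().item() >= threshold:      # confident: exit early
                return p1
            return F.softmax(self.exit2(self.block2(h)), dim=-1)

    print(EarlyExitNet()(torch.randn(1, 16)).shape)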


Deep Fruit Detection in Orchards

An accurate and reliable image-based fruit detection system is critical for supporting higher-level agricultural tasks such as yield mapping and robotic harvesting. This paper presents the use of a state-of-the-art object detection framework, Faster R-CNN, in the context of fruit detection in orchar…
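
For context, Faster R-CNN is available off the shelf; a detector like the paper's could start from pretrained weights and be fine-tuned on orchard imagery. The torchvision model and random image below are stand-ins, not the paper's setup:

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    image = torch.rand(3, 480, 640)        # stand-in for an orchard photo
    with torch.no_grad():
        out = model([image])[0]            # dict with boxes, labels, scores
    print(out["boxes"].shape, out["scores"][:5])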


EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples – a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify. Existing methods for crafting adversarial examples are based on $L_2$ a…
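
The truncation hides the contrast: prior attacks penalize $L_2$ distortion, while EAD minimizes an elastic-net combination that adds an $L_1$ term, $c \cdot f(x) + \beta \lVert x - x_0 \rVert_1 + \lVert x - x_0 \rVert_2^2$. The distortion term alone, as a sketch:

    import torch

    def elastic_net_distortion(x_adv, x0, beta=1e-2):
        # beta * ||x_adv - x0||_1 + ||x_adv - x0||_2^2
        delta = x_adv - x0
        return beta * delta.abs().sum() + delta.pow(2).sum()

    x0 = torch.rand(3, 32, 32)
    x_adv = x0 + 0.01 * torch.randn_like(x0)
    print(elastic_net_distortion(x_adv, x0))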


Dual Discriminator Generative Adversarial Nets

We propose in this paper a novel approach to tackle the problem of mode collapse encountered in generative adversarial networks (GANs). Our idea is intuitive yet proves very effective, especially in addressing some key limitations of GANs. In essence, it combines the Kullback-Leibler (KL) and re…
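
The paper combines the KL divergence with its reverse, and the reason that helps is their asymmetry: forward KL is large when the model drops a mode of the data, reverse KL is large when the model puts mass where the data has little. A small numeric illustration:

    import torch

    def kl(p, q):
        # KL(p || q) for discrete distributions.
        return (p * (p / q).log()).sum()

    # Two-mode target p vs a collapsed, one-mode approximation q.
    p = torch.tensor([0.4999, 0.4999, 0.0002])
    q = torch.tensor([0.9800, 0.0100, 0.0100])
    print(kl(p, q))  # forward KL: large because q misses p's second mode
    print(kl(q, p))  # reverse KL: penalizes q's mass where p is near zero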


CuRTAIL: ChaRacterizing and Thwarting AdversarIal deep Learning

This paper proposes CuRTAIL, an end-to-end computing framework for characterizing and thwarting adversarial space in the context of Deep Learning (DL). The framework protects deep neural networks against adversarial samples, which are perturbed inputs carefully crafted by malicious entities to misl…
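
For a concrete sense of the threat model, the "perturbed inputs carefully crafted by malicious entities" can be produced with standard attacks such as FGSM. The sketch below crafts one; this is the kind of attack CuRTAIL defends against, not CuRTAIL's own machinery, and the tiny linear model is a placeholder:

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, eps=0.03):
        # Fast Gradient Sign Method: perturb x along the sign of the loss gradient.
        x = x.clone().requires_grad_(True)
        F.cross_entropy(model(x), label).backward()
        return (x + eps * x.grad.sign()).detach()

    model = torch.nn.Linear(784, 10)        # placeholder classifier
    x, y = torch.rand(1, 784), torch.tensor([3])
    x_adv = fgsm(model, x, y)
    print((x_adv - x).abs().max())          # perturbation bounded by eps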


Learned Optimizers that Scale and Generalize

Learning to learn has emerged as an important direction for achieving artificial intelligence. Two of the primary barriers to its adoption are an inability to scale to larger problems and a limited ability to generalize to new tasks. We introduce a learned gradient descent optimizer that generalize…
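
The core object here is an optimizer that is itself a trained network, applied coordinate-wise to the optimizee's gradients. A shape-level sketch only: the paper's optimizer is an RNN with careful input/output scaling, meta-trained by backpropagating through unrolled updates, all of which is omitted below:

    import torch
    import torch.nn as nn

    class LearnedOptimizer(nn.Module):
        # A tiny MLP maps each parameter's gradient to its update.
        def __init__(self, hidden=8):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))

        @torch.no_grad()
        def step(self, params):
            for p in params:
                if p.grad is not None:
                    update = self.net(p.grad.reshape(-1, 1)).reshape(p.shape)
                    p.add_(update, alpha=0.01)

    model = nn.Linear(4, 2)                     # the "optimizee"
    model(torch.randn(3, 4)).pow(2).mean().backward()
    LearnedOptimizer().step(model.parameters())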


Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights

This paper presents incremental network quantization (INQ), a novel method that efficiently converts any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods…
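
The target weight form is easy to state: every weight becomes a signed power of two or exactly zero, so multiplications reduce to bit shifts. A simplified quantizer; INQ's actual procedure quantizes the weights incrementally in groups, retraining the remaining full-precision weights after each group, and the exponent range below is an arbitrary choice:

    import torch

    def quantize_pow2(w, exp_min=-6, exp_max=-1, zero_thresh=2.0 ** -7):
        # Snap each weight to a signed power of two by rounding its log2
        # exponent, or to zero if its magnitude is negligible.
        sign = torch.sign(w)
        exp = torch.round(torch.log2(w.abs().clamp_min(1e-32)))
        q = sign * 2.0 ** exp.clamp(exp_min, exp_max)
        return torch.where(w.abs() < zero_thresh, torch.zeros_like(w), q)

    w = 0.1 * torch.randn(6)
    print(torch.stack([w, quantize_pow2(w)]))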


DelugeNets: Deep Networks with Efficient and Flexible Cross-layer Information Inflows

Deluge Networks (DelugeNets) are deep neural networks which efficiently facilitate massive cross-layer information inflows from preceding layers to succeeding layers. The connections between layers in DelugeNets are established through cross-layer depthwise convolutional layers with learnable filte…
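
The distinctive piece is the cross-layer depthwise connection: output channel c is computed from channel c of every preceding layer, with one learnable filter per channel, which keeps the massive fan-in cheap. A toy version; the real layers sit inside convolutional blocks and the details differ:

    import torch
    import torch.nn as nn

    class CrossLayerDepthwise(nn.Module):
        # Channel c of the output mixes channel c of all preceding layers.
        def __init__(self, n_prev, channels):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(channels, n_prev) / n_prev ** 0.5)

        def forward(self, feats):                # feats: list of (B, C, H, W)
            stacked = torch.stack(feats, dim=2)  # (B, C, n_prev, H, W)
            w = self.weight[None, :, :, None, None]
            return (stacked * w).sum(dim=2)      # (B, C, H, W)

    feats = [torch.randn(2, 8, 4, 4) for _ in range(3)]
    print(CrossLayerDepthwise(n_prev=3, channels=8)(feats).shape)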