
Recursive Binary Neural Network Learning Model with 2.28b/Weight Storage Requirement






    This paper presents a storage-efficient learning model, Recursive Binary Neural Networks, for sensing devices with limited on-chip data storage (on the order of 100 kilobytes or less). The main idea of the proposed model is to recursively recycle the data storage of synaptic weights (parameters) during training. This lets a device with a given storage constraint train and instantiate an on-chip neural network classifier with more weights and fewer off-chip storage accesses, yielding higher classification accuracy, shorter training time, lower energy dissipation, and a smaller on-chip storage requirement. We verified the training model with deep neural network classifiers on the permutation-invariant MNIST benchmark. Our model uses only 2.28 bits per weight and, under the same data storage constraint, achieves ~1% lower classification error than the conventional binary-weight learning model, which still requires 8 to 16 bits of storage per weight. To reach a similar classification error, the conventional binary model needs ~4x more weight storage than the proposed model.
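The storage arithmetic behind the recycling idea can be illustrated with a short sketch. This is not the authors' code, and the function name and the 16-bit training precision are assumptions for illustration: each round trains as many new weights as fit in the free storage at full precision, binarizes them down to 1 bit each, and reuses the freed bits in the next round.

```python
# Hypothetical sketch of the storage-recycling idea from the abstract.
# Assumptions (not from the paper): weights train at `train_bits`
# precision and keep exactly 1 bit after binarization.

def effective_bits_per_weight(budget_bits, train_bits=16, steps=4):
    """Estimate storage cost per weight after `steps` recycle rounds.

    budget_bits: total on-chip storage budget, in bits
    train_bits:  precision used while a weight group is being trained
    steps:       number of recursive train-then-binarize rounds
    """
    total_weights = 0
    free = budget_bits
    for _ in range(steps):
        # Train as many new weights as fit in the free storage.
        new = free // train_bits
        if new == 0:
            break
        total_weights += new
        # Binarization leaves 1 bit per trained weight, freeing the
        # other (train_bits - 1) bits of each for the next round.
        free -= new  # net change: -new*train_bits + new*(train_bits-1)
    return budget_bits / total_weights

# One round is the conventional non-recycling case: 16 b/weight.
print(effective_bits_per_weight(1_000_000, steps=1))  # 16.0
# A few recycle rounds already drop well below that.
print(effective_bits_per_weight(1_000_000, steps=4))
```

Under these assumptions the ratio approaches 1 bit per weight as the number of rounds grows (a geometric series with ratio 15/16), which is consistent in spirit with the paper's reported 2.28 b/weight after a finite number of recursions plus bookkeeping overhead.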

    Recursive Binary Neural Network Learning Model with 2.28b/Weight Storage Requirement
    by Tianchan Guan, Xiaoyang Zeng, Mingoo Seok
    https://arxiv.org/pdf/1709.05306v1.pdf
