Recursive Binary Neural Network Learning Model with 2.28b/Weight Storage Requirement

This paper presents a storage-efficient learning model titled Recursive Binary Neural Networks for sensing devices with a limited amount of on-chip data storage, such as less than hundreds of kilobytes. The main idea of the proposed model is to recursively recycle the data storage of synaptic weights (parameters) during training. This enables a device with a given storage constraint to train and instantiate a neural network classifier with a larger number of weights on chip and with fewer off-chip storage accesses, yielding higher classification accuracy, shorter training time, less energy dissipation, and a smaller on-chip storage requirement. We verified the training model with deep neural network classifiers on the permutation-invariant MNIST benchmark. Our model uses only 2.28 bits per weight while, for the same data storage constraint, achieving ~1% lower classification error than the conventional binary-weight learning model, which still requires 8 to 16 bits of storage per weight. To achieve a similar classification error, the conventional binary model requires ~4x more data storage for weights than the proposed model.
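To make the storage argument concrete, here is a minimal sketch (not the paper's algorithm, and the sizes are illustrative) of why binarizing trained weights frees storage that a recursive scheme could recycle: a full-precision weight costs 32 bits, while a sign-binarized weight packs into a single bit.

```python
import numpy as np

# Illustrative only: binarize a trained weight vector and compare storage.
# The paper's model recursively recycles the freed storage to train more
# weights under the same on-chip budget; that loop is not reproduced here.

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)  # 256 full-precision weights

full_bytes = w.nbytes            # 256 weights * 4 bytes = 1024 bytes (32 b/weight)
bits = (w >= 0)                  # sign binarization: map each weight to +1 / -1
packed = np.packbits(bits)       # pack to 1 bit per weight
packed_bytes = packed.nbytes     # 256 bits = 32 bytes

print(full_bytes, packed_bytes)  # prints "1024 32": a 32x storage reduction
```

The paper's reported figure of 2.28 bits per weight is higher than this 1-bit lower bound because training still needs some full-precision state; the recursion amortizes that overhead across the weights already frozen in binary form.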
by Tianchan Guan, Xiaoyang Zeng, Mingoo Seok
https://arxiv.org/pdf/1709.05306v1.pdf