Data-Driven Sparse Structure Selection for Deep Neural Networks

Deep convolutional neural networks have demonstrated extraordinary power on various tasks. However, it is still very challenging to deploy state-of-the-art models in real-world applications due to their high computational complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and effective framework to learn and prune deep models in an end-to-end manner. In our framework, a new type of parameter, a scaling factor, is first introduced to scale the outputs of specific structures, such as neurons, groups or residual blocks. Then we add sparsity regularizations on these factors, and solve this optimization problem by a modified stochastic Accelerated Proximal Gradient (APG) method. By forcing some of the factors to zero, we can safely remove the corresponding structures and thus prune the unimportant parts of a CNN. Compared with other structure selection methods that may need thousands of trials or iterative fine-tuning, our method is trained fully end-to-end in one training pass without bells and whistles. We evaluate our method, Sparse Structure Selection, with several state-of-the-art CNNs, and demonstrate very promising results with adaptive depth and width selection.
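The core mechanism in the abstract can be illustrated in a few lines: scaling factors multiply the outputs of prunable structures, an l1 penalty is placed on those factors, and an accelerated proximal gradient step with soft thresholding drives some factors exactly to zero. The sketch below is not the authors' implementation (they use a modified stochastic APG inside full network training); it is a minimal NumPy toy with an assumed quadratic loss, fixed step size `eta`, sparsity weight `gamma`, and momentum coefficient, just to show why soft thresholding yields exact zeros that mark structures safe to remove.

```python
import numpy as np

def soft_threshold(z, thresh):
    # Proximal operator of the l1 norm: shrinks each scaling factor toward
    # zero and sets small ones exactly to zero, marking structures to prune.
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def apg_step(lam, lam_prev, grad_fn, eta=0.1, gamma=0.5, momentum=0.9):
    # One accelerated proximal gradient update on the scaling factors `lam`.
    # `grad_fn` returns the gradient of the smooth task loss w.r.t. lam.
    z = lam + momentum * (lam - lam_prev)            # momentum look-ahead
    lam_new = soft_threshold(z - eta * grad_fn(z), eta * gamma)
    return lam_new, lam

# Toy stand-in for the task loss: factors are pulled toward `targets`,
# while the l1 penalty pushes weakly-supported factors to exactly zero.
targets = np.array([1.0, 0.0, 0.0, 0.8])
grad = lambda lam: lam - targets                     # grad of 0.5*||lam - t||^2
lam = lam_prev = np.ones(4)
for _ in range(200):
    lam, lam_prev = apg_step(lam, lam_prev, grad)

print(lam)  # factors with zero targets are driven exactly to 0 -> prunable
```

The key point is that, unlike plain SGD on an l1 penalty, the proximal step produces exact zeros, so the corresponding neurons, groups, or residual blocks can be removed without a separate thresholding heuristic.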
Data-Driven Sparse Structure Selection for Deep Neural Networks
by Zehao Huang, Naiyan Wang
https://arxiv.org/pdf/1707.01213v2.pdf