Mixed Precision Training

Deep neural networks have enabled progress in a wide variety of applications. Growing the size of the neural network typically results in improved accuracy. As model sizes grow, the memory and compute requirements for training these models also increase. We introduce a technique to train deep neural networks using half-precision floating-point numbers. In our technique, weights, activations and gradients are stored in IEEE half-precision format. Half-precision floating-point numbers have a limited numerical range compared to single-precision numbers. We propose two techniques to handle this loss of information. First, we recommend maintaining a single-precision copy of the weights that accumulates the gradients after each optimizer step. This single-precision copy is rounded to half-precision format during training. Second, we propose scaling the loss appropriately to handle the loss of information with half-precision gradients. We demonstrate that this approach works for a wide variety of models including convolutional neural networks, recurrent neural networks and generative adversarial networks. This technique works for large-scale models with more than 100 million parameters trained on large datasets. Using this approach, we can reduce the memory consumption of deep learning models by nearly 2x. On future processors, we can also expect a significant computation speedup using half-precision hardware units.
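The two remedies described in the abstract (an fp32 master copy of the weights, and loss scaling) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation; `LOSS_SCALE`, `LR`, `sgd_step`, and the hand-picked gradient values are assumptions chosen purely to make the fp16 underflow and rounding effects visible.

```python
import numpy as np

LOSS_SCALE = 1024.0  # assumed constant; the scale factor is a tunable choice
LR = 1.0             # learning rate chosen purely for illustration

def sgd_step(master_w, scaled_grad_fp16):
    """One optimizer step: unscale the fp16 gradient in fp32, update the
    fp32 master copy, and return an fp16 rounding of the updated weights
    for the next forward/backward pass."""
    grad = scaled_grad_fp16.astype(np.float32) / LOSS_SCALE
    master_w -= LR * grad
    return master_w, master_w.astype(np.float16)

# Loss scaling: a gradient of 1e-8 underflows to zero in fp16,
# but survives once the loss (and hence every gradient) is scaled.
tiny = 1e-8
assert np.float16(tiny) == 0.0
assert np.float16(tiny * LOSS_SCALE) != 0.0

# Master weights: an update of 1e-4 is below the fp16 spacing near 1.0
# (2**-10, about 9.8e-4), so repeated updates vanish in pure fp16
# but accumulate correctly in the fp32 master copy.
master = np.array([1.0], dtype=np.float32)
naive16 = np.float16(1.0)
for _ in range(100):
    g16 = np.float16(-1e-4 * LOSS_SCALE)   # stand-in for a backprop gradient
    master, w16 = sgd_step(master, g16)
    naive16 = np.float16(naive16 + 1e-4)   # pure-fp16 update loses the increment
print(float(master[0]), float(naive16))
```

After 100 steps the fp32 master copy is close to 1.01 while the pure-fp16 weight is still exactly 1.0, which is why the paper keeps the accumulator in single precision and only rounds to half precision for the forward and backward passes.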
Mixed Precision Training
by Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu
https://arxiv.org/pdf/1710.03740v1.pdf