Learning in the Machine: Random Backpropagation and the Deep Learning Channel

Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural networks, in which the transposes of the forward matrices are replaced by fixed random matrices in the calculation of the weight updates. It is remarkable both for its effectiveness, in spite of using random matrices to communicate error information, and because it completely removes the taxing requirement of maintaining symmetric weights in a physical neural system. To better understand random backpropagation, we first connect it to the notions of local learning and learning channels. Through this connection, we derive several alternatives to RBP, including skipped RBP (SRBP), adaptive RBP (ARBP), sparse RBP, and their combinations (e.g. ASRBP), and analyze their computational complexity. We then study their behavior through simulations using the MNIST and CIFAR-10 benchmark datasets. These simulations show that most of these variants work robustly, almost as well as backpropagation, and that multiplication by the derivatives of the activation functions is important. As a follow-up, we also study the low end of the number of bits required to communicate error information over the learning channel. We then provide partial intuitive explanations for some of the remarkable properties of RBP and its variations. Finally, we prove several mathematical results, including the convergence to fixed points of linear chains of arbitrary length, the convergence to fixed points of linear autoencoders with decorrelated data, the long-term existence of solutions for linear systems with a single hidden layer and convergence in special cases, and the convergence to fixed points of nonlinear chains, when the derivative of the activation functions is included.
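To illustrate the core idea described in the abstract, here is a minimal NumPy sketch of RBP on a one-hidden-layer network: the backward pass sends the output error through a fixed random matrix `B` instead of the transpose of the forward matrix `W2`, while still multiplying by the derivative of the activation function. The toy regression task, the dimensions, and all variable names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of random backpropagation (RBP), assuming a toy
# regression task; names and sizes here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sum(x) from 4-dimensional inputs.
X = rng.standard_normal((64, 4))
T = X.sum(axis=1, keepdims=True)

# Forward weights of a one-hidden-layer network.
W1 = rng.standard_normal((4, 8)) * 0.1
W2 = rng.standard_normal((8, 1)) * 0.1

# RBP: a fixed random matrix B replaces W2.T in the backward pass.
B = rng.standard_normal((1, 8)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
losses = []
for _ in range(500):
    # Forward pass.
    H = sigmoid(X @ W1)   # hidden activations
    Y = H @ W2            # linear output
    E = Y - T             # output error
    losses.append(float((E ** 2).mean()))

    # Backward pass: the fixed random B, not W2.T, carries the error
    # to the hidden layer; the activation derivative H*(1-H) is kept,
    # which the paper's simulations indicate is important.
    D1 = (E @ B) * H * (1.0 - H)

    # Weight updates (averaged over the batch).
    W2 -= lr * H.T @ E / len(X)
    W1 -= lr * X.T @ D1 / len(X)

print("initial loss:", losses[0], "final loss:", losses[-1])
```

Standard backpropagation would compute `D1 = (E @ W2.T) * H * (1.0 - H)`; the only change RBP makes is the substitution of the fixed random `B` for `W2.T`, which removes the need for symmetric forward and backward weights.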
by Pierre Baldi, Peter Sadowski, Zhiqin Lu
https://arxiv.org/pdf/1612.02734v2.pdf