Machine Learning

Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients

  • arXiv

    While neuroevolution (evolving neural networks) has a successful track record across a variety of domains, from reinforcement learning to artificial life, it is rarely applied to large, deep neural networks. A central reason is that while random mutation generally works in low dimensions, a random perturbation of thousands or millions of weights is likely to break existing functionality, providing no learning signal even if some individual weight changes were beneficial.

    This paper proposes a solution by introducing a family of safe mutation (SM) operators that aim, within the mutation operator itself, to find a degree of change that does not alter network behavior too much but still facilitates exploration. Importantly, these SM operators require no additional interactions with the environment. The most effective SM variant scales the degree of mutation of each individual weight according to the sensitivity of the network's outputs to that weight, which requires computing the gradient of the outputs with respect to the weights (instead of the gradient of the error, as in conventional deep learning).

    This safe mutation through gradients (SM-G) operator dramatically increases the ability of a simple genetic-algorithm-based neuroevolution method to find solutions in high-dimensional domains that require deep and/or recurrent neural networks (which tend to be particularly brittle to mutation), including domains that require processing raw pixels. By improving our ability to evolve deep neural networks, this new, safer approach to mutation expands the scope of domains amenable to neuroevolution.
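
    The gradient-scaled variant (SM-G) can be sketched in a few lines of PyTorch. This is a minimal illustration only, under assumed names (policy, inputs, mut_strength); it loosely follows the summed-gradient idea described above and is not the authors' reference implementation.

    ```python
    import torch

    # Hypothetical sketch of safe mutation through gradients (SM-G).
    # `policy` is any torch.nn.Module and `inputs` is a batch of saved
    # observations; both names are assumptions for illustration.
    def safe_mutate(policy, inputs, mut_strength=0.1, eps=1e-8):
        params = [p for p in policy.parameters() if p.requires_grad]

        # Gradient of the network's outputs with respect to the weights.
        # Note that no error/loss is involved, unlike conventional backprop.
        outputs = policy(inputs)
        grads = torch.autograd.grad(outputs.sum(), params)

        with torch.no_grad():
            for p, g in zip(params, grads):
                # Per-weight sensitivity: how strongly the outputs react
                # to a change in this weight on the batch of inputs.
                sensitivity = g.abs() + eps
                # Scale a random perturbation inversely to that sensitivity,
                # so behavior-critical weights move little while insensitive
                # weights are still free to explore.
                p += torch.randn_like(p) * mut_strength / sensitivity
    ```

    Estimating sensitivity needs only a batch of previously seen inputs, so, as the abstract notes, no additional environment interactions are required; the mutation itself remains a cheap, gradient-informed perturbation.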

    Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients
    by Joel Lehman, Jay Chen, Jeff Clune, Kenneth O. Stanley
    https://arxiv.org/pdf/1712.06563v1.pdf
