ES Is More Than Just a Traditional Finite-Difference Approximator

An evolution strategy (ES) variant recently attracted significant attention due to its surprisingly good performance at optimizing neural networks in challenging deep reinforcement learning domains. It searches directly in the parameter space of neural networks by generating perturbations to the current set of parameters, checking their performance, and moving in the direction of higher reward. The resemblance of this algorithm to a traditional finite-difference approximation of the reward gradient in parameter space naturally leads to the assumption that it is just that. However, this assumption is incorrect. The aim of this paper is to definitively demonstrate this point empirically. ES is a gradient approximator, but optimizes for a different gradient than just reward (especially when the magnitude of candidate perturbations is high). Instead, it optimizes for the average reward of the entire population, often also promoting parameters that are robust to perturbation. This difference can channel ES into significantly different areas of the search space than gradient descent in parameter space, and also consequently to networks with significantly different properties. This unique robustness-seeking property, and its consequences for optimization, are demonstrated in several domains. They include humanoid locomotion, where networks from policy gradient-based reinforcement learning are far less robust to parameter perturbation than ES-based policies that solve the same task. While the implications of such robustness and robustness-seeking remain open to further study, the main contribution of this work is to highlight that such differences indeed exist and deserve attention.
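The update the abstract describes (perturb the parameters, evaluate each perturbation, move toward higher reward) can be sketched in a few lines. This is a minimal illustrative sketch of an OpenAI-style ES step, not the paper's exact implementation; the function names and the hyperparameter values (`sigma`, `alpha`, `pop_size`) are placeholders chosen for the example:

```python
import random

def es_step(theta, fitness, sigma=0.1, alpha=0.01, pop_size=50):
    """One ES update. Note the objective: the reward-weighted average of
    perturbations estimates the gradient of the *expected* fitness under
    Gaussian noise of scale sigma -- not the fitness gradient at theta
    itself. Larger sigma means the smoothed, robustness-favoring objective
    diverges more from a plain finite-difference reward gradient."""
    n = len(theta)
    grad = [0.0] * n
    for _ in range(pop_size):
        # Sample a Gaussian perturbation and evaluate the perturbed policy.
        eps = [random.gauss(0.0, 1.0) for _ in range(n)]
        r = fitness([t + sigma * e for t, e in zip(theta, eps)])
        for i in range(n):
            grad[i] += r * eps[i]
    # Gradient ascent on the expected (noise-smoothed) fitness.
    return [t + alpha * g / (pop_size * sigma) for t, g in zip(theta, grad)]
```

For intuition, running this step repeatedly on a toy objective such as `fitness(x) = -sum(v*v for v in x)` drives `theta` toward regions where the whole perturbed population scores well, which is exactly the smoothing effect the paper contrasts with parameter-space gradient descent.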
by Joel Lehman, Jay Chen, Jeff Clune, Kenneth O. Stanley
https://arxiv.org/pdf/1712.06568v1.pdf