The Uncertainty Bellman Equation and Exploration

We consider the exploration/exploitation problem in reinforcement learning. For exploitation, it is well known that the Bellman equation connects the value at any timestep to the expected value at subsequent timesteps. In this paper we consider a similar uncertainty Bellman equation (UBE), which connects the uncertainty at any timestep to the expected uncertainties at subsequent timesteps, thereby extending the potential exploratory benefit of a policy beyond individual timesteps. We prove that the unique fixed point of the UBE yields an upper bound on the variance of the estimated value of any fixed policy. This bound can be much tighter than traditional count-based bonuses that compound standard deviation rather than variance. Importantly, and unlike several existing approaches to optimism, this method scales naturally to large systems with complex generalization. Substituting our UBE-exploration strategy for $\epsilon$-greedy improves DQN performance on 51 out of 57 games in the Atari suite.
by Brendan O’Donoghue, Ian Osband, Remi Munos, Volodymyr Mnih
https://arxiv.org/pdf/1709.05380v1.pdf
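The core idea in the abstract, that uncertainty at one timestep is connected to expected uncertainties at later timesteps via a Bellman-style recursion, can be illustrated with a minimal tabular sketch. This is a hypothetical toy setup, not the paper's full method: `nu` stands in for a local one-step uncertainty term, `P_pi` for the state transition matrix under a fixed policy, and the operator is iterated to its fixed point, with variance (not standard deviation) compounding through the `gamma ** 2` factor.

```python
import numpy as np

def solve_ube(P_pi, nu, gamma=0.99, tol=1e-8, max_iter=10_000):
    """Iterate an uncertainty-Bellman-style operator to its fixed point.

    P_pi : (S, S) state-to-state transition matrix under the policy
    nu   : (S,)   local (one-step) uncertainty at each state
    Returns the fixed point u of  u = nu + gamma^2 * P_pi @ u,
    an upper-bound-style uncertainty estimate per state.
    """
    u = np.zeros_like(nu, dtype=float)
    for _ in range(max_iter):
        # Variance compounds with gamma^2, so the operator is a
        # contraction whenever gamma < 1 and the iteration converges.
        u_next = nu + (gamma ** 2) * P_pi @ u
        if np.max(np.abs(u_next - u)) < tol:
            return u_next
        u = u_next
    return u

# Toy 2-state chain: state 0 is uncertain and usually loops back to
# itself; state 1 is absorbing with no local uncertainty.
P_pi = np.array([[0.9, 0.1],
                 [0.0, 1.0]])
nu = np.array([1.0, 0.0])
u = solve_ube(P_pi, nu, gamma=0.9)
```

The fixed point shows uncertainty accumulating at state 0 well beyond its one-step value of 1.0, which is the "beyond individual timesteps" propagation the abstract describes; the absorbing state, having no local uncertainty, stays at zero.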