Machine Learning

The Uncertainty Bellman Equation and Exploration

    We consider the exploration/exploitation problem in reinforcement learning. For exploitation, it is well known that the Bellman equation connects the value at any time-step to the expected value at subsequent time-steps. In this paper we consider a similar uncertainty Bellman equation (UBE), which connects the uncertainty at any time-step to the expected uncertainties at subsequent time-steps, thereby extending the potential exploratory benefit of a policy beyond individual time-steps. We prove that the unique fixed point of the UBE yields an upper bound on the variance of the estimated value of any fixed policy. This bound can be much tighter than traditional count-based bonuses that compound standard deviation rather than variance. Importantly, and unlike several existing approaches to optimism, this method scales naturally to large systems with complex generalization. Substituting our UBE-exploration strategy for $\epsilon$-greedy improves DQN performance on 51 out of 57 games in the Atari suite.
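
    As a rough sketch of the recursion the abstract describes (the notation here is illustrative and assumes a discounted setting; it is not taken from the paper): the ordinary Bellman equation propagates mean values across time-steps, and the UBE propagates a local uncertainty term $\nu$ in the same way,

    $$Q^\pi(s,a) = \mu(s,a) + \gamma \sum_{s',a'} P(s' \mid s,a)\,\pi(a' \mid s')\, Q^\pi(s',a'),$$
    $$u^\pi(s,a) = \nu(s,a) + \gamma^2 \sum_{s',a'} P(s' \mid s,a)\,\pi(a' \mid s')\, u^\pi(s',a'),$$

    where $\mu$ is the mean reward. The unique fixed point $u^\pi$ of the second recursion is the quantity the abstract says upper-bounds the variance of the estimated value of the policy.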

    The Uncertainty Bellman Equation and Exploration
    by Brendan O’Donoghue, Ian Osband, Remi Munos, Volodymyr Mnih
    https://arxiv.org/pdf/1709.05380v1.pdf
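
    The abstract mentions substituting a UBE-based exploration strategy for $\epsilon$-greedy in DQN. Below is a minimal sketch of what such an action-selection rule could look like, assuming per-action Q-value and uncertainty estimates are already available; the function name, the zeta_scale parameter, and the use of NumPy are assumptions for illustration, not details taken from the paper.

    import numpy as np

    def select_action(q_values, uncertainties, zeta_scale=1.0, rng=None):
        """Thompson-style exploration sketch: perturb each Q-value by a
        standard-normal sample times the estimated standard deviation
        (the square root of the propagated uncertainty), then act greedily."""
        rng = rng or np.random.default_rng()
        zeta = rng.standard_normal(len(q_values))   # one noise sample per action
        scores = q_values + zeta_scale * zeta * np.sqrt(uncertainties)
        return int(np.argmax(scores))

    # Example: three actions with equal Q-values but different uncertainty;
    # the more uncertain actions get selected more often across calls.
    print(select_action(np.array([0.5, 0.5, 0.5]), np.array([0.01, 0.1, 1.0])))

    Acting greedily on the perturbed values biases exploration toward actions whose value estimates are least certain, which is the role the uniform noise of $\epsilon$-greedy plays in standard DQN.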
