Machine Learning

Unifying PAC and Regret: Uniform PAC Bounds for Episodic Reinforcement Learning

Statistical performance bounds for reinforcement learning (RL) algorithms can be critical for high-stakes applications like healthcare. This paper introduces a new framework for theoretically measuring the performance of such algorithms called Uniform-PAC, which is a strengthening of the classical Probably Approximately Correct (PAC) framework. In contrast to the PAC framework, the uniform version may be used to derive high-probability regret guarantees, and so forms a bridge between the two setups that has been missing in the literature. We demonstrate the benefits of the new framework for finite-state episodic MDPs with a new algorithm that is Uniform-PAC and simultaneously achieves optimal regret and PAC guarantees, except for a factor of the horizon.
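
For context, here is a sketch of the Uniform-PAC condition the abstract refers to (a paraphrase, with notation introduced here rather than taken from the abstract): an algorithm is Uniform-PAC for confidence level \delta if, with probability at least 1 - \delta, for all accuracy levels \varepsilon > 0 simultaneously,

    N_\varepsilon := \sum_{k=1}^{\infty} \mathbf{1}\{ V^*(s_{k,1}) - V^{\pi_k}(s_{k,1}) > \varepsilon \} \le F(\varepsilon, \delta),

where \pi_k is the policy played in episode k, s_{k,1} is its initial state, and F is polynomial in 1/\varepsilon and \log(1/\delta) (and in the size parameters of the MDP). Because a single high-probability event covers every \varepsilon at once, fixing one \varepsilon recovers a standard PAC guarantee, while summing the per-episode suboptimality gaps against the bound yields a high-probability regret bound; this is the bridge between the two setups mentioned above.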

Unifying PAC and Regret: Uniform PAC Bounds for Episodic Reinforcement Learning
by Christoph Dann, Tor Lattimore, Emma Brunskill
https://arxiv.org/pdf/1703.07710v2.pdf
