Machine Learning

Two-Timescale Stochastic Approximation Convergence Rates with Applications to Reinforcement Learning




Two-timescale Stochastic Approximation (SA) algorithms are widely used in Reinforcement Learning (RL). Their iterates consist of two coupled components updated with distinct stepsizes. In this work, we provide a recipe for analyzing two-timescale SA and use it to develop the first convergence rate result for such algorithms. From this result, we extract key insights on stepsize selection. As an application, we obtain convergence rates for two-timescale RL algorithms such as GTD(0), GTD2, and TDC.
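The central object here is a pair of coupled iterates on distinct stepsizes: the fast component equilibrates quickly relative to the slow one, which sees the fast component as effectively converged. A minimal runnable sketch on a toy linear root-finding problem of our own (not taken from the paper; all drift functions and constants below are illustrative assumptions) shows the scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy coupled root-finding problem (illustrative, not from the paper):
#   fast variable w should track its equilibrium w*(theta) = 0.5 * theta,
#   slow variable theta should solve theta = b - w*(theta).
# Fixed point: theta* = 2/3, w* = 1/3 (with b = 1).
theta, w = 0.0, 0.0
b = 1.0
for n in range(1, 50001):
    alpha = 1.0 / n        # slow stepsize for theta
    beta = 1.0 / n ** 0.6  # fast stepsize for w; beta_n / alpha_n -> infinity
    # fast update: drive w toward 0.5 * theta under zero-mean noise
    w += beta * (0.5 * theta - w + rng.normal(scale=0.1))
    # slow update: drive theta toward b - w under zero-mean noise
    theta += alpha * (b - w - theta + rng.normal(scale=0.1))
```

The stepsize schedules satisfy the standard two-timescale conditions (both are square-summable-style diminishing sequences, and their ratio diverges), which is exactly the regime whose convergence rate the paper analyzes.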

by Gal Dalal, Balazs Szorenyi, Gugan Thoppe, Shie Mannor
https://arxiv.org/pdf/1703.05376v3.pdf
