Machine Learning

Recurrent Network-based Deterministic Policy Gradient for Solving Bipedal Walking Challenge on Rugged Terrains



  • arXiv

    This paper presents a learning algorithm based on a recurrent-network Deterministic Policy Gradient. Long Short-Term Memory (LSTM) networks are used to handle the Partially Observable Markov Decision Process (POMDP) setting. The novelty lies in four improvements to the LSTM networks: multi-step temporal-difference updates, removal of backpropagation through time on the actor, initialisation of the hidden state by scanning past trajectories, and injection of external experiences learned by other agents. These methods help the reinforcement learning agent infer desirable actions by referring to the trajectories of both past observations and actions. The proposed algorithm was applied to the Bipedal-Walker challenge in the OpenAI virtual environment, where only partial state information is available. Validation on extremely rugged terrain demonstrates the effectiveness of the proposed algorithm, which achieved a new record for the highest reward in the challenge. The autonomous behaviors generated by the agent adapt to a variety of obstacles, as shown in the simulation results.
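    The hidden-state initialisation idea from the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: a hand-rolled NumPy LSTM cell whose state is first warmed up by scanning a past trajectory of (observation, action) pairs, then used to emit a deterministic bounded action for the current partial observation. All dimensions, weights, and the zero-action placeholder for the live step are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM, HID_DIM = 4, 2, 8
IN_DIM = OBS_DIM + ACT_DIM          # actor input: observation + previous action

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialised LSTM parameters (stacked gates: input, forget, cell, output).
W = rng.standard_normal((4 * HID_DIM, IN_DIM + HID_DIM)) * 0.1
b = np.zeros(4 * HID_DIM)
W_act = rng.standard_normal((ACT_DIM, HID_DIM)) * 0.1   # hidden state -> action

def lstm_step(x, h, c):
    """One LSTM cell update for input x given recurrent state (h, c)."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def act(trajectory, obs):
    """Scan a past (obs, action) trajectory to initialise the hidden state,
    then output a deterministic action for the current observation."""
    h, c = np.zeros(HID_DIM), np.zeros(HID_DIM)
    for past_obs, past_act in trajectory:       # hidden-state initialisation
        h, c = lstm_step(np.concatenate([past_obs, past_act]), h, c)
    # Live step: no previous action is available yet, so a zero placeholder
    # keeps the input size fixed (an assumption of this sketch).
    h, c = lstm_step(np.concatenate([obs, np.zeros(ACT_DIM)]), h, c)
    return np.tanh(W_act @ h)                   # bounded action, DDPG-style

# Usage: a short fake history, then an action for the newest observation.
history = [(rng.standard_normal(OBS_DIM), rng.standard_normal(ACT_DIM))
           for _ in range(5)]
action = act(history, rng.standard_normal(OBS_DIM))
print(action.shape)
```

    The point of the warm-up loop is that under partial observability a single observation is not a Markov state, so the policy conditions on the recurrent summary of the whole past trajectory rather than on the latest frame alone.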

    Recurrent Network-based Deterministic Policy Gradient for Solving Bipedal Walking Challenge on Rugged Terrains
    by Doo Re Song, Chuanyu Yang, Christopher McGreavy, Zhibin Li
    https://arxiv.org/pdf/1710.02896v1.pdf
