Machine Learning

Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation

Enabling robots to autonomously navigate complex environments is essential for real-world deployment. Prior methods approach this problem by having the robot maintain an internal map of the world, and then using a localization and planning method to navigate through that map. However, these approaches often rely on a variety of assumptions, are computationally intensive, and do not learn from failures. In contrast, learning-based methods improve as the robot acts in the environment, but are difficult to deploy in the real world due to their high sample complexity. To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based. We then instantiate this graph to form a navigation model that learns from raw images and is sample efficient. Our simulated car experiments explore the design decisions of our navigation model and show that our approach outperforms single-step and $N$-step double Q-learning. We also evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training. Videos of the experiments and code can be found at github.com/gkahn13/gcg.
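To make the graph concrete, here is a minimal numpy sketch, not the authors' implementation, of how such a graph could combine H per-step model predictions with a terminal value estimate into a single score for an action sequence. The names graph_value, y_hat, and b_hat are illustrative assumptions; in the paper the predictions come from a learned neural network conditioned on raw images and candidate actions.

import numpy as np

# A hypothetical instance of the generalized computation graph's output:
#   J(s_t, a_t..a_{t+H-1}) = sum_{h<H} gamma^h * y_hat[h] + gamma^H * b_hat
# where y_hat holds H per-step predictions (e.g. rewards) and b_hat is a
# terminal value estimate. H = 1 with a bootstrapped b_hat recovers
# model-free Q-learning; larger H with b_hat = 0 is fully model-based;
# intermediate horizons interpolate between the two extremes.
def graph_value(y_hat, b_hat, gamma):
    horizon = len(y_hat)
    discounts = gamma ** np.arange(horizon)  # [1, gamma, gamma^2, ...]
    return float(discounts @ np.asarray(y_hat) + gamma ** horizon * b_hat)

# Illustrative usage with placeholder numbers for a horizon of H = 4.
y_hat = [0.1, 0.0, -0.2, 0.3]  # hypothetical per-step predictions
b_hat = 0.5                    # hypothetical terminal value estimate
print(graph_value(y_hat, b_hat, gamma=0.99))

Under this reading, the single-step and $N$-step double Q-learning baselines mentioned above correspond to instantiations where y_hat is filled with observed rewards and b_hat is a bootstrapped target-network value, which is what lets one graph subsume both families of methods.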

    Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation
    by Gregory Kahn, Adam Villaflor, Bosen Ding, Pieter Abbeel, Sergey Levine
    https://arxiv.org/pdf/1709.10489v1.pdf
