Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation

Enabling robots to autonomously navigate complex environments is essential for real-world deployment. Prior methods approach this problem by having the robot maintain an internal map of the world, and then use a localization and planning method to navigate through the internal map. However, these approaches often include a variety of assumptions, are computationally intensive, and do not learn from failures. In contrast, learning-based methods improve as the robot acts in the environment, but are difficult to deploy in the real world due to their high sample complexity. To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based. We then instantiate this graph to form a navigation model that learns from raw images and is sample efficient. Our simulated car experiments explore the design decisions of our navigation model, and show our approach outperforms single-step and $N$-step double Q-learning. We also evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training. Videos of the experiments and code can be found at github.com/gkahn13/gcg
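A rough intuition for the model-free/model-based interpolation the abstract describes: an $N$-step Q-learning target sums $N$ observed rewards and bootstraps the tail with a learned value estimate. With $N=1$ this is standard model-free one-step bootstrapping; as $N$ grows, the target relies more on actual environment outcomes. The sketch below is a hypothetical illustration of that $N$-step target, not the authors' code (their implementation is at github.com/gkahn13/gcg); all names and the `gamma` value are assumptions.

```python
# Hypothetical sketch of an N-step Q-learning target, one of the
# instantiations the paper's generalized computation graph subsumes.
# N=1 recovers standard one-step bootstrapping; larger N weights
# observed rewards more heavily.

def n_step_target(rewards, bootstrap_value, gamma=0.99):
    """Discounted sum of N observed rewards plus a bootstrapped tail.

    rewards: the N rewards observed after taking the action.
    bootstrap_value: e.g. max over a' of Q(s_{t+N}, a') from a
        target network (here just a number, for illustration).
    """
    target = bootstrap_value
    for r in reversed(rewards):
        target = r + gamma * target
    return target

# One-step (N=1) vs. three-step (N=3) targets from the same rollout:
one_step = n_step_target([1.0], 5.0)
three_step = n_step_target([1.0, 0.0, 1.0], 5.0)
```

Here `one_step` evaluates to `1.0 + 0.99 * 5.0 = 5.95`, while `three_step` folds in two additional observed rewards before bootstrapping.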
by Gregory Kahn, Adam Villaflor, Bosen Ding, Pieter Abbeel, Sergey Levine
https://arxiv.org/pdf/1709.10489v1.pdf