Machine Learning

Autonomous Quadrotor Landing using Deep Reinforcement Learning



  • arXiv

    Landing an unmanned aerial vehicle (UAV) on a ground marker is an open problem despite the efforts of the research community. Previous attempts have mostly focused on the analysis of hand-crafted geometric features and on the use of external sensors to let the vehicle approach the landing pad. In this article, we propose a method based on deep reinforcement learning that requires only low-resolution images from a down-looking camera to identify the position of the marker and land the quadrotor on it. The proposed approach is based on a hierarchy of Deep Q-Networks (DQNs) used as a high-level control policy for navigation toward the marker. We implemented different technical solutions, such as a combination of vanilla and double DQNs trained with a form of prioritized buffer replay that separates experiences into multiple containers. Learning is achieved without any human supervision, with the agent receiving only high-level feedback. The results show that the quadrotor can autonomously land in a large variety of simulated environments and under significant noise. In some conditions the DQN outperformed human pilots tested in the same environment.
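
    As a rough illustration of two of the ingredients mentioned in the abstract, not the authors' actual implementation, the sketch below shows a replay buffer that separates experiences into multiple containers and the double-DQN target computation. The class and function names (PartitionedReplayBuffer, double_dqn_targets) and the reward-sign partitioning rule are assumptions made for this example only.

```python
import random
from collections import deque

import numpy as np


class PartitionedReplayBuffer:
    """Replay buffer that keeps transitions in separate containers
    (positive-, negative- and zero-reward here) and samples an equal
    share from each non-empty container."""

    def __init__(self, capacity_per_partition=10_000):
        self.partitions = {
            "positive": deque(maxlen=capacity_per_partition),
            "negative": deque(maxlen=capacity_per_partition),
            "neutral": deque(maxlen=capacity_per_partition),
        }

    def add(self, state, action, reward, next_state, done):
        # Route the transition into a container based on its reward sign
        # (an assumed partitioning criterion for this sketch).
        key = "positive" if reward > 0 else "negative" if reward < 0 else "neutral"
        self.partitions[key].append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Draw roughly batch_size / (number of non-empty containers) from each.
        non_empty = [list(p) for p in self.partitions.values() if p]
        if not non_empty:
            return []
        per_partition = max(1, batch_size // len(non_empty))
        batch = []
        for partition in non_empty:
            batch.extend(random.sample(partition, min(per_partition, len(partition))))
        return batch


def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double-DQN bootstrap targets: the online network selects the greedy
    action for the next state, the target network evaluates it."""
    greedy_actions = np.argmax(q_online_next, axis=1)
    evaluated = q_target_next[np.arange(len(greedy_actions)), greedy_actions]
    return rewards + gamma * (1.0 - dones) * evaluated
```

    Sampling an equal share from each container is only one plausible reading of "separates experiences into multiple containers"; the paper itself should be consulted for the exact partitioning and sampling scheme.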

    Autonomous Quadrotor Landing using Deep Reinforcement Learning
    by Riccardo Polvara, Massimiliano Patacchiola, Sanjay Sharma, Jian Wan, Andrew Manning, Robert Sutton, Angelo Cangelosi
    https://arxiv.org/pdf/1709.03339v1.pdf
