Machine Learning

Deep Abstract Q-Networks

  • arXiv

    Deep Abstract Q-Networks

    We examine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction (Dietterich 2000; Sutton, Precup, and Singh 1999) have been shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards, and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep Q-Networks (Mnih et al. 2015) on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.

    Deep Abstract Q-Networks
    by Melrose Roderick, Christopher Grimm, Stefanie Tellex
    https://arxiv.org/pdf/1710.00459v1.pdf
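
    To make the idea of an expert-provided state abstraction concrete, here is a minimal, hypothetical sketch (not the authors' code): a hand-written abstraction function buckets raw states into coarse cells, a tabular model records the transitions and rewards observed between cells, and value iteration over that small abstract graph yields high-level values. In the paper's setting, the low-level control between abstract states would be handled by deep Q-learning; the names phi, AbstractModel, and plan, the grid abstraction, and the toy random-walk domain are all illustrative assumptions.

    import random
    from collections import defaultdict

    # Hypothetical expert-provided abstraction: map a raw state to a coarse
    # abstract state. In Montezuma's Revenge this might be (room id, coarse
    # agent cell, inventory); here a toy (x, y) position is bucketed into cells.
    def phi(obs, cell_size=4):
        x, y = obs
        return (x // cell_size, y // cell_size)

    class AbstractModel:
        """Tabular model over abstract states: counts the transitions and
        rewards observed when the agent crosses between abstract cells."""
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))  # s -> s' -> n
            self.reward_sums = defaultdict(float)                # (s, s') -> total reward

        def update(self, s_abs, s_abs_next, reward):
            if s_abs_next != s_abs:
                self.counts[s_abs][s_abs_next] += 1
                self.reward_sums[(s_abs, s_abs_next)] += reward

        def neighbors(self, s_abs):
            return list(self.counts[s_abs])

    def plan(model, states, gamma=0.99, sweeps=50):
        """Value iteration over the learned abstract graph. A low-level learner
        (e.g. a DQN per abstract transition) would then be trained to reach the
        neighboring cell with the highest backed-up value."""
        V = defaultdict(float)
        for _ in range(sweeps):
            for s in states:
                V[s] = max(
                    (model.reward_sums[(s, s2)] / model.counts[s][s2] + gamma * V[s2]
                     for s2 in model.neighbors(s)),
                    default=0.0,
                )
        return V

    # Toy usage: a random walk on a 16x16 grid with a reward in the far corner.
    # Reward is accumulated while inside a cell and attributed to the next
    # cross-cell transition, as in a semi-MDP view of abstract transitions.
    if __name__ == "__main__":
        model = AbstractModel()
        pos, pending_r = (0, 0), 0.0
        for _ in range(20000):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            nxt = (min(max(pos[0] + dx, 0), 15), min(max(pos[1] + dy, 0), 15))
            pending_r += 1.0 if nxt == (15, 15) else 0.0
            if phi(nxt) != phi(pos):
                model.update(phi(pos), phi(nxt), pending_r)
                pending_r = 0.0
            pos = nxt
        values = plan(model, list(model.counts))
        print(sorted(values.items(), key=lambda kv: -kv[1])[:5])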
