Machine Learning

Crossmodal Attentive Skill Learner



This paper presents the Crossmodal Attentive Skill Learner (CASL), integrated with the recently introduced Asynchronous Advantage Option-Critic (A2OC) architecture [Harb et al., 2017] to enable hierarchical reinforcement learning across multiple sensory inputs. We provide concrete examples where the approach not only improves performance in a single task, but also accelerates transfer to new tasks. We demonstrate that the attention mechanism anticipates and identifies useful latent features, while filtering irrelevant sensor modalities during execution. We modify the Arcade Learning Environment [Bellemare et al., 2013] to support audio queries, and conduct evaluations of crossmodal learning in the Atari 2600 game Amidar. Finally, building on the recent work of Babaeizadeh et al. [2017], we open-source a fast hybrid CPU-GPU implementation of CASL.
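The abstract describes an attention mechanism that weights useful sensor modalities and filters out irrelevant ones during execution. As a rough illustration of that idea (not the authors' implementation; the softmax-based fusion, the function name, and all parameters below are assumptions for the sketch), per-modality features can be combined via attention weights that sum to one:

```python
import numpy as np

def crossmodal_attention(features, logits):
    """Illustrative attention-weighted fusion of per-modality features.

    features: dict mapping modality name -> 1-D feature vector (equal lengths).
    logits:   dict mapping modality name -> scalar attention score (in CASL
              these would come from a learned network; here they are supplied
              directly for illustration).

    Returns the fused feature vector and the per-modality attention weights.
    """
    names = sorted(features)
    scores = np.array([logits[n] for n in names])
    # Softmax over modalities: weights are non-negative and sum to 1,
    # so a modality with a low score is effectively filtered out.
    exp = np.exp(scores - scores.max())
    alpha = exp / exp.sum()
    fused = sum(a * features[n] for a, n in zip(alpha, names))
    return fused, dict(zip(names, alpha))

# Example: strong visual signal, uninformative audio.
feats = {"video": np.ones(4), "audio": np.zeros(4)}
fused, weights = crossmodal_attention(feats, {"video": 2.0, "audio": 0.0})
```

A modality whose score is much lower than the others receives near-zero weight, which matches the paper's description of irrelevant sensors being filtered during execution.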

Crossmodal Attentive Skill Learner
by Shayegan Omidshafiei, Dong-Ki Kim, Jason Pazis, Jonathan P. How
