Machine Learning

Deep Episodic Memory: Encoding, Recalling, and Predicting Episodic Experiences for Robot Action Execution

    We present a novel deep neural network architecture for representing robot experiences in an episodic-like memory that facilitates encoding, recalling, and predicting action experiences. Our proposed unsupervised deep episodic memory model 1) encodes observed actions in a latent vector space and, based on this latent encoding, 2) infers action categories, 3) reconstructs original frames, and 4) predicts future frames. We evaluate the proposed model on two different large-scale action datasets. Results show that conceptually similar actions are mapped into the same region of the latent vector space, i.e., the conceptual similarity of videos is reflected by the proximity of their latent vector representations. Based on this contribution, we introduce an action matching and retrieval mechanism and evaluate its performance and generalization capability on a real humanoid robot in an action execution scenario.

    Deep Episodic Memory: Encoding, Recalling, and Predicting Episodic Experiences for Robot Action Execution
    by Jonas Rothfuss, Fabio Ferreira, Eren Erdal Aksoy, You Zhou, Tamim Asfour
    https://arxiv.org/pdf/1801.04134v1.pdf
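
    For a concrete picture of what the abstract describes, below is a minimal sketch in PyTorch: a per-frame CNN plus an LSTM encodes an episode into a single latent vector, small heads on that vector stand in for the paper's action classification, frame reconstruction, and future prediction, and similar episodes are retrieved by cosine similarity in the latent space. This is an assumption-laden illustration, not the authors' implementation; the names (EpisodicMemoryModel, retrieve_most_similar), layer sizes, and 64x64 frame resolution are invented here, and the paper uses full frame decoders rather than the linear stand-ins shown.

    # Minimal sketch (not the authors' code): encode an episode to one latent vector,
    # attach classification / reconstruction / prediction heads, retrieve by proximity.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EpisodicMemoryModel(nn.Module):
        def __init__(self, latent_dim=256, num_actions=10, frame_channels=3):
            super().__init__()
            # Per-frame encoder: a small CNN maps each 64x64 frame to a feature vector.
            self.frame_encoder = nn.Sequential(
                nn.Conv2d(frame_channels, 32, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),              # 32 -> 16
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),             # 16 -> 8
                nn.Flatten(),
                nn.Linear(128 * 8 * 8, latent_dim),
            )
            # Temporal aggregation: an LSTM folds the per-frame features into one episode vector.
            self.temporal = nn.LSTM(latent_dim, latent_dim, batch_first=True)
            # Heads driven by the latent episode vector (linear stand-ins for frame decoders).
            self.classifier = nn.Linear(latent_dim, num_actions)   # infer action category
            self.reconstruct = nn.Linear(latent_dim, latent_dim)   # reconstruct frame features
            self.predict = nn.Linear(latent_dim, latent_dim)       # predict future frame features

        def encode(self, frames):
            # frames: (batch, time, channels, 64, 64)
            b, t = frames.shape[:2]
            feats = self.frame_encoder(frames.flatten(0, 1)).view(b, t, -1)
            _, (h, _) = self.temporal(feats)
            return h[-1]  # (batch, latent_dim) episode embedding

        def forward(self, frames):
            z = self.encode(frames)
            return {
                "latent": z,
                "action_logits": self.classifier(z),
                "reconstruction": self.reconstruct(z),
                "future_prediction": self.predict(z),
            }

    def retrieve_most_similar(query_latent, memory_latents):
        # Nearest-neighbour retrieval by cosine similarity in the latent space.
        sims = F.cosine_similarity(query_latent.unsqueeze(0), memory_latents, dim=1)
        return torch.argmax(sims).item(), sims

    if __name__ == "__main__":
        model = EpisodicMemoryModel()
        episode = torch.randn(1, 8, 3, 64, 64)   # one 8-frame query episode
        memory = torch.randn(5, 8, 3, 64, 64)    # five stored episodes
        with torch.no_grad():
            out = model(episode)
            stored = model.encode(memory)
            idx, _ = retrieve_most_similar(out["latent"][0], stored)
        print("action logits shape:", tuple(out["action_logits"].shape))
        print("closest stored episode:", idx)

    The retrieval step mirrors the action matching mechanism the abstract mentions: the stored episode whose latent vector lies closest to the query's is returned as the best-matching past experience.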
