Machine Learning

Meta Inverse Reinforcement Learning via Maximum Reward Sharing for Human Motion Analysis



  • arXiv


    This work addresses the inverse reinforcement learning (IRL) problem in which only a small number of demonstrations is available from a demonstrator for each high-dimensional task, too few to estimate an accurate reward function. Observing that each demonstrator has an inherent reward for each state, and that task-specific behaviors depend mainly on a small number of key states, we propose a meta IRL algorithm that first models the reward function for each task as a distribution conditioned on a baseline reward function that is shared across all tasks and depends only on the demonstrator, and then finds the most likely reward function in that distribution to explain the task-specific behaviors. We test the method on path-planning tasks with limited demonstrations in a simulated environment and show that the accuracy of the learned reward function improves significantly. We also apply the method to analyze the motion of a patient undergoing rehabilitation.
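    The abstract only sketches the algorithm, but the reward-sharing idea maps naturally onto MaxEnt IRL with a Gaussian prior centred on a shared baseline reward. The sketch below is a hypothetical illustration of that reading, not the authors' implementation: the tabular MDP, the alternating update in meta_learn, and all hyperparameters (sigma2, lr, horizon) are assumptions.

```python
# Illustrative sketch only: a tabular MaxEnt-IRL reading of "maximum reward
# sharing". theta0 is the baseline reward shared by all tasks; each task's
# reward gets a Gaussian prior N(theta0, sigma2 * I), and we take the MAP
# estimate given that task's (limited) demonstrations.
import numpy as np

def soft_value_iteration(reward, P, gamma=0.95, iters=100):
    """Soft (MaxEnt) value iteration. P[a, s, s'] are transition probs."""
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        Q = reward[None, :] + gamma * P @ V        # shape (A, S)
        Qmax = Q.max(axis=0)
        V = Qmax + np.log(np.exp(Q - Qmax[None, :]).sum(axis=0))
    return np.exp(Q - V[None, :])                  # stochastic policy (A, S)

def expected_visitation(policy, P, start, horizon=50):
    """Expected state-visitation frequencies; start is a float distribution."""
    d, total = start.copy(), np.zeros_like(start)
    for _ in range(horizon):
        total += d
        d = np.einsum('as,ast->t', policy * d[None, :], P)
    return total

def map_reward(demo_visitation, P, start, theta0, sigma2=1.0, lr=0.1, steps=200):
    """MAP reward for one task under the shared-baseline Gaussian prior."""
    theta = theta0.copy()
    for _ in range(steps):
        policy = soft_value_iteration(theta, P)
        model_visitation = expected_visitation(policy, P, start)
        # MaxEnt IRL gradient (demo counts minus model counts) plus the
        # prior term pulling theta back toward the shared baseline theta0.
        grad = demo_visitation - model_visitation - (theta - theta0) / sigma2
        theta += lr * grad
    return theta

def meta_learn(task_visitations, P, start, rounds=5):
    """Alternate per-task MAP estimates with re-estimating the baseline."""
    theta0 = np.zeros(P.shape[1])
    for _ in range(rounds):
        thetas = [map_reward(v, P, start, theta0) for v in task_visitations]
        theta0 = np.mean(thetas, axis=0)
    return theta0, thetas
```

    With few demonstrations per task the prior term dominates and keeps each task's reward close to the shared one, which is the mechanism the abstract credits for the accuracy gain; a full treatment would presumably also fit sigma2 and the demonstrator-specific baseline rather than fixing them as above.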

    Meta Inverse Reinforcement Learning via Maximum Reward Sharing for Human Motion Analysis
    by Kun Li, Joel W. Burdick
    https://arxiv.org/pdf/1710.03592v1.pdf
