Machine Learning

Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger





    Recent developments have established the vulnerability of deep Reinforcement Learning (RL) to policy manipulation attacks via adversarial perturbations. In this paper, we investigate the robustness and resilience of deep RL to training-time and test-time attacks. Through experimental results, we demonstrate that under non-contiguous training-time attacks, Deep Q-Network (DQN) agents can recover and adapt to the adversarial conditions by reactively adjusting the policy. Our results also show that policies learned under adversarial perturbations are more robust to test-time attacks. Furthermore, we compare the performance of $\epsilon$-greedy and parameter-space noise exploration methods in terms of robustness and resilience against adversarial perturbations.
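
    To make the attack setting concrete, below is a minimal sketch (not the authors' code) of a test-time policy-manipulation attack: an FGSM-style adversarial perturbation of the observation fed to a DQN agent, crafted so the agent's greedy action becomes less attractive. The QNetwork architecture, observation shape, and eps value are illustrative assumptions, not details from the paper.

    # Minimal sketch of an FGSM-style policy-manipulation attack on a DQN
    # observation. All shapes and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class QNetwork(nn.Module):
        """Toy Q-network: maps a flat observation to one Q-value per action."""
        def __init__(self, obs_dim: int, n_actions: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )

        def forward(self, obs):
            return self.net(obs)

    def fgsm_policy_attack(q_net, obs, eps=0.05):
        """Perturb the observation so the agent's current greedy action looks
        worse. Note: eps here is the FGSM step size, unrelated to the epsilon
        of epsilon-greedy exploration discussed in the abstract."""
        obs = obs.clone().detach().requires_grad_(True)
        q_values = q_net(obs)
        greedy_action = q_values.argmax(dim=-1)
        # Treat the greedy action as the "label" and ascend the loss, pushing
        # the perturbed observation away from that action.
        loss = F.cross_entropy(q_values, greedy_action)
        loss.backward()
        adv_obs = obs + eps * obs.grad.sign()
        return adv_obs.detach()

    if __name__ == "__main__":
        q_net = QNetwork(obs_dim=8, n_actions=4)
        obs = torch.randn(1, 8)
        adv_obs = fgsm_policy_attack(q_net, obs)
        print("clean action:", q_net(obs).argmax(dim=-1).item())
        print("adversarial action:", q_net(adv_obs).argmax(dim=-1).item())

    In the paper's training-time setting, perturbations like this would be applied to the states the agent observes while learning; the abstract's finding is that DQN can recover from such non-contiguous interference and that the resulting policies hold up better under test-time attacks.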

    Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger
    by Vahid Behzadan, Arslan Munir
    https://arxiv.org/pdf/1712.09344v1.pdf
