Federated Control with Hierarchical Multi-Agent Deep Reinforcement Learning

We present a framework combining hierarchical and multi-agent deep reinforcement learning approaches to solve coordination problems among a multitude of agents using a semi-decentralized model. The framework extends the multi-agent learning setup by introducing a meta-controller that guides the communication between agent pairs, enabling agents to focus on communicating with only one other agent at any step. This hierarchical decomposition of the task allows for efficient exploration to learn policies that identify globally optimal solutions even as the number of collaborating agents increases. We show promising initial experimental results on a simulated distributed scheduling problem.
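The abstract's core idea — a meta-controller that picks one pair of agents to communicate per step, rather than all agents broadcasting to everyone — can be sketched in a toy form. The code below is a simplified illustration under stated assumptions, not the paper's method: the `MetaController` here pairs agents uniformly at random (a stand-in for the learned pairing policy), and pairwise communication is modeled as averaging scalar estimates (a stand-in for the learned message passing). All class and function names are hypothetical.

```python
import random


class Agent:
    """Toy agent holding a local estimate it can share with one partner per step."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.local_estimate = random.random()

    def communicate(self, other):
        # Pairwise exchange: both agents average their estimates, a crude
        # placeholder for whatever the learned communication policy exchanges.
        merged = (self.local_estimate + other.local_estimate) / 2
        self.local_estimate = merged
        other.local_estimate = merged


class MetaController:
    """Selects one agent pair per step.

    Here the choice is uniform random; in the paper's framework this role is
    played by a learned meta-controller policy.
    """

    def select_pair(self, agents):
        return random.sample(agents, 2)


def run(num_agents=8, steps=50, seed=0):
    """Run the pairing loop and return (initial_spread, final_spread)."""
    random.seed(seed)
    agents = [Agent(i) for i in range(num_agents)]
    meta = MetaController()

    def spread():
        estimates = [a.local_estimate for a in agents]
        return max(estimates) - min(estimates)

    initial_spread = spread()
    for _ in range(steps):
        a, b = meta.select_pair(agents)
        a.communicate(b)
    return initial_spread, spread()


if __name__ == "__main__":
    init_s, final_s = run()
    print(f"spread: {init_s:.4f} -> {final_s:.4f}")
```

Because each pairwise averaging step can only shrink (never grow) the gap between the largest and smallest estimates, the agents' estimates contract toward consensus even though only one pair talks per step — a loose analogue of the claim that restricting each agent to a single communication partner per step still permits global coordination.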
by Saurabh Kumar, Pararth Shah, Dilek Hakkani-Tur, Larry Heck
https://arxiv.org/pdf/1712.08266v1.pdf