Non-Stationary Policy Learning for Multi-Timescale Multi-Agent Reinforcement Learning
In multi-timescale multi-agent reinforcement learning (MARL), agents interact
across different timescales. In general, policies for time-dependent behaviors,
such as those induced by multiple timescales, are non-stationary. Learning
non-stationary policies is challenging and typically requires sophisticated or
inefficient algorithms. Motivated by the prevalence of this control problem in
real-world complex systems, we introduce a simple framework for learning
non-stationary policies for multi-timescale MARL. Our approach uses available
information about agent timescales to define a periodic time encoding.
Specifically, we demonstrate theoretically that the effects of non-stationarity
introduced by multiple timescales can be learned by a periodic multi-agent
policy. To learn such policies, we propose a policy gradient algorithm that
parameterizes the actor and critic with phase-functioned neural networks, which
provide an inductive bias for periodicity. The framework's ability to
effectively learn multi-timescale policies is validated on a gridworld and a
building energy management environment.

Comment: Accepted at IEEE CDC'23. 7 pages, 6 figures
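To make the mechanism concrete, below is a minimal sketch of a phase-functioned layer of the kind the abstract refers to: a set of control weight matrices blended by a cubic Catmull-Rom spline of a periodic phase, as in the original phase-functioned neural networks for character animation. PyTorch, the layer sizes, the number of control points, and the period T are all illustrative assumptions here, not the authors' exact parameterization.

```python
import torch
import torch.nn as nn

class PhaseFunctionedLinear(nn.Module):
    """Linear layer whose weights are a smooth, periodic function of a phase.

    Keeps n_control control weight sets and blends them with Catmull-Rom
    cubic interpolation, giving the layer an inductive bias for periodicity.
    """

    def __init__(self, in_dim, out_dim, n_control=4):
        super().__init__()
        self.control_w = nn.Parameter(torch.randn(n_control, out_dim, in_dim) * 0.1)
        self.control_b = nn.Parameter(torch.zeros(n_control, out_dim))
        self.n_control = n_control

    def forward(self, x, phase):
        # phase: scalar in [0, 1); locate the four surrounding control points.
        p = phase * self.n_control
        k1 = int(p) % self.n_control
        k0 = (k1 - 1) % self.n_control
        k2 = (k1 + 1) % self.n_control
        k3 = (k1 + 2) % self.n_control
        t = p - int(p)  # local interpolation parameter in [0, 1)

        def catmull_rom(c0, c1, c2, c3, t):
            # Standard Catmull-Rom cubic spline between c1 and c2.
            return (c1
                    + 0.5 * t * (c2 - c0)
                    + t * t * (c0 - 2.5 * c1 + 2 * c2 - 0.5 * c3)
                    + t ** 3 * (-0.5 * c0 + 1.5 * c1 - 1.5 * c2 + 0.5 * c3))

        w = catmull_rom(self.control_w[k0], self.control_w[k1],
                        self.control_w[k2], self.control_w[k3], t)
        b = catmull_rom(self.control_b[k0], self.control_b[k1],
                        self.control_b[k2], self.control_b[k3], t)
        return x @ w.T + b


# Usage: the phase comes from the known agent timescales, e.g. a period T.
T = 10                                   # hypothetical slow-agent period
layer = PhaseFunctionedLinear(8, 4)
obs = torch.randn(1, 8)
for t in range(3):
    phase = (t % T) / T                  # periodic time encoding
    print(layer(obs, phase).shape)
```

Because the control points wrap around, the layer's output is exactly periodic in the phase, which is what lets a stationary set of parameters represent the time-dependent (non-stationary) behavior the abstract describes.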
The problem with DDPG: understanding failures in deterministic environments with sparse rewards
In environments with continuous state and action spaces, state-of-the-art
actor-critic reinforcement learning algorithms can solve very complex problems,
yet they can also fail in environments that seem trivial, and the reasons for
such failures are still poorly understood. In this paper, we contribute a formal
explanation of these failures in the particular case of sparse reward and
deterministic environments. First, using a very elementary control problem, we
illustrate that the learning process can get stuck in a fixed point
corresponding to a poor solution. Then, generalizing from the studied example,
we provide a detailed analysis of the underlying mechanisms, resulting in a
new understanding of one of the convergence regimes of these algorithms. The
resulting perspective casts a new light on already existing solutions to the
issues we have highlighted, and suggests other potential approaches.

Comment: 19 pages, submitted to ICLR 2020
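As a concrete illustration of the failure mode described above, here is a minimal numpy sketch in the spirit of the paper's elementary control problem; the one-step environment, the linear critic, and every constant below are invented for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D deterministic task with a sparse reward: one step per
# episode, the agent at state s picks an action a and is rewarded only if
# s + a >= 0.5. A good policy (a = 1) solves it from most starting states.
def reward(s, a):
    return float(s + a >= 0.5)

# A timid deterministic actor pi(s) = 0 with small exploration noise in
# practice never reaches the goal, so every return it observes is 0.
pi = lambda s: 0.0
w = rng.normal(size=3) * 0.1          # linear critic Q(s, a) = w . [s, a, 1]
lr = 0.05

for _ in range(5000):
    s = rng.uniform(-1.0, 0.0)
    a = pi(s) + rng.normal(scale=0.1)             # undirected exploration
    r = reward(s, a)                              # 0 on every visited sample
    feats = np.array([s, a, 1.0])
    w += lr * (r - w @ feats) * feats             # regress critic on returns

# A DDPG-style actor is updated along dQ/da at the current policy. Since
# only zero returns are observed, the critic shrinks toward Q = 0 and that
# gradient vanishes: the poor policy becomes a fixed point of the coupled
# actor-critic updates, the mechanism the paper analyses.
print("dQ/da at pi:", w[1])
```

The point of the sketch is the coupling: the actor only moves where the critic's action-gradient is nonzero, and a critic fit to uniformly zero returns provides no such gradient, so undirected exploration alone cannot escape the fixed point.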