Provably Learning Nash Policies in Constrained Markov Potential Games
Multi-agent reinforcement learning (MARL) addresses sequential
decision-making problems with multiple agents, where each agent optimizes its
own objective. In many real-world instances, the agents may not only want to
optimize their objectives, but also ensure safe behavior. For example, in
traffic routing, each car (agent) aims to reach its destination quickly
(objective) while avoiding collisions (safety). Constrained Markov Games (CMGs)
are a natural formalism for safe MARL problems, though generally intractable.
In this work, we introduce and study Constrained Markov Potential Games
(CMPGs), an important class of CMGs. We first show that a Nash policy for CMPGs
can be found via constrained optimization. A tempting approach is to solve it
with Lagrangian-based primal-dual methods. However, as we show, in contrast to
the single-agent setting, CMPGs do not satisfy strong duality, rendering such
approaches inapplicable and potentially unsafe. To solve the CMPG problem,
we propose our algorithm Coordinate-Ascent for CMPGs (CA-CMPG), which provably
converges to a Nash policy in tabular, finite-horizon CMPGs. Furthermore, we
provide the first sample complexity bounds for learning Nash policies in
unknown CMPGs, which, under additional assumptions, guarantee safe
exploration.
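To make the coordinate-ascent idea concrete, here is a minimal Python sketch of the update pattern it describes (not the paper's CA-CMPG implementation): agents take turns computing a constrained best response while all other policies stay frozen, and the loop stops once no agent can improve the potential. The `solve_constrained_mdp` oracle and the data structures are hypothetical placeholders.

```python
def ca_cmpg(agents, joint_policy, solve_constrained_mdp, n_rounds=100, tol=1e-6):
    """Coordinate ascent for a constrained Markov potential game (sketch).

    agents: iterable of agent indices; joint_policy: dict agent -> policy.
    solve_constrained_mdp(i, joint_policy): hypothetical oracle returning
    agent i's constrained best response and its improvement in the potential,
    with all other agents' policies held fixed.
    """
    for _ in range(n_rounds):
        best_improvement = 0.0
        for i in agents:
            new_policy, improvement = solve_constrained_mdp(i, joint_policy)
            if improvement > tol:
                joint_policy[i] = new_policy     # accept the improving move
                best_improvement = max(best_improvement, improvement)
        if best_improvement <= tol:              # no agent can improve:
            break                                # approximate Nash policy
    return joint_policy
```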
Reinforcement Learning Based on Real-Time Iteration NMPC
Reinforcement Learning (RL) has demonstrated a striking ability to learn optimal
policies from data without any prior knowledge of the process. The main
drawback of RL is that it is typically very difficult to guarantee stability
and safety. On the other hand, Nonlinear Model Predictive Control (NMPC) is an
advanced model-based control technique which does guarantee safety and
stability, but only yields optimality for the nominal model. Therefore, it has
been recently proposed to use NMPC as a function approximator within RL. While
the ability of this approach to yield good performance has been demonstrated,
the main drawback hindering its applicability is related to the computational
burden of NMPC, which has to be solved to full convergence. In practice,
however, computationally efficient algorithms such as the Real-Time Iteration
(RTI) scheme are deployed in order to return an approximate NMPC solution in
very short time. In this paper we bridge this gap by extending the existing
theoretical framework to also cover RL based on RTI NMPC. We demonstrate the
effectiveness of this new RL approach on a nontrivial example: a challenging
nonlinear system, subject to stochastic perturbations, controlled with the
objective of optimizing an economic cost.
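For intuition on the gap being bridged, the following sketch shows the RTI pattern as opposed to solving the NMPC problem to convergence: a single Gauss-Newton SQP step per sampling instant, split into a preparation phase and a feedback phase, with the shifted solution warm-starting the next step. The helpers `linearize_qp`, `solve_qp`, and `shift` are hypothetical, not part of any particular solver's API.

```python
def rti_step(w, x_meas, nu, linearize_qp, solve_qp, shift):
    """One Real-Time Iteration of NMPC (sketch, hypothetical helpers).

    Instead of iterating SQP to full convergence, RTI takes one
    Gauss-Newton step per sampling instant:
      1. preparation: build the QP at the current guess w (before the
         state measurement is available),
      2. feedback: solve that single QP with the measurement x_meas embedded,
      3. warm start: shift the trajectory one stage for the next call.
    w is assumed to be a flat trajectory vector (e.g. a numpy array) whose
    first nu entries are the first control.
    """
    qp = linearize_qp(w)           # preparation phase: sensitivities at w
    dw = solve_qp(qp, x_meas)      # feedback phase: one QP solve only
    w = w + dw                     # full step, no line search, no iteration
    u0 = w[:nu]                    # first control of the trajectory
    return u0, shift(w)            # apply u0 to the plant; shifted w warm-starts
```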
Batch Policy Learning under Constraints
When learning policies for real-world domains, two important questions arise:
(i) how to efficiently use pre-collected off-policy, non-optimal behavior data;
and (ii) how to mediate among different competing objectives and constraints.
We thus study the problem of batch policy learning under multiple constraints,
and offer a systematic solution. We first propose a flexible meta-algorithm
that admits any batch reinforcement learning and online learning procedure as
subroutines. We then present a specific algorithmic instantiation and provide
performance guarantees for the main objective and all constraints. To certify
constraint satisfaction, we propose a new and simple method for off-policy
policy evaluation (OPE) and derive PAC-style bounds. Our algorithm achieves
strong empirical results in different domains, including in a challenging
problem of simulated car driving subject to multiple constraints such as lane
keeping and smooth driving. We also show experimentally that our OPE method
outperforms other popular OPE techniques on a standalone basis, especially in a
high-dimensional setting.
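A minimal sketch of this meta-algorithm's game-theoretic loop, for a single constraint: the dual player does projected gradient ascent on the Lagrange multiplier, and the policy player best-responds with a batch RL subroutine on the penalized reward. The `fqi` and `ope_cost` oracles and all names are hypothetical stand-ins for whatever batch RL and OPE procedures are plugged in.

```python
import numpy as np

def constrained_batch_learning(fqi, ope_cost, budget, lam_max, T=50, lr=0.1):
    """Meta-algorithm sketch: Lagrangian game between a policy player and a
    dual player, for one constraint 'expected cost <= budget'.

    fqi(lam)     -- hypothetical batch RL oracle returning a best-response
                    policy for the penalized reward r - lam * c
    ope_cost(pi) -- hypothetical off-policy estimate of pi's expected cost
    """
    lam, policies = 0.0, []
    for _ in range(T):
        pi = fqi(lam)                          # policy player: best response
        violation = ope_cost(pi) - budget      # estimated constraint violation
        lam = float(np.clip(lam + lr * violation, 0.0, lam_max))  # dual ascent
        policies.append(pi)
    return policies  # the guarantees hold for a mixture over the iterates
```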
Last-Iterate Convergent Policy Gradient Primal-Dual Methods for Constrained MDPs
We study the problem of computing an optimal policy of an infinite-horizon
discounted constrained Markov decision process (constrained MDP). Despite the
popularity of Lagrangian-based policy search methods in practice, the
oscillation of policy iterates in these methods has not been fully understood,
giving rise to issues such as constraint violation and sensitivity to
hyperparameters. To fill this gap, we employ the Lagrangian method to cast a
constrained MDP into a constrained saddle-point problem in which max/min
players correspond to primal/dual variables, respectively, and develop two
single-time-scale policy-based primal-dual algorithms with non-asymptotic
convergence of their policy iterates to an optimal constrained policy.
Specifically, we first propose a regularized policy gradient primal-dual
(RPG-PD) method that updates the policy using an entropy-regularized policy
gradient, and the dual via a quadratic-regularized gradient ascent,
simultaneously. We prove that the policy primal-dual iterates of RPG-PD
converge to a regularized saddle point with a sublinear rate, while the policy
iterates converge sublinearly to an optimal constrained policy. We further
instantiate RPG-PD in large state or action spaces by including function
approximation in the policy parametrization, and establish similar sublinear
last-iterate policy convergence. Second, we propose an optimistic policy
gradient primal-dual (OPG-PD) method that employs the optimistic gradient
method to update primal/dual variables, simultaneously. We prove that the
policy primal-dual iterates of OPG-PD converge to a saddle point that contains
an optimal constrained policy, with a linear rate. To the best of our
knowledge, this is the first non-asymptotic last-iterate policy convergence
result for single-time-scale algorithms in constrained MDPs.
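The RPG-PD update pattern can be sketched as follows for a tabular softmax policy; `grad_return`, `grad_cost`, and `cost_value` are hypothetical oracles for exact gradients and values, and the entropy term is folded in as a simple shrinkage on the logits purely for illustration.

```python
import numpy as np

def rpg_pd_sketch(grad_return, grad_cost, cost_value, budget, theta, lam,
                  tau=0.01, eta=0.05, T=1000, lam_max=100.0):
    """Single-time-scale regularized primal-dual sketch for the constraint
    'expected utility >= budget'.

    Primal: entropy-regularized policy gradient ascent on the Lagrangian
            (the -tau*theta shrinkage stands in for the entropy gradient).
    Dual:   quadratic-regularized projected gradient descent on lambda.
    Both variables are updated simultaneously with the same step size eta.
    """
    for _ in range(T):
        g = grad_return(theta) + lam * grad_cost(theta) - tau * theta
        theta = theta + eta * g                             # primal ascent
        d = cost_value(theta) - budget - tau * lam          # dL/dlam, regularized
        lam = float(np.clip(lam - eta * d, 0.0, lam_max))   # projected descent
    return theta, lam
```

OPG-PD, by contrast, would replace these regularized steps with optimistic (extra-gradient-style) simultaneous updates of both variables, which the abstract credits with the linear rate.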
Provably Efficient Generalized Lagrangian Policy Optimization for Safe Multi-Agent Reinforcement Learning
We examine online safe multi-agent reinforcement learning using constrained
Markov games in which agents compete by maximizing their expected total rewards
under a constraint on expected total utilities. Our focus is confined to an
episodic two-player zero-sum constrained Markov game with independent
transition functions that are unknown to agents, adversarial reward functions,
and stochastic utility functions. For such a Markov game, we employ an approach
based on the occupancy measure to formulate it as an online constrained
saddle-point problem with an explicit constraint. We extend the Lagrange
multiplier method in constrained optimization to handle the constraint by
creating a generalized Lagrangian with minimax decision primal variables and a
dual variable. Next, we develop an upper confidence reinforcement learning
algorithm to solve this Lagrangian problem while balancing exploration and
exploitation. Our algorithm updates the minimax decision primal variables via
online mirror descent and the dual variable via a projected gradient step, and
we prove that it enjoys a sublinear rate in both regret and constraint
violation after playing $T$ episodes of the game. Here, $H$ is the horizon of
each episode, and $(|\mathcal{S}_1|, |\mathcal{A}_1|)$ and $(|\mathcal{S}_2|,
|\mathcal{A}_2|)$ are the state/action space sizes of the min-player and the
max-player, respectively. To
the best of our knowledge, we provide the first provably efficient online safe
reinforcement learning algorithm in constrained Markov games.
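In spirit, the two updates look like the following simplified steps; the occupancy measure is treated as a plain probability vector on a simplex (the actual algorithm works over the occupancy-measure polytope with flow constraints), and all names are hypothetical.

```python
import numpy as np

def mirror_descent_step(q, grad, eta):
    """Entropy-mirror-descent (multiplicative weights) step on an occupancy
    measure q, renormalized onto the simplex. A simplification: the real
    update projects onto the occupancy-measure polytope, not the simplex."""
    q = q * np.exp(-eta * grad)
    return q / q.sum()

def dual_step(lam, utility_estimate, threshold, eta, lam_max):
    """Projected gradient step on the dual variable for the constraint
    'expected total utility >= threshold'."""
    return float(np.clip(lam - eta * (utility_estimate - threshold),
                         0.0, lam_max))
```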