
    Linear Quadratic Games: An Overview

    In this paper we review some basic results on linear quadratic differential games. We consider both the cooperative and the non-cooperative case. For the non-cooperative game we consider the open-loop and (linear) feedback information structures. Furthermore, the effect of adding uncertainty is considered. The overview is based on [9]; readers interested in detailed proofs and additional results are referred to that book.
    Keywords: linear-quadratic games; Nash equilibrium; affine systems; solvability conditions; Riccati equations
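    For linear feedback Nash equilibria, the solvability conditions mentioned here take the form of coupled algebraic Riccati equations. The following is a minimal numerical sketch of that idea for a two-player game, not taken from the paper: the matrices are illustrative placeholders, and the Gauss-Seidel-style fixed-point iteration is one common heuristic for solving the coupled equations.

```python
# Sketch: feedback Nash equilibrium of a two-player LQ differential game
# via iteration on the coupled algebraic Riccati equations (illustrative
# data and scheme; convergence is not guaranteed in general).
import numpy as np
from scipy.linalg import solve_continuous_are

A  = np.array([[0.0, 1.0], [0.0, -0.5]])
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[0.0], [0.5]])
Q1, Q2 = np.eye(2), 2 * np.eye(2)
R1, R2 = np.eye(1), np.eye(1)
S1 = B1 @ np.linalg.inv(R1) @ B1.T
S2 = B2 @ np.linalg.inv(R2) @ B2.T

K1 = np.zeros((2, 2))
K2 = np.zeros((2, 2))
for _ in range(200):
    # Freezing the other player's feedback in the drift term turns each
    # player's coupled Riccati equation into a standard ARE.
    K1_new = solve_continuous_are(A - S2 @ K2, B1, Q1, R1)
    K2_new = solve_continuous_are(A - S1 @ K1_new, B2, Q2, R2)
    if max(np.abs(K1_new - K1).max(), np.abs(K2_new - K2).max()) < 1e-10:
        K1, K2 = K1_new, K2_new
        break
    K1, K2 = K1_new, K2_new

# Equilibrium strategies u_i = -F_i x and closed-loop dynamics matrix.
F1 = np.linalg.inv(R1) @ B1.T @ K1
F2 = np.linalg.inv(R2) @ B2.T @ K2
print("closed-loop eigenvalues:", np.linalg.eigvals(A - S1 @ K1 - S2 @ K2))
```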

    N-person differential games. Part 1: Duality-finite element methods

    The duality approach, which is motivated by computational needs, is developed by introducing N + 1 Lagrange multipliers. For N-person linear quadratic games, the primal min-max problem is shown to be equivalent to the dual min-max problem.
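    As a schematic illustration of the primal-dual equivalence claimed here (a finite-dimensional convex-concave analogue, not the paper's functional-analytic formulation), consider a quadratic Lagrangian:

```latex
% Finite-dimensional convex-concave analogue (illustration only).
\[
  L(u,v) \;=\; \tfrac12 u^\top Q u \;-\; \tfrac12 v^\top R v
  \;+\; u^\top N v \;+\; b^\top u \;+\; c^\top v ,
  \qquad Q \succ 0,\ R \succ 0 .
\]
% Convexity in u and concavity in v give strong duality:
\[
  \min_u \max_v L(u,v) \;=\; \max_v \min_u L(u,v),
\]
% with the saddle point (u*, v*) solving the linear system
\[
  Q u^\star + N v^\star + b = 0, \qquad
  N^\top u^\star - R v^\star + c = 0 .
\]
```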

    Fiscal Policy Interaction in the EMU

    Keywords: fiscal policy design; EMU; linear quadratic games

    A simple framework for investigating the properties of policy games

    The paper extensively studies the static model of non-cooperative linear quadratic games, in which a set of agents choose their instruments strategically, each minimizing a linear quadratic criterion. We first derive necessary and sufficient conditions for the existence of a Nash equilibrium, as well as conditions under which multiple equilibria arise. Furthermore, we study general conditions for policy neutrality and for Pareto efficiency of the equilibrium by introducing a new concept of decisiveness.
    Keywords: conflict of interest; Nash equilibrium existence; multiplicity; policy invariance; controllability; Pareto efficiency
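    In such static LQ games, existence and multiplicity of equilibria hinge on a single linear-algebra condition: stacking each agent's first-order condition yields one linear system, and a unique Nash equilibrium exists iff the stacked matrix is nonsingular. The sketch below illustrates this structure; the target equation, matrices, and data are assumptions for illustration, not the paper's model.

```python
# Sketch of a static policy game: agent i sets instrument u_i, the common
# target vector is x = s + sum_j C_j u_j, and agent i minimizes
# (x - xbar_i)' Q_i (x - xbar_i).  Stacked first-order conditions form a
# linear system whose invertibility decides existence/uniqueness.
import numpy as np

s = np.array([1.0, -1.0])                       # autonomous part of targets
C = [np.array([[1.0], [0.2]]),                  # instrument multipliers, agent 0
     np.array([[0.3], [1.0]])]                  # agent 1
Q = [np.diag([2.0, 1.0]), np.diag([1.0, 3.0])]  # agents' target weights
xbar = [np.array([0.5, 0.0]), np.array([0.0, 0.5])]  # ideal target values

n = len(C)
# Block (i, j) of the stacked FOC matrix is C_i' Q_i C_j.
M = np.block([[C[i].T @ Q[i] @ C[j] for j in range(n)] for i in range(n)])
b = np.concatenate([C[i].T @ Q[i] @ (xbar[i] - s) for i in range(n)])

if abs(np.linalg.det(M)) > 1e-12:               # unique Nash equilibrium
    u = np.linalg.solve(M, b)
    print("Nash instruments:", u)
    print("equilibrium targets:", s + sum(C[i] @ u[i:i+1] for i in range(n)))
else:
    print("singular stacked matrix: no equilibrium, or a continuum of them")
```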

    Learning Zero-Sum Linear Quadratic Games with Improved Sample Complexity

    Zero-sum Linear Quadratic (LQ) games are fundamental in optimal control and can be used (i) as a dynamic game formulation for risk-sensitive or robust control, or (ii) as a benchmark setting for multi-agent reinforcement learning with two competing agents in continuous state-control spaces. In contrast to the well-studied single-agent linear quadratic regulator problem, zero-sum LQ games entail solving a challenging nonconvex-nonconcave min-max problem with an objective function that lacks coercivity. Recently, Zhang et al. discovered an implicit regularization property of natural policy gradient methods which is crucial for safety-critical control systems since it preserves the robustness of the controller during learning. Moreover, in the model-free setting where the knowledge of model parameters is not available, Zhang et al. proposed the first polynomial sample complexity algorithm to reach an $\epsilon$-neighborhood of the Nash equilibrium while maintaining the desirable implicit regularization property. In this work, we propose a simpler nested Zeroth-Order (ZO) algorithm improving sample complexity by several orders of magnitude. Our main result guarantees an $\widetilde{\mathcal{O}}(\epsilon^{-3})$ sample complexity under the same assumptions using a single-point ZO estimator. Furthermore, when the estimator is replaced by a two-point estimator, our method enjoys a better $\widetilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity. Our key improvements rely on a more sample-efficient nested algorithm design and finer control of the ZO natural gradient estimation error.
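    The improvement from $\widetilde{\mathcal{O}}(\epsilon^{-3})$ to $\widetilde{\mathcal{O}}(\epsilon^{-2})$ comes from swapping a single-point for a two-point zeroth-order estimator. The sketch below shows the generic two-point estimator class on a toy objective; it is an illustration of the estimator only, not the paper's full nested natural-gradient algorithm.

```python
# Generic two-point zeroth-order gradient estimator: estimates a gradient
# from function evaluations alone, with variance much lower than the
# single-point variant because the two evaluations share the same direction.
import numpy as np

def zo_two_point_grad(f, theta, r=1e-2, n_samples=32, rng=None):
    """Average of (d / (2r)) * (f(theta + r u) - f(theta - r u)) * u over
    directions u drawn uniformly on the unit sphere; an estimate of the
    gradient of a smoothed version of f."""
    rng = rng or np.random.default_rng(0)
    d = theta.size
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (d / (2 * r)) * (f(theta + r * u) - f(theta - r * u)) * u
    return g / n_samples

# Toy check on a quadratic, where the true gradient is 2 * theta.
f = lambda th: float(th @ th)
theta = np.array([1.0, -2.0, 0.5])
print(zo_two_point_grad(f, theta))   # approximately [2, -4, 1]
```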

    Moving Horizon Control in Dynamic Games

    We consider a continuous-time system influenced by different agents who adopt moving-horizon control. The well-known Nash equilibrium concept is used to define two solution concepts that fit the moving-horizon structure. One of them is analyzed in more detail for the class of linear quadratic games. The (dis)advantages of moving-horizon control are illustrated by means of a government debt stabilization model.
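    For concreteness, a moving-horizon controller repeatedly solves a finite-horizon problem and applies only the first move. The sketch below shows this loop in the single-controller, discrete-time case (an illustrative simplification with made-up matrices; in the game setting of the paper, each agent would re-solve its own finite-horizon problem in the same fashion).

```python
# Minimal moving-horizon (receding-horizon) LQ control loop.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

def finite_horizon_gain(T):
    """First-step feedback gain of the horizon-T LQ problem, obtained by
    the backward Riccati recursion with zero terminal weight."""
    P = np.zeros((2, 2))
    K = np.zeros((1, 2))
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

x = np.array([[1.0], [0.0]])
K = finite_horizon_gain(T=10)   # time-invariant data, so the re-solved
for t in range(50):             # gain is the same each step and is reused
    u = -K @ x                  # apply only the first move, then roll forward
    x = A @ x + B @ u
print("state after 50 steps:", x.ravel())
```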