
    Strategically-Timed Actions in Stochastic Differential Games

    Financial systems are rich in interactions amenable to description by stochastic control theory. Optimal stochastic control theory is an elegant mathematical framework in which a controller profitably alters the dynamics of a stochastic system by exercising costly control inputs. If the system includes more than one agent, the appropriate modelling framework is stochastic differential game theory — a multiplayer generalisation of stochastic control theory. There are numerous environments in which financial agents incur fixed minimal costs when adjusting their investment positions; trading environments with transaction costs and real options pricing are important examples. The presence of fixed minimal adjustment costs produces adjustment stickiness, as agents now enact their investment adjustments over a sequence of discrete points in time. Despite the fundamental relevance of adjustment stickiness within economic theory, in stochastic differential game theory the set of players’ modifications to the system dynamics is mainly restricted to a continuous class of controls. Under this assumption, players modify their positions through infinitesimally fine adjustments over the problem horizon, which renders such models unsuitable for systems with fixed minimal adjustment costs. To this end, we present a detailed study of strategic interactions with fixed minimal adjustment costs. We perform a comprehensive study of a new stochastic differential game of impulse control and stopping on a jump-diffusion process and conduct a detailed investigation of two-player impulse control stochastic differential games. We establish the existence of a value of the games and show that the value is a unique (viscosity) solution to a double obstacle problem, which is characterised in terms of a solution to a non-linear partial differential equation (PDE).
    The study is contextualised within two new models of investment that tackle a dynamic duopoly investment problem and an optimal liquidity control and lifetime ruin problem. It is then shown that each optimal investment strategy can be recovered from the equilibrium strategies of the corresponding stochastic differential game. Lastly, we introduce a dynamic principal-agent model with a self-interested agent that faces minimally bounded adjustment costs. For this setting, we show for the first time that the principal can sufficiently distort the agent’s preferences so that the agent finds it optimal to execute policies that maximise the principal’s payoff in the presence of fixed minimal costs.
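    The core phenomenon above — fixed minimal adjustment costs forcing interventions at discrete points rather than continuously — can be illustrated with a simulation. The following is a minimal Python sketch, not taken from the thesis: the jump-diffusion parameters, the threshold (band) policy, and the cost structure are all illustrative assumptions, and a simple band policy stands in for the equilibrium impulse strategies derived in the work.

    ```python
    import math
    import random

    def simulate_impulse_control(x0=0.0, target=0.0, band=2.0, fixed_cost=1.0,
                                 mu=0.05, sigma=0.3, jump_rate=0.5, jump_scale=0.5,
                                 T=10.0, dt=0.01, seed=42):
        """Simulate a jump-diffusion X_t under a threshold impulse policy:
        whenever |X_t - target| exceeds `band`, the controller pays `fixed_cost`
        plus a proportional cost and resets X to `target`.  All parameters are
        illustrative assumptions, not values from the source."""
        rng = random.Random(seed)
        x, t = x0, 0.0
        total_cost, interventions = 0.0, 0
        while t < T:
            # Euler step for the diffusion part: mu*dt + sigma*sqrt(dt)*N(0,1)
            x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            # Compound-Poisson jump: probability jump_rate*dt of a jump per step
            if rng.random() < jump_rate * dt:
                x += rng.gauss(0.0, jump_scale)
            # Impulse: intervene only when the deviation exceeds the band,
            # because every intervention incurs the fixed minimal cost
            if abs(x - target) > band:
                total_cost += fixed_cost + 0.1 * abs(x - target)
                x = target
                interventions += 1
            t += dt
        return interventions, total_cost

    n, cost = simulate_impulse_control()
    print(n, round(cost, 2))
    ```

    The fixed cost is what makes continuous control suboptimal here: intervening at every instant would accumulate unbounded fixed charges, so the controller waits and acts in discrete impulses — the adjustment stickiness the abstract describes.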

    On the complexity of computing Markov perfect equilibrium in general-sum stochastic games

    Similar to the role of Markov decision processes in reinforcement learning, Markov games (also called stochastic games) lay down the foundation for the study of multi-agent reinforcement learning and sequential agent interactions. We introduce approximate Markov perfect equilibrium as a solution to the computational problem of infinite-horizon finite-state stochastic games and prove its PPAD-completeness. This solution concept preserves the Markov perfect property and opens up the possibility of extending the success of multi-agent reinforcement learning algorithms on static two-player games to multi-agent dynamic games, expanding the reign of the PPAD-complete class.
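    To make the solution concept concrete, here is a minimal Python sketch of Markov perfect equilibrium in a toy two-player, two-state, general-sum stochastic game. The game itself is our own illustrative assumption: transitions are deliberately action-independent so that a pure stationary MPE exists and a brute-force check suffices. This is a verification of the equilibrium conditions on a tiny instance, not the paper's PPAD-hardness construction or an efficient algorithm.

    ```python
    import itertools

    # Toy 2-player, 2-state, 2-action general-sum Markov (stochastic) game.
    # Transitions are action-independent (an assumption for illustration), so a
    # pure stationary MPE exists: play a stage Nash equilibrium in each state.
    STATES = [0, 1]
    ACTIONS = [0, 1]
    GAMMA = 0.9

    # Stage rewards r[s][(a1, a2)] = (reward of player 1, reward of player 2).
    # Each state hosts a prisoner's-dilemma-style game (action 1 is dominant).
    REWARDS = {
        0: {(0, 0): (3, 3), (0, 1): (0, 4), (1, 0): (4, 0), (1, 1): (1, 1)},
        1: {(0, 0): (2, 2), (0, 1): (0, 3), (1, 0): (3, 0), (1, 1): (1, 1)},
    }
    # TRANS[s] = distribution over next states (independent of actions)
    TRANS = {0: [0.7, 0.3], 1: [0.4, 0.6]}

    def evaluate(policy):
        """Discounted values of a joint stationary policy (fixed-point iteration)."""
        V = {s: [0.0, 0.0] for s in STATES}
        for _ in range(500):
            V = {s: [REWARDS[s][policy[s]][i]
                     + GAMMA * sum(TRANS[s][t] * V[t][i] for t in STATES)
                     for i in (0, 1)]
                 for s in STATES}
        return V

    def is_mpe(policy, eps=1e-6):
        """One-shot deviation check in every state for both players.  Because
        transitions do not depend on actions, a deviation cannot change the
        continuation value, so this check is exact here."""
        V = evaluate(policy)
        for s in STATES:
            for i in (0, 1):
                for dev in ACTIONS:
                    joint = list(policy[s])
                    joint[i] = dev
                    q = (REWARDS[s][tuple(joint)][i]
                         + GAMMA * sum(TRANS[s][t] * V[t][i] for t in STATES))
                    if q > V[s][i] + eps:
                        return False
        return True

    # Enumerate all pure stationary joint policies: state -> (a1, a2)
    mpes = [dict(zip(STATES, prof))
            for prof in itertools.product(itertools.product(ACTIONS, ACTIONS),
                                          repeat=len(STATES))
            if is_mpe(dict(zip(STATES, prof)))]
    print(mpes)  # the dominant-action profile (1, 1) in each state
    ```

    Note the contrast with the complexity result: this exhaustive enumeration is exponential in the number of states, and for general transition structures pure MPE need not exist at all — which is exactly why the approximate MPE of the paper, and its PPAD-completeness, matter.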

    Social Contracts for Non-Cooperative Games
