
Solving Multi-agent MDPs Optimally with Conditional Return Graphs

Abstract

In cooperative multi-agent sequential decision making under uncertainty, agents must coordinate in order to find an optimal joint policy that maximises joint value. Typical solution algorithms exploit additive structure in the value function, but in the fully-observable multi-agent MDP setting (MMDP) such structure is not present. We propose a new optimal solver for so-called TI-MMDPs, where agents can only affect their local state, while their value may depend on the state of others. We decompose the returns into local returns per agent that we represent compactly in a conditional return graph (CRG). Using CRGs, the value of a joint policy as well as bounds on the value of partially specified joint policies can be efficiently computed. We propose CoRe, a novel branch-and-bound policy search algorithm building on CRGs. CoRe typically requires less runtime than the available alternatives and is able to find solutions to problems previously considered unsolvable.
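The abstract outlines a branch-and-bound search over joint policies in which CRG-derived bounds on partially specified policies are used for pruning. The following Python sketch illustrates only that general branch-and-bound pattern; the `evaluate` and `upper_bound` callbacks are hypothetical stand-ins for the paper's CRG-based value and bound computations, and the example is not the authors' CoRe implementation.

```python
# Illustrative sketch (not the paper's CoRe solver): branch-and-bound over
# partially specified joint policies. `evaluate` stands in for exact policy
# evaluation and `upper_bound` for an optimistic CRG-style bound on the best
# completion of a partial policy; both are assumptions for this example.

from itertools import product

def branch_and_bound(decision_points, actions, evaluate, upper_bound):
    """Assign one action per decision point, pruning partial assignments
    whose optimistic bound cannot beat the incumbent best value."""
    best_value, best_policy = float("-inf"), None
    stack = [()]  # partial policies as tuples of chosen actions
    while stack:
        partial = stack.pop()
        if len(partial) == len(decision_points):
            value = evaluate(partial)           # exact value of a full policy
            if value > best_value:
                best_value, best_policy = value, partial
            continue
        if upper_bound(partial) <= best_value:  # prune: bound cannot improve
            continue
        for a in actions:
            stack.append(partial + (a,))
    return best_policy, best_value

# Toy usage: two decision points, two actions, a made-up reward table.
if __name__ == "__main__":
    reward = {("a", "a"): 3.0, ("a", "b"): 1.0,
              ("b", "a"): 0.0, ("b", "b"): 2.0}
    evaluate = lambda pi: reward[pi]

    def upper_bound(partial):
        # Optimistic bound: best reward over all completions (brute force here;
        # in the paper this role is played by bounds computed from CRGs).
        rest = 2 - len(partial)
        return max(reward[partial + tail] for tail in product("ab", repeat=rest))

    print(branch_and_bound([0, 1], "ab", evaluate, upper_bound))
```

The key design point mirrored here is that tightening the bound on partial policies lets the search discard large parts of the joint-policy space without evaluating them, which is where the compact CRG representation pays off.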
