
    Mixed Integer Linear Programming for Exact Finite-Horizon Planning in Decentralized POMDPs

    We consider the problem of finding an n-agent joint policy for the optimal finite-horizon control of a decentralized POMDP (Dec-POMDP). This is a problem of very high complexity (NEXP-hard for n >= 2). In this paper, we propose a new mathematical programming approach for the problem. Our approach is based on two ideas: first, we represent each agent's policy in the sequence form rather than the tree form, thereby obtaining a very compact representation of the set of joint policies. Second, using this compact representation, we solve the problem as an instance of combinatorial optimization for which we formulate a mixed integer linear program (MILP). The optimal solution of the MILP directly yields an optimal joint policy for the Dec-POMDP. Computational experience shows that formulating and solving the MILP requires significantly less time than existing algorithms on benchmark Dec-POMDP problems. For example, the multi-agent tiger problem for horizon 4 is solved in 72 seconds with the MILP, whereas existing algorithms require several hours to solve it.
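    The core trick is that, once policies are written as (binary) selection variables, the product of the agents' choices can be replaced by continuous variables constrained linearly, so the joint value becomes a linear objective. The following is a minimal sketch of that linearization on a toy two-agent, horizon-1 problem with a single known state; the action set, rewards, and variable names are illustrative assumptions (not the paper's benchmarks or exact formulation), and it uses the PuLP modelling library with its default CBC solver.

```python
import pulp

actions = ["listen", "open-left", "open-right"]            # toy per-agent action set
reward = {(a1, a2): 1.0 if a1 == a2 == "listen" else -0.5  # toy joint reward table
          for a1 in actions for a2 in actions}

prob = pulp.LpProblem("toy_dec_pomdp_h1", pulp.LpMaximize)

# Binary policy variables: each agent commits to exactly one action (a horizon-1 policy).
x1 = {a: pulp.LpVariable(f"x1_{a}", cat="Binary") for a in actions}
x2 = {a: pulp.LpVariable(f"x2_{a}", cat="Binary") for a in actions}
prob += pulp.lpSum(x1.values()) == 1
prob += pulp.lpSum(x2.values()) == 1

# Continuous joint variables y[a1, a2] forced to equal x1[a1] * x2[a2] by
# linear constraints, so the joint value stays linear in the variables.
y = {(a1, a2): pulp.LpVariable(f"y_{a1}_{a2}", lowBound=0, upBound=1)
     for a1 in actions for a2 in actions}
for a1 in actions:
    for a2 in actions:
        prob += y[(a1, a2)] <= x1[a1]
        prob += y[(a1, a2)] <= x2[a2]
        prob += y[(a1, a2)] >= x1[a1] + x2[a2] - 1

# Objective: expected reward of the joint policy, linear in y.
prob += pulp.lpSum(reward[k] * y[k] for k in y)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i, x in ((1, x1), (2, x2)):
    chosen = [a for a, v in x.items() if v.value() > 0.5]
    print(f"agent {i} action:", chosen[0])
print("expected reward:", pulp.value(prob.objective))
```

    In the full sequence-form MILP, the binary variables range over each agent's action-observation sequences and additional linear constraints enforce policy consistency across horizons, but the linearization of the joint choice follows the same pattern.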

    Programmation dynamique à mémoire bornée avec distribution sur les croyances pour les Dec-POMDPs

    We propose a heuristic approach for computing an approximate policy for a Dec-POMDP. It is a point-based dynamic programming approach in the line of the PBDP [szer2006a], MBDP [seuken2007a], and IMBDP [seuken2007b] algorithms: it formulates the choice of the policies retained at each step of the construction as an optimization problem. The criterion of this problem relies on an estimate of the a priori probability distribution over the beliefs reachable at a given horizon: the goal is to maximize the expected cumulative reward over the considered horizon given this distribution. This expectation can be estimated by sampling beliefs while simulating a heuristic policy.
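    The two ingredients described above are (1) sampling reachable beliefs by simulating a heuristic policy, and (2) keeping, under a fixed memory bound, only the candidate sub-policies with the best expected value under the sampled belief distribution. The sketch below illustrates both on a toy tiger-like problem reduced to a single-agent view; all problem data and names (P_OBS, heuristic_action, max_trees, the per-candidate value vectors) are illustrative assumptions, not the paper's algorithm or benchmarks.

```python
import random

STATES = ("tiger-left", "tiger-right")
P_OBS = {"tiger-left":  {"hear-left": 0.85, "hear-right": 0.15},
         "tiger-right": {"hear-left": 0.15, "hear-right": 0.85}}

def heuristic_action(belief):
    # Simple heuristic used only to drive the simulation: always listen.
    return "listen"

def belief_update(belief, obs):
    # Bayes update after a "listen" action (the state does not change).
    post = {s: belief[s] * P_OBS[s][obs] for s in STATES}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

def sample_beliefs(depth, n_samples, rng):
    """Estimate the distribution of beliefs reachable at `depth` by simulation."""
    beliefs = []
    for _ in range(n_samples):
        b = {s: 0.5 for s in STATES}             # uniform initial belief
        true_s = rng.choice(STATES)
        for _ in range(depth):
            heuristic_action(b)                  # here: always "listen"
            obs = ("hear-left" if rng.random() < P_OBS[true_s]["hear-left"]
                   else "hear-right")
            b = belief_update(b, obs)
        beliefs.append(b)
    return beliefs

# Candidate sub-policies, summarised by a per-state value vector (toy numbers).
CANDIDATES = {"open-left":  {"tiger-left": -100.0, "tiger-right": 10.0},
              "open-right": {"tiger-left": 10.0,   "tiger-right": -100.0},
              "listen":     {"tiger-left": -1.0,   "tiger-right": -1.0}}

def select_policies(beliefs, max_trees):
    """Keep the max_trees candidates with the best mean value under the sampled beliefs."""
    def score(values):
        return sum(sum(b[s] * values[s] for s in STATES) for b in beliefs) / len(beliefs)
    ranked = sorted(CANDIDATES, key=lambda name: score(CANDIDATES[name]), reverse=True)
    return ranked[:max_trees]

rng = random.Random(0)
sampled = sample_beliefs(depth=3, n_samples=200, rng=rng)
print("retained sub-policies:", select_policies(sampled, max_trees=2))
```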

    Strengthening Deterministic Policies for POMDPs

    The synthesis problem for partially observable Markov decision processes (POMDPs) is to compute a policy that satisfies a given specification. Such policies have to take the full execution history of a POMDP into account, rendering the problem undecidable in general. A common approach is to use a limited amount of memory and randomize over potential choices. Yet, this problem is still NP-hard and often computationally intractable in practice. A restricted problem is to use neither history nor randomization, yielding policies that are called stationary and deterministic. Previous approaches to compute such policies employ mixed-integer linear programming (MILP). We provide a novel MILP encoding that supports sophisticated specifications in the form of temporal logic constraints. It is able to handle an arbitrary number of such specifications. Yet, randomization and memory are often mandatory to achieve satisfactory policies. First, we extend our encoding to deliver a restricted class of randomized policies. Second, based on the results of the original MILP, we employ a preprocessing of the POMDP to encompass memory-based decisions. The advantages of our approach over state-of-the-art POMDP solvers lie (1) in the flexibility to strengthen simple deterministic policies without losing computational tractability and (2) in the ability to enforce the provable satisfaction of arbitrarily many specifications. The latter point allows taking trade-offs between performance and safety aspects of typical POMDP examples into account. We show the effectiveness of our method on a broad range of benchmarks.
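    To make the "stationary and deterministic" setting concrete, the following is a minimal sketch of the kind of big-M MILP commonly used for memoryless deterministic POMDP policies: one binary variable per (observation, action) pair selects the action, and state-value variables are bounded by the Bellman backup of the selected action only. This is a generic illustration under a toy discounted model, not the paper's encoding (which additionally handles temporal-logic specifications); all data and names below are assumptions, and it uses PuLP with the CBC solver.

```python
import pulp

STATES = ["s0", "s1"]
ACTIONS = ["a0", "a1"]
OBS = {"s0": "o0", "s1": "o1"}     # toy deterministic observation function
GAMMA = 0.9
M = 50.0                           # big-M, larger than any value reachable here

# Toy transition probabilities P[s][a][s'] and rewards R[s][a].
P = {"s0": {"a0": {"s0": 0.8, "s1": 0.2}, "a1": {"s0": 0.1, "s1": 0.9}},
     "s1": {"a0": {"s0": 0.5, "s1": 0.5}, "a1": {"s0": 0.9, "s1": 0.1}}}
R = {"s0": {"a0": 1.0, "a1": 0.0}, "s1": {"a0": 0.0, "a1": 2.0}}
b0 = {"s0": 0.5, "s1": 0.5}        # initial belief

prob = pulp.LpProblem("memoryless_pomdp_policy", pulp.LpMaximize)

# sigma[o][a] = 1 iff the policy plays a after observing o; exactly one per observation.
sigma = {o: {a: pulp.LpVariable(f"sigma_{o}_{a}", cat="Binary") for a in ACTIONS}
         for o in set(OBS.values())}
for o in sigma:
    prob += pulp.lpSum(sigma[o].values()) == 1

# V[s] is the value of state s under the selected policy; the Bellman constraint
# for a non-selected action is relaxed away by the big-M term.
V = {s: pulp.LpVariable(f"V_{s}", lowBound=0, upBound=M) for s in STATES}
for s in STATES:
    for a in ACTIONS:
        backup = R[s][a] + GAMMA * pulp.lpSum(P[s][a][t] * V[t] for t in STATES)
        prob += V[s] <= backup + M * (1 - sigma[OBS[s]][a])

# Maximize the value of the initial belief.
prob += pulp.lpSum(b0[s] * V[s] for s in STATES)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for o in sorted(sigma):
    chosen = [a for a, v in sigma[o].items() if v.value() > 0.5]
    print(f"observation {o} -> action {chosen[0]}")
print("value of initial belief:", pulp.value(prob.objective))
```

    Temporal-logic specifications, randomization over choices, and memory-based preprocessing, as described in the abstract, would add further constraints and variables on top of such an encoding.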