97 research outputs found

    Solving Factored MDPs with Hybrid State and Action Variables

    Full text link
    Efficient representations and solutions for large decision problems with continuous and discrete variables are among the most important challenges faced by the designers of automated decision support systems. In this paper, we describe a novel hybrid factored Markov decision process (MDP) model that allows for a compact representation of these problems, and a new hybrid approximate linear programming (HALP) framework that permits their efficient solution. The central idea of HALP is to approximate the optimal value function by a linear combination of basis functions and to optimize its weights by linear programming. We analyze both theoretical and computational aspects of this approach, and demonstrate its scale-up potential on several hybrid optimization problems.
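
    The core of HALP is easiest to see in the purely discrete special case, where it reduces to standard approximate linear programming: the value function is approximated as V(s) ≈ Σ_i w_i φ_i(s) and the weights are found with a single linear program. The sketch below is a minimal illustration of that idea, not the paper's hybrid formulation; the tiny MDP, the basis functions, and the state-relevance weights are all invented for illustration.

```python
# Minimal approximate-linear-programming sketch for a *discrete* MDP.
# Illustrates the idea behind HALP, not the paper's hybrid formulation;
# the MDP, basis functions and relevance weights are invented.
import numpy as np
from scipy.optimize import linprog

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0, 1, size=(n_states, n_actions))                 # R[s, a]

# Basis functions phi_i(s): a constant feature plus one sloped feature.
Phi = np.array([[1.0, 0.0],
                [1.0, 0.5],
                [1.0, 1.0]])          # Phi[s, i], shape (n_states, n_basis)
alpha = np.ones(n_states) / n_states  # state-relevance weights

# ALP: minimize  sum_s alpha(s) * Phi(s) @ w
# subject to     Phi(s) @ w >= R(s,a) + gamma * sum_s' P(s,a,s') * Phi(s') @ w
# rewritten as   (gamma * P[s,a] @ Phi - Phi[s]) @ w <= -R(s,a)   for all s, a.
c = alpha @ Phi
A_ub, b_ub = [], []
for s in range(n_states):
    for a in range(n_actions):
        A_ub.append(gamma * P[s, a] @ Phi - Phi[s])
        b_ub.append(-R[s, a])
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * Phi.shape[1])
w = res.x
print("basis weights:", w)
print("approximate values:", Phi @ w)
```

    With a constant feature in the basis the LP is always feasible, and the constraints force the approximation to upper-bound the optimal values, which is why the objective is a minimization.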

    Max-Plus Matching Pursuit for Deterministic Markov Decision Processes

    Get PDF
    We consider deterministic Markov decision processes (MDPs) and apply max-plus algebra tools to approximate the value iteration algorithm by a smaller-dimensional iteration based on a representation over dictionaries of value functions. The setup naturally leads to novel theoretical results which are simply formulated due to the max-plus algebra structure. For example, when considering a fixed (non-adaptive) finite basis, the computational complexity of approximating the optimal value function is not directly related to the number of states, but to notions of covering numbers of the state space. In order to break the curse of dimensionality in factored state spaces, we consider adaptive bases that can adapt to particular problems, leading to an algorithm similar to matching pursuit from signal processing. These adaptive variants currently come with no theoretical guarantees but work well empirically on simple deterministic MDPs derived from low-dimensional continuous control problems. We focus primarily on deterministic MDPs but note that the framework can be applied to all MDPs by considering measure-based formulations.
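
    As a rough illustration of the fixed-basis case, the sketch below runs value iteration for a tiny deterministic MDP while repeatedly projecting onto a small max-plus dictionary: the value function is represented as V(s) ≈ max_i (λ_i + φ_i(s)), and the projection λ_i = min_s (V(s) - φ_i(s)) gives the best under-approximation of that form. The MDP, the dictionary, and the naive projection over all states are simplifications invented for illustration; in particular the matching-pursuit basis adaptation is omitted.

```python
# Sketch: max-plus approximate value iteration with a fixed dictionary,
# on a tiny deterministic MDP. Illustrative only; dictionary and the
# naive full-state projection are simplifications.
import numpy as np

n_states, n_actions, gamma = 6, 2, 0.9
rng = np.random.default_rng(1)
nxt = rng.integers(0, n_states, size=(n_states, n_actions))   # f(s, a)
rew = rng.uniform(0, 1, size=(n_states, n_actions))           # r(s, a)

# Max-plus dictionary: a few "bump" functions centred on selected states.
centers = np.array([0, 2, 4])
S = np.arange(n_states)
Phi = -np.abs(S[None, :] - centers[:, None]).astype(float)    # Phi[i, s]

lam = np.zeros(len(centers))                                   # max-plus coefficients

def decode(lam):
    """V(s) = max_i (lam_i + phi_i(s)) -- a max-plus linear combination."""
    return np.max(lam[:, None] + Phi, axis=0)

# A fixed number of sweeps is enough for gamma = 0.9.
for _ in range(300):
    V = decode(lam)
    # Exact Bellman backup for a deterministic MDP: T V(s) = max_a r(s,a) + gamma V(f(s,a))
    TV = np.max(rew + gamma * V[nxt], axis=1)
    # Max-plus projection back onto the dictionary (best under-approximation):
    lam = np.min(TV[None, :] - Phi, axis=1)

print("max-plus coefficients:", lam)
print("approximate values   :", decode(lam))
```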

    Cluster-Based Control of Transition-Independent MDPs

    Full text link
    This work studies the ability of a third-party influencer to control the behavior of a multi-agent system. The controller exerts actions with the goal of guiding agents to attain target joint strategies. Under mild assumptions, this can be modeled as a Markov decision problem and solved to find a control policy. This setup is refined by introducing more degrees of freedom to the control: the agents are partitioned into disjoint clusters such that each cluster can receive a unique control. Solving for a cluster-based policy through standard techniques like value iteration or policy iteration, however, takes exponentially more computation time due to the expanded action space. A solution is presented in the Clustered Value Iteration (CVI) algorithm, which iteratively solves for an optimal control via a round-robin approach across the clusters. CVI converges exponentially faster than standard value iteration and can find policies that closely approximate the MDP's true optimal value. For MDPs with separable reward functions, CVI will converge to the true optimum. While an optimal clustering assignment is difficult to compute, a good clustering assignment for the agents may be found with a greedy splitting algorithm, whose associated values form a monotonic, submodular lower bound to the values of optimal clusters. Finally, these control ideas are demonstrated on simulated examples.
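
    The round-robin idea can be sketched as follows: instead of maximizing over the full joint control (one action component per cluster), each sweep optimizes one cluster's component while holding the others at their current policy. The toy MDP below (random transitions, two clusters with two actions each) and the stopping rule are invented; this shows the coordinate-ascent flavor of the approach, not the paper's exact algorithm.

```python
# Sketch: round-robin ("clustered") value iteration on a toy MDP whose joint
# action is a pair (a_0, a_1), one component per cluster. Illustrative only.
import numpy as np
from itertools import product

n_states, gamma = 5, 0.9
cluster_actions = [2, 2]                       # actions available to each cluster
joint = list(product(*[range(k) for k in cluster_actions]))
rng = np.random.default_rng(2)
P = {a: rng.dirichlet(np.ones(n_states), size=n_states) for a in joint}  # P[a][s, s']
R = {a: rng.uniform(0, 1, size=n_states) for a in joint}                 # R[a][s]

V = np.zeros(n_states)
policy = np.zeros((n_states, len(cluster_actions)), dtype=int)  # one component per cluster

for sweep in range(200):
    V_old = V.copy()
    for k in range(len(cluster_actions)):      # round robin over clusters
        for s in range(n_states):
            best_q, best_ak = -np.inf, 0
            for ak in range(cluster_actions[k]):
                a = [int(x) for x in policy[s]]
                a[k] = ak
                a = tuple(a)                    # joint action with only cluster k varied
                q = R[a][s] + gamma * P[a][s] @ V
                if q > best_q:
                    best_q, best_ak = q, ak
            policy[s, k] = best_ak
            V[s] = best_q                       # update value with the improved component
    if np.max(np.abs(V - V_old)) < 1e-8:
        break

print("cluster-wise policy:\n", policy)
print("approximate values :", V)
```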

    Effective Approximations for Multi-Robot Coordination in Spatially Distributed Tasks

    Get PDF
    Although multi-robot systems have received substantial research attention in recent years, multi-robot coordination remains a difficult task. In particular, when dealing with spatially distributed tasks and many robots, central control quickly becomes infeasible due to the exponential explosion in the number of joint actions and states. We propose a general algorithm that allows for distributed control and overcomes the exponential growth in the number of joint actions by aggregating the effect of other agents in the system into a probabilistic model, called subjective approximations, and then choosing the best response. We show for a multi-robot grid world how the algorithm can be implemented in the well-studied Multiagent Markov Decision Process framework, as a sub-class called spatial task allocation problems (SPATAPs). In this framework, we show how to tackle SPATAPs using online, distributed planning by combining subjective agent approximations with a restriction of attention to the current tasks in the world. An empirical evaluation shows that the combination of both strategies scales to very large problems while providing near-optimal solutions.
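
    A rough sketch of the best-response idea: each robot plans for itself after folding the other robots into a simple probabilistic model, for example an estimate of how likely each task is to be picked up by someone closer. The grid, the distance-based presence probabilities, and the greedy task choice below are invented simplifications, not the paper's actual subjective approximation.

```python
# Sketch: a robot choosing a task after modelling the *other* robots only as
# probabilities that each task will already be served. All numbers invented.
import numpy as np

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def best_response(me, others, tasks, gamma=0.95):
    """Pick the task with the best discounted expected reward for robot `me`,
    assuming each closer robot "claims" the task with some probability
    (a crude stand-in for the subjective model of the other agents)."""
    best_task, best_value = None, -np.inf
    for t, pos in tasks.items():
        d_me = manhattan(me, pos)
        p_free = 1.0                          # probability the task is still unserved
        for o in others:
            if manhattan(o, pos) < d_me:      # someone closer will likely take it
                p_free *= 0.2                 # invented penalty per closer robot
        value = p_free * (gamma ** d_me)      # reward 1.0 on completion, discounted
        if value > best_value:
            best_task, best_value = t, value
    return best_task, best_value

tasks = {"t1": (0, 4), "t2": (3, 1), "t3": (6, 6)}
me, others = (0, 0), [(2, 1), (5, 5)]
print(best_response(me, others, tasks))
```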

    Effective Approximations for Spatial Task Allocation Problems

    Get PDF
    Although multi-robot systems have received substantial research attention in recent years, multi-robot coordination remains a difficult task. In particular, when dealing with spatially distributed tasks and many robots, central control quickly becomes infeasible due to the exponential explosion in the number of joint actions and states. We propose a general algorithm that allows for distributed control and overcomes the exponential growth in the number of joint actions by aggregating the effect of other agents in the system into a probabilistic model, called subjective approximations, and then choosing the best response. We show for a multi-robot grid world how the algorithm can be implemented in the well-studied Multiagent Markov Decision Process framework, as a sub-class called spatial task allocation problems (SPATAPs). In this framework, we show how to tackle SPATAPs using online, distributed planning by combining subjective agent approximations with a restriction of attention to the current tasks in the world. An empirical evaluation shows that the combination of both strategies scales to very large problems while providing near-optimal solutions.

    Planning in Hybrid Structured Stochastic Domains

    Get PDF
    Efficient representations and solutions for large structured decision problems with continuous and discrete variables are among the important challenges faced by the designers of automated decision support systems. In this work, we describe a novel hybrid factored Markov decision process (MDP) model that allows for a compact representation of these problems, and a hybrid approximate linear programming (HALP) framework that permits their efficient solution. The central idea of HALP is to approximate the optimal value function of an MDP by a linear combination of basis functions and to optimize its weights by linear programming. We study both theoretical and practical aspects of this approach, and demonstrate its scale-up potential on several hybrid optimization problems.
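
    The optimization at the heart of HALP can be written compactly. With basis functions f_i and weights w_i, the approximate value function is a linear combination of the basis, and the weights solve a linear program of the generic approximate-linear-programming form below; in the hybrid setting the expectations combine sums over discrete state components with integrals over continuous ones, and the state-relevance weights α are a design choice. This is the generic ALP shape written from the abstract's description, not a verbatim reproduction of the paper's formulation.

```latex
% Approximate value function: a linear combination of basis functions
V_w(x) \;=\; \sum_i w_i\, f_i(x)

% Weights chosen by a linear program (generic ALP form):
\min_{w}\ \ \mathbb{E}_{x \sim \alpha}\!\left[\, V_w(x) \,\right]
\quad\text{s.t.}\quad
V_w(x) \;\ge\; R(x,a) \;+\; \gamma\, \mathbb{E}\!\left[\, V_w(x') \mid x, a \,\right]
\qquad \forall\, x,\, a .
```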

    Fast approximate hierarchical solution of MDPs

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 89-91). In this thesis, we present an efficient algorithm for creating and solving hierarchical models of large Markov decision processes (MDPs). As the size of the MDP increases, finding an exact solution becomes intractable, so we expect only to find an approximate solution. We also assume that the hierarchies we create are not necessarily applicable to more than one problem, so we must be able to construct and solve the hierarchical model in less time than it would have taken to simply solve the original, flat model. Our approach works in two stages. We first create the hierarchical MDP by forming clusters of states that can transition easily among themselves. We then solve the hierarchical MDP using a quick bottom-up pass, based on a deterministic approximation of the expected cost of moving from one state to another, followed by a top-down pass that derives a policy; this avoids solving low-level MDPs for multiple objectives. The resulting policy may be suboptimal, but it is guaranteed to reach a goal state in any problem in which a goal is reachable under the optimal policy. We have two versions of this algorithm, one for enumerated-state MDPs and one for factored MDPs. We have tested the enumerated-state algorithm on classic problems and shown that it is better than or comparable to current work in the field. Factored MDPs are a way of specifying extremely large MDPs without listing all of the states. Because the problem has a compact representation, we suspect that the solution should, in many cases, also have a compact representation. We have an implementation for factored MDPs and have shown that it can find solutions for large, factored problems. By Jennifer L. Barry. S.M.
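
    The two-stage structure can be sketched roughly as follows: treat the MDP as a weighted graph of deterministic "expected costs", cluster its states, plan at the cluster level first, and only then plan locally inside the current cluster toward the chosen exit. The graph, the hand-made clustering, and the greedy local step below are invented for illustration and are not the thesis's algorithm.

```python
# Sketch of the two-stage hierarchical idea on an invented cost graph:
# bottom-up deterministic cost estimates, top-down cluster-level planning.
import heapq

# deterministic cost graph: state -> {neighbor: cost}
graph = {
    0: {1: 1, 2: 4}, 1: {0: 1, 3: 2}, 2: {0: 4, 3: 1},
    3: {1: 2, 2: 1, 4: 5}, 4: {3: 5, 5: 1}, 5: {4: 1},
}
clusters = {0: [0, 1, 2, 3], 1: [4, 5]}            # hand-made clustering
cluster_of = {s: c for c, ss in clusters.items() for s in ss}

def dijkstra(graph, source):
    """Deterministic cost-to-go estimates from `source` (the bottom-up pass)."""
    dist, pq = {source: 0.0}, [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def hierarchical_next_state(state, goal):
    """Top-down: if the goal lies in another cluster, aim only for the cheapest
    'exit' state of the current cluster; otherwise head straight for the goal."""
    dist = dijkstra(graph, state)
    if cluster_of[state] == cluster_of[goal]:
        target = goal
    else:
        exits = [u for u in clusters[cluster_of[state]]
                 if any(cluster_of[v] != cluster_of[state] for v in graph[u])]
        target = min(exits, key=lambda u: dist.get(u, float("inf")))
    to_target = dijkstra(graph, target)
    # one greedy local step toward the chosen target along the cost graph
    return min(graph[state], key=lambda v: graph[state][v] + to_target.get(v, float("inf")))

print(hierarchical_next_state(0, goal=5))   # -> 1, a step toward the exit (state 3) into cluster 1
```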

    Planning under risk and uncertainty

    Get PDF
    This thesis concentrates on the optimization of large-scale management policies under conditions of risk and uncertainty. In paper I, we address the problem of solving large-scale spatial and temporal natural resource management problems. To model these types of problems, the framework of graph-based Markov decision processes (GMDPs) can be used. Two algorithms for computing high-quality management policies are presented: the first is based on approximate linear programming (ALP) and the second on mean-field approximation and approximate policy iteration (MF-API). The applicability and efficiency of the algorithms were demonstrated by their ability to compute near-optimal management policies for two large-scale management problems. It was concluded that the two algorithms compute policies of similar quality. However, the MF-API algorithm should be used when both the policy and the expected value of the computed policy are required, while the ALP algorithm may be preferred when only the policy is required. In paper II, a number of reinforcement learning algorithms are presented that can be used to compute management policies for GMDPs when the transition function can only be simulated because its explicit formulation is unknown. Studies of the efficiency of the algorithms on three management problems led us to conclude that some of these algorithms were able to compute near-optimal management policies. In paper III, we used the GMDP framework to optimize long-term forestry management policies under stochastic wind-damage events. The model was demonstrated by a case study of an estate consisting of 1,200 ha of forest land, divided into 623 stands. We concluded that managing the estate according to the risk of wind damage increased the expected net present value (NPV) of the whole estate only slightly, by less than 2%, under different wind-risk assumptions. Most of the stands were managed in the same manner as when the risk of wind damage was not considered. However, the analysis rests on properties of the model that need to be refined before definite conclusions can be drawn.
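
    A rough illustration of the mean-field idea used in paper I: in a GMDP each site's transition depends on its neighbours, and the mean-field approximation propagates per-site marginals instead of the exponentially large joint distribution. The chain graph, the damage probabilities, and the fixed policy below are invented; this is a sketch of the approximation only, not the MF-API algorithm itself.

```python
# Sketch of the mean-field idea behind graph-based MDPs (GMDPs): each site's
# transition depends on its neighbours, but when propagating distributions we
# summarise the neighbours by their current *marginals*. All numbers invented.
import numpy as np

n_sites, horizon = 5, 20
neighbours = {i: [j for j in (i - 1, i + 1) if 0 <= j < n_sites] for i in range(n_sites)}

def p_damage(expected_damaged_neighbours, treated):
    """Invented local model: prob. a healthy site becomes damaged this step."""
    base = 0.05 + 0.15 * expected_damaged_neighbours
    return base * (0.3 if treated else 1.0)

policy = [True, False, False, False, True]   # fixed per-site "treat?" decisions
q = np.full(n_sites, 0.1)                    # marginal P(site i is damaged)

for t in range(horizon):
    q_new = q.copy()
    for i in range(n_sites):
        exp_dmg = sum(q[j] for j in neighbours[i])   # mean-field neighbour summary
        q_new[i] = q[i] + (1 - q[i]) * p_damage(exp_dmg, policy[i])  # damage is absorbing here
    q = q_new

print("mean-field damage marginals after", horizon, "steps:", np.round(q, 3))
```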
