    Linear Programming for Large-Scale Markov Decision Problems

    We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large-scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose two techniques, one based on stochastic convex optimization and one based on constraint sampling. In both cases, we give bounds showing that the performance of our algorithms approaches the best achievable by any policy in the comparison class. Most importantly, these results depend on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithms in a queuing application.

    Comment: 27 pages, 3 figures
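
    For orientation, the dual LP the abstract refers to has roughly the following standard form (notation assumed here, not taken verbatim from the paper: mu is a stationary state-action distribution, ell the per-stage cost, P the transition kernel):

        \min_{\mu \ge 0} \ \sum_{s,a} \mu(s,a)\,\ell(s,a)
        \text{s.t.} \ \sum_{a} \mu(s',a) = \sum_{s,a} P(s' \mid s,a)\,\mu(s,a) \quad \forall s',
        \qquad \sum_{s,a} \mu(s,a) = 1.

    Restricting mu to a feature-based set such as \{\Phi\theta\} (plus the neighborhood mentioned above) replaces |S||A| variables with one variable per feature, which is what lets the resulting bounds depend on the comparison class rather than the state space.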

    A Linear Programming Approach to Error Bounds for Random Walks in the Quarter-plane

    We consider the approximation of the performance of random walks in the quarter-plane. The approximation is in terms of a random walk with a product-form stationary distribution, obtained by perturbing the transition probabilities along the boundaries of the state space. A Markov reward approach is used to bound the approximation error. The main contribution of this work is the formulation of a linear program that provides a bound on the approximation error.
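
    As a hedged illustration of the product form mentioned above (the symbols C, rho, and sigma are assumptions for exposition, not taken from the paper), the perturbed walk has a stationary distribution of the shape

        \pi(i,j) = C\,\rho^{i}\,\sigma^{j}, \qquad i, j \ge 0, \quad 0 < \rho, \sigma < 1,

    so a performance measure \sum_{i,j} \pi(i,j) F(i,j) reduces to geometric sums that can be evaluated in closed form; the linear program then bounds the gap between this closed-form quantity and the corresponding quantity for the original walk.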

    An approximate dynamic programming approach to risk sensitive control of execution costs

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 43-44).

    We study the problem of optimal execution within a dynamic programming framework. Given an exponential objective function, system variables which are normally distributed, and linear market dynamics, we derive a closed-form solution for optimal trading trajectories. We show that a trader lacking private information has trajectories which are static in nature, whilst a trader with private information requires real-time observations to execute optimally. We further show that Bellman's equations become increasingly complex to solve if the market dynamics are nonlinear or if additional constraints are added to the problem. As such, we propose an approximate dynamic program using linear programming which achieves near-optimality. The algorithm approximates the exponential objective function within a class of linear architectures and takes advantage of a probabilistic constraint sampling scheme in order to terminate. The performance of the algorithm relies on the quality of the approximation, and as such we propose a set of heuristics for its efficient implementation.

    by David Jeria. M.Eng.
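
    The constraint-sampling step described above can be sketched in a few lines. This is a generic discounted-cost approximate LP with sampled Bellman constraints, not the thesis's exponential-objective formulation; all problem data, basis functions, and the uniform sampling distribution below are synthetic assumptions.

        # Constraint-sampled approximate LP (toy sketch; all data invented).
        # Approximate the cost-to-go as J(x) ~ phi(x) @ r and keep only a
        # random subset of the S*A Bellman constraints.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        S, A, K = 200, 3, 5                  # states, actions, basis functions
        alpha = 0.95                         # discount factor
        phi = rng.standard_normal((S, K))    # basis functions (assumed)
        g = rng.random((S, A))               # per-stage costs (assumed)
        P = rng.random((S, A, S))
        P /= P.sum(axis=2, keepdims=True)    # row-stochastic transitions

        # Sample a manageable subset of state-action constraints.
        n_samples = 300
        xs = rng.integers(0, S, n_samples)
        acts = rng.integers(0, A, n_samples)

        # ALP: maximize nu' Phi r  s.t.  (Phi r)(x) <= g(x,a) + alpha E[(Phi r)(x')];
        # each sampled constraint is (phi(x) - alpha P(x,a) @ phi) @ r <= g(x,a).
        A_ub = phi[xs] - alpha * np.einsum('ns,sk->nk', P[xs, acts], phi)
        b_ub = g[xs, acts]
        nu = np.ones(S) / S                  # state-relevance weights (assumed uniform)
        c = -(nu @ phi)                      # linprog minimizes, so negate

        # Box bounds keep the sampled LP bounded even if too few constraints survive.
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-100.0, 100.0)] * K)
        print("basis weights r:", res.x)

    The number of sampled constraints, not the size of the state space, controls the LP's size, which is the property the abstract relies on for tractability.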

    Approximate Dynamic Programming via a Smoothed Linear Program

    We present a novel linear program for approximating the dynamic programming cost-to-go function in high-dimensional stochastic control problems. LP approaches to approximate DP have typically relied on a natural “projection” of a well-studied linear program for exact dynamic programming. Such programs restrict attention to approximations that are lower bounds to the optimal cost-to-go function. Our program, the “smoothed approximate linear program,” is distinct from such approaches: it relaxes the restriction to lower-bounding approximations in an appropriate fashion while remaining computationally tractable. Doing so appears to have several advantages. First, we demonstrate bounds on the quality of approximation to the optimal cost-to-go function afforded by our approach. These bounds are, in general, no worse than those available for extant LP approaches, and for specific problem instances they can be shown to be arbitrarily stronger. Second, experiments with our approach on a pair of challenging problems (the game of Tetris and a queueing network control problem) show that it outperforms the existing LP approach (which has previously been shown to be competitive with several ADP algorithms) by a substantial margin.
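
    A hedged sketch of the relaxation, in notation standard for approximate LPs (Phi a basis matrix, r its weights, T the Bellman operator, nu state-relevance weights; the slack distribution pi and budget theta follow my reading of the smoothed formulation rather than the paper itself):

        \text{ALP:}  \quad \max_{r} \ \nu^{\top} \Phi r \quad \text{s.t.} \quad \Phi r \le T \Phi r
        \text{SALP:} \quad \max_{r,\, s \ge 0} \ \nu^{\top} \Phi r \quad \text{s.t.} \quad \Phi r \le T \Phi r + s, \quad \pi^{\top} s \le \theta

    With theta = 0 the slack vanishes and the ALP, and hence the lower-bound property \Phi r \le J^{*}, is recovered; theta > 0 permits a budgeted violation of the Bellman constraints, which is the relaxation of the lower-bounding restriction the abstract describes.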