
    Computational Methods for Risk-Averse Undiscounted Transient Markov Models

    The total cost problem for discrete-time controlled transient Markov models is considered. The objective functional is a Markov dynamic risk measure of the total cost. Two solution methods, value and policy iteration, are proposed and their convergence is analyzed. For policy evaluation within the policy iteration method, we propose two algorithms, a nonsmooth Newton method and a convex programming formulation, and prove their convergence. The results are illustrated on a credit limit control problem.
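
    As a rough illustration of the value-iteration method described above, the following sketch iterates a risk-averse Bellman operator on a finite transient model with a single cost-free absorbing state, using a mean-semideviation one-step risk mapping as a stand-in for the paper's Markov dynamic risk measure. All names, the choice of risk mapping, and the stopping rule are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def mean_semideviation(values, probs, kappa=0.5):
        # One-step risk mapping: E[Z] + kappa * E[(Z - E[Z])_+] (illustrative choice).
        mean = probs @ values
        return mean + kappa * (probs @ np.maximum(values - mean, 0.0))

    def risk_averse_value_iteration(costs, trans, kappa=0.5, tol=1e-8, max_iter=10000):
        # costs[x][a]: one-step cost of action a in state x.
        # trans[x][a]: probability vector over the n+1 states; index n is the
        # cost-free absorbing state, whose value is fixed at 0.
        n = len(costs)
        v = np.zeros(n + 1)
        for _ in range(max_iter):
            v_new = v.copy()
            for x in range(n):
                v_new[x] = min(
                    costs[x][a] + mean_semideviation(v, np.asarray(trans[x][a]), kappa)
                    for a in range(len(costs[x]))
                )
            if np.max(np.abs(v_new - v)) < tol:
                break
            v = v_new
        return v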

    Approximating the Termination Value of One-Counter MDPs and Stochastic Games

    One-counter MDPs (OC-MDPs) and one-counter simple stochastic games (OC-SSGs) are 1-player and 2-player turn-based zero-sum stochastic games, respectively, played on the transition graph of classic one-counter automata (equivalently, pushdown automata with a 1-letter stack alphabet). A key objective for the analysis and verification of these games is the termination objective, where the players aim to maximize (respectively, minimize) the probability of hitting counter value 0, starting at a given control state and a given counter value. Recently, we studied qualitative decision problems ("is the optimal termination value = 1?") for OC-MDPs (and OC-SSGs) and showed them to be decidable in P-time (in NP and coNP, respectively). However, quantitative decision and approximation problems ("is the optimal termination value ≥ p?", or "approximate the termination value within epsilon") are far more challenging. This is so in part because optimal strategies may not exist, and because even when they do exist they can have a highly non-trivial structure. It thus remained open even whether any of these quantitative termination problems are computable. In this paper we show that all quantitative approximation problems for the termination value of OC-MDPs and OC-SSGs are computable. Specifically, given an OC-SSG and epsilon > 0, we can compute a value v that approximates the value of the OC-SSG termination game within additive error epsilon, and furthermore we can compute epsilon-optimal strategies for both players in the game. A key ingredient in our proofs is a subtle martingale, derived from solving certain LPs that we can associate with a maximizing OC-MDP. An application of Azuma's inequality to these martingales yields a computable bound for the "wealth" at which a "rich person's strategy" becomes epsilon-optimal for OC-MDPs. (Comment: 35 pages, 1 figure; full version of a paper presented at ICALP 2011, invited for submission to Information and Computation.)
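
    For intuition only, the sketch below approximates the maximal termination probability of a one-counter MDP by truncating the counter at a bound N and iterating the Bellman operator on the truncated state space, which yields a pessimistic lower bound that improves as N grows. This is a naive baseline under assumed data structures (delta, actions), not the paper's LP/martingale construction, which is what actually provides computable error guarantees.

    import numpy as np

    def termination_value_truncated(num_states, actions, delta, N=200, iters=5000):
        # delta[(q, a)] is a list of (prob, q_next, counter_change) triples with
        # counter_change in {-1, 0, +1}; actions[q] lists the actions enabled in q.
        # v[q, c] approximates the optimal probability of reaching counter 0 from (q, c).
        v = np.zeros((num_states, N + 1))
        v[:, 0] = 1.0                       # counter value 0 means we have already terminated
        for _ in range(iters):
            new = v.copy()
            for q in range(num_states):
                for c in range(1, N):       # the cut-off row c = N stays at 0 (pessimistic)
                    new[q, c] = max(
                        sum(p * v[q2, c + d] for p, q2, d in delta[(q, a)])
                        for a in actions[q]
                    )
            if np.max(np.abs(new - v)) < 1e-12:
                break
            v = new
        return v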

    Belief-space Planning for Active Visual SLAM in Underwater Environments.

    Autonomous mobile robots operating in a priori unknown environments must be able to integrate path planning with simultaneous localization and mapping (SLAM) in order to perform tasks like exploration, search and rescue, inspection, reconnaissance, and target-tracking. This level of autonomy is especially difficult in underwater environments, where GPS is unavailable, communication is limited, and environment features may be sparsely distributed. In these situations, the path taken by the robot can drastically affect the performance of SLAM, so the robot must plan and act intelligently and efficiently to ensure successful task completion. This document proposes novel research in belief-space planning for active visual SLAM in underwater environments. Our motivating application is ship hull inspection with an autonomous underwater robot. We design a Gaussian belief-space planning formulation that accounts for the randomness of the loop-closure measurements in visual SLAM and serves as the mathematical foundation for the research in this thesis. Combining this planning formulation with sampling-based techniques, we efficiently search for loop-closure actions throughout the environment and present a two-step approach for selecting revisit actions that results in an opportunistic active SLAM framework. The proposed active SLAM method is tested in hybrid simulations and real-world field trials of an underwater robot performing inspections of a physical modeling basin and a U.S. Coast Guard cutter. To reduce computational load, we present research into efficient planning by compressing the representation and examining the structure of the underlying SLAM system. We propose the use of graph sparsification methods online to reduce complexity by planning with an approximate distribution that represents the original, full pose graph. We also propose the use of the Bayes tree data structure, first introduced for fast inference in SLAM, to perform efficient incremental updates when evaluating candidate plans that are similar. As a final contribution, we design risk-averse objective functions that account for the randomness within our planning formulation. We show that this aversion to uncertainty in the posterior belief leads to desirable and intuitive behavior within active SLAM. PhD thesis, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133303/1/schaves_1.pd
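
    The following sketch illustrates one way a risk-averse objective of the kind described above could rank candidate revisit actions: sample possible loop-closure outcomes, score each action by the mean plus a multiple of the standard deviation of the posterior log-determinant uncertainty, and pick the minimizer. The hook simulate_posterior_cov, the scoring rule, and all parameters are hypothetical placeholders, not the thesis' actual formulation.

    import numpy as np

    def select_revisit_action(candidates, simulate_posterior_cov, num_samples=50, alpha=1.0, seed=0):
        # candidates: iterable of candidate loop-closure (revisit) actions.
        # simulate_posterior_cov(action, rng): hypothetical hook that samples whether the
        # loop closure succeeds and returns the resulting posterior covariance matrix.
        rng = np.random.default_rng(seed)
        best_action, best_score = None, np.inf
        for action in candidates:
            logdets = []
            for _ in range(num_samples):
                sigma = simulate_posterior_cov(action, rng)
                _, logdet = np.linalg.slogdet(sigma)      # log-determinant as the uncertainty measure
                logdets.append(logdet)
            logdets = np.asarray(logdets)
            score = logdets.mean() + alpha * logdets.std()   # penalize spread, not just the mean
            if score < best_score:
                best_action, best_score = action, score
        return best_action, best_score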

    Solving MDPs with thresholded lexicographic ordering using reinforcement learning

    Multiobjective problems with a strict importance order over the objectives occur in many real-life scenarios. While reinforcement learning (RL) is a promising approach with great potential to solve many real-life problems, the RL literature focuses primarily on single-objective tasks, and approaches that can directly address multiobjective problems with an importance order have been scarce. The few proposed approaches were noted to be heuristics without theoretical guarantees, and we found that their practical applicability is very limited, as they fail to find a good solution even in very common scenarios. In this work, we first investigate these shortcomings of the existing approaches and propose some solutions that could improve their practical performance. Finally, we propose a completely different approach based on policy optimization using our Lexicographic Projection Optimization (LPO) algorithm and show its performance on some benchmark problems.
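
    As a small illustration of thresholded lexicographic action selection (a common TLO-style rule, not necessarily the LPO algorithm proposed in this work), the sketch below filters actions objective by objective: an action survives objective i if its Q-value is within a slack of the best surviving action's value, with row 0 the most important objective.

    import numpy as np

    def thresholded_lexicographic_actions(q_values, slacks):
        # q_values: array of shape (num_objectives, num_actions), higher is better,
        # ordered from most to least important objective.
        # slacks: per-objective tolerance used to keep near-optimal actions in play.
        q_values = np.asarray(q_values, dtype=float)
        candidates = np.arange(q_values.shape[1])
        for i, tau in enumerate(slacks):
            best = q_values[i, candidates].max()
            candidates = candidates[q_values[i, candidates] >= best - tau]
        return candidates

    # Objective 0 dominates; the tie within its slack is broken by objective 1.
    q = [[1.0, 0.95, 0.4],
         [0.2, 0.90, 1.0]]
    print(thresholded_lexicographic_actions(q, slacks=[0.1, 0.0]))   # -> [1]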