2 research outputs found

    Parameter-Independent Strategies for pMDPs via POMDPs

    Markov Decision Processes (MDPs) are a popular class of models suitable for solving control decision problems in probabilistic reactive systems. We consider parametric MDPs (pMDPs) that include parameters in some of the transition probabilities to account for stochastic uncertainties of the environment such as noise or input disturbances. We study pMDPs with reachability objectives where the parameter values are unknown and impossible to measure directly during execution, but a probability distribution over the parameter values is known. We study for the first time the computation of parameter-independent strategies that are expectation optimal, i.e., that optimize the expected reachability probability under the probability distribution over the parameters. We present an encoding of our problem into partially observable MDPs (POMDPs), i.e., a reduction of our problem to computing optimal strategies in POMDPs. We evaluate our method experimentally on several benchmarks: a motivating (repeated) learner model; a series of benchmarks of varying configurations of a robot moving on a grid; and a consensus protocol. Comment: Extended version of a QEST 2018 paper.
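    As a concrete illustration of the objective described above, the following minimal Python sketch evaluates a toy pMDP whose start state offers one parametric action and one parameter-free action, and brute-forces the parameter-independent memoryless strategy with the best expected reachability probability under a discrete distribution over the parameter. The model, the distribution, and all names are invented for illustration; this is not the paper's method, which instead encodes the problem into a POMDP and computes optimal strategies there.

# Toy pMDP illustration (all data invented): one parametric action "a" whose
# success probability is the unknown parameter p, and one fixed action "b".
# States: 0 = start, 1 = goal, 2 = sink.

def transitions(p):
    """T[state][action] = {successor: probability} for parameter value p."""
    return {
        0: {"a": {1: p, 2: 1.0 - p},         # parametric transition
            "b": {1: 0.6, 2: 0.4}},          # parameter-free alternative
        1: {"a": {1: 1.0}, "b": {1: 1.0}},   # goal is absorbing
        2: {"a": {2: 1.0}, "b": {2: 1.0}},   # sink is absorbing
    }

GOAL = {1}

def reach_prob(strategy, p, iters=200):
    """Probability of reaching GOAL from state 0 under a fixed memoryless
    strategy (dict: state -> action) and parameter value p, computed by
    value iteration on the induced Markov chain."""
    T = transitions(p)
    v = {s: (1.0 if s in GOAL else 0.0) for s in T}
    for _ in range(iters):
        v = {s: 1.0 if s in GOAL else
                sum(q * v[t] for t, q in T[s][strategy[s]].items())
             for s in T}
    return v[0]

# Known discrete distribution over the unknown parameter value.
param_values  = [0.2, 0.5, 0.9]
param_weights = [0.25, 0.50, 0.25]

# Brute force over the two parameter-independent deterministic strategies and
# keep the one maximizing the EXPECTED reachability probability.
best = max(
    ({0: a0, 1: "a", 2: "a"} for a0 in ("a", "b")),
    key=lambda strat: sum(w * reach_prob(strat, p)
                          for p, w in zip(param_values, param_weights)),
)
print("expectation-optimal choice at the start state:", best[0])

    Under these invented numbers the fixed action "b" wins, since the expected value of p is 0.525 < 0.6; the paper obtains such strategies in general by solving the POMDP resulting from its encoding rather than by enumeration.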

    Formal methods for motion planning and control in dynamic and partially known environments

    This thesis is motivated by time- and safety-critical applications involving the use of autonomous vehicles to accomplish complex tasks in dynamic and partially known environments. We use temporal logic to formally express such complex tasks. Temporal logic specifications generalize the classical notions of stability and reachability widely studied within the control and hybrid systems communities. Given a model describing the motion of a robotic system in an environment and a formal task specification, the aim is to automatically synthesize a control policy that guarantees the satisfaction of the specification. This thesis presents novel control synthesis algorithms to tackle the problem of motion planning from temporal logic specifications in uncertain environments. For each of the planning and control synthesis problems addressed in this dissertation, the proposed algorithms are implemented, evaluated, and validated through experiments and/or simulations.

    The first part of this thesis focuses on a mobile robot whose success is measured by the completion of temporal logic tasks within a given period of time. In addition to such time constraints, the planning algorithm must also deal with the uncertainty that arises from changes in the robot's workspace during task execution. In particular, we consider a robot deployed in a partitioned environment subject to structural changes, such as doors that can open and close. The motion of the robot is modeled as a continuous-time Markov decision process, and the robot's mission is expressed as a Continuous Stochastic Logic (CSL) formula. A complete framework to find a control strategy that satisfies a specification given as a CSL formula is introduced.

    The second part of this thesis addresses the synthesis of controllers that guarantee the satisfaction of a task specification expressed as a syntactically co-safe Linear Temporal Logic (scLTL) formula. In this case, uncertainty is characterized by the partial knowledge of the robot's environment. Two scenarios are considered. First, a distributed team of robots required to satisfy the specification over a set of service requests occurring at the vertices of a known graph representing the environment is examined. Second, a single-agent motion planning problem from a specification over a set of properties known to be satisfied at the vertices of the known graph environment is studied. In both cases, we exploit the existence of off-the-shelf model checking and runtime verification tools, the efficiency of graph search algorithms, and the efficacy of exploration techniques to solve the motion planning problem constrained by the absence of complete information about the environment.

    The final part of this thesis extends uncertainty beyond the absence of complete knowledge of the environment by considering a robot equipped with a noisy sensing system. In particular, the robot is tasked with satisfying an scLTL specification over a set of regions of interest known to be present in the environment. In this case, although the robot is able to measure the properties characterizing such regions of interest, it cannot precisely determine the identity of these regions. A mixed observability Markov decision process is used to represent the robot's actuation and sensing models. The control synthesis problem from scLTL formulas is then formulated as a maximum probability reachability problem on this model. The integration of dynamic programming, formal methods, and frontier-based exploration tools allows us to derive an algorithm to solve such a reachability problem.
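    To make the last step more concrete, the following minimal Python sketch computes maximum reachability probabilities by dynamic programming (value iteration) on a small, fully observable MDP. The model and all names are illustrative assumptions only; the thesis formulates the problem on a mixed observability Markov decision process derived from the scLTL specification, which this toy example does not reproduce.

# Maximum-probability reachability by value iteration on a toy, fully
# observable MDP (all data invented for illustration).
# T[state][action] = list of (successor, probability); GOAL is the target set.
T = {
    "s0":   {"go":   [("s1", 0.8), ("s2", 0.2)],
             "wait": [("s0", 1.0)]},
    "s1":   {"go":   [("goal", 0.9), ("s2", 0.1)],
             "wait": [("s1", 1.0)]},
    "s2":   {"go":   [("s2", 1.0)], "wait": [("s2", 1.0)]},      # dead end
    "goal": {"go":   [("goal", 1.0)], "wait": [("goal", 1.0)]},  # target, absorbing
}
GOAL = {"goal"}

def max_reach_probabilities(T, GOAL, iters=1000):
    """Value iteration for V(s) = max_a sum_{s'} P(s' | s, a) * V(s'),
    with V fixed to 1 on GOAL.  Returns the value table and a greedy policy."""
    V = {s: (1.0 if s in GOAL else 0.0) for s in T}
    for _ in range(iters):
        V = {s: 1.0 if s in GOAL else
                max(sum(p * V[t] for t, p in succ) for succ in T[s].values())
             for s in T}
    policy = {s: max(T[s], key=lambda a: sum(p * V[t] for t, p in T[s][a]))
              for s in T}
    return V, policy

V, policy = max_reach_probabilities(T, GOAL)
print(V["s0"], policy)  # maximum probability of eventually reaching "goal" from s0

    The iteration converges to the least fixed point of V(s) = max_a sum_{s'} P(s' | s, a) V(s') with V = 1 on the goal set, the standard dynamic-programming characterization of maximum reachability probabilities; on this toy model the optimal value from s0 is 0.8 * 0.9 = 0.72.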