    Formal Controller Synthesis for Continuous-Space MDPs via Model-Free Reinforcement Learning

    A novel reinforcement learning scheme to synthesize policies for continuous-space Markov decision processes (MDPs) is proposed. This scheme enables one to apply model-free, off-the-shelf reinforcement learning algorithms for finite MDPs to compute optimal strategies for the corresponding continuous-space MDPs without explicitly constructing the finite-state abstraction. The proposed approach is based on abstracting the system with a finite MDP (without constructing it explicitly) with unknown transition probabilities, synthesizing strategies over the abstract MDP, and then mapping the results back over the concrete continuous-space MDP with approximate optimality guarantees. The properties of interest for the system belong to a fragment of linear temporal logic, known as syntactically co-safe linear temporal logic (scLTL), and the synthesis requirement is to maximize the probability of satisfaction within a given bounded time horizon. A key contribution of the paper is to leverage the classical convergence results for reinforcement learning on finite MDPs and provide control strategies maximizing the probability of satisfaction over unknown, continuous-space MDPs while providing probabilistic closeness guarantees. Automata-based reward functions are often sparse; we present a novel potential-based reward shaping technique to produce dense rewards and speed up learning. The effectiveness of the proposed approach is demonstrated by applying it to three physical benchmarks: regulation of a room's temperature, control of a road traffic cell, and control of a 7-dimensional nonlinear model of a BMW 320i car.
    Comment: This work is accepted at the 11th ACM/IEEE Conference on Cyber-Physical Systems (ICCPS).
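
    To make the reward-shaping idea concrete, the sketch below applies classical potential-based shaping (Ng et al., 1999) to a tabular Q-learning update over the finite product of the abstract MDP and the scLTL automaton. The potential function here (negative distance of the automaton state to the accepting set) and all names are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def potential(aut_state, dist_to_accept):
        # Hypothetical potential: automaton states closer to acceptance get
        # higher potential; the paper's actual shaping function may differ.
        return -float(dist_to_accept[aut_state])

    def shaped_reward(r, phi_s, phi_s_next, gamma):
        # Potential-based shaping F = gamma*Phi(s') - Phi(s) densifies the
        # sparse automaton reward without changing the set of optimal policies.
        return r + gamma * phi_s_next - phi_s

    def q_update(Q, s, a, r_shaped, s_next, alpha=0.1, gamma=0.99):
        # Standard tabular Q-learning step over the finite product MDP.
        Q[s, a] += alpha * (r_shaped + gamma * Q[s_next].max() - Q[s, a])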

    Robust Control for Dynamical Systems With Non-Gaussian Noise via Formal Abstractions

    Controllers for dynamical systems that operate in safety-critical settings must account for stochastic disturbances. Such disturbances are often modeled as process noise in a dynamical system, and common assumptions are that the underlying distributions are known and/or Gaussian. In practice, however, these assumptions may be unrealistic and can lead to poor approximations of the true noise distribution. We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions. In particular, we address the problem of computing a controller that provides probabilistic guarantees on safely reaching a target, while also avoiding unsafe regions of the state space. First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states. As a key contribution, we adapt tools from the scenario approach to compute probably approximately correct (PAC) bounds on these transition probabilities, based on a finite number of samples of the noise. We capture these bounds in the transition probability intervals of a so-called interval Markov decision process (iMDP). This iMDP is, with a user-specified confidence probability, robust against uncertainty in the transition probabilities, and the tightness of the probability intervals can be controlled through the number of samples. We use state-of-the-art verification techniques to provide guarantees on the iMDP and compute a controller for which these guarantees carry over to the original control system. In addition, we develop a tailored computational scheme that reduces the complexity of the synthesis of these guarantees on the iMDP. Benchmarks on realistic control systems show the practical applicability of our method, even when the iMDP has hundreds of millions of transitions.
    Comment: To appear in the Journal of Artificial Intelligence Research (JAIR). arXiv admin note: text overlap with arXiv:2110.1266
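
    As a rough illustration of how sampled noise yields PAC intervals on transition probabilities, the sketch below uses Hoeffding's inequality as a simple stand-in bound; the paper's scenario-approach intervals are tighter, so the constants here are only indicative.

    import math

    def pac_transition_interval(k, n, delta):
        # k of n sampled noise realizations drive the state into the successor
        # region; delta is the per-interval error probability. Hoeffding gives
        # P(|p_hat - p| >= eps) <= 2*exp(-2*n*eps^2), so choosing
        # eps = sqrt(ln(2/delta) / (2n)) yields a 1-delta confidence interval.
        p_hat = k / n
        eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

    # Example: 20000 noise samples, 12400 of which reach the successor state.
    lo, hi = pac_transition_interval(12400, 20000, 1e-6)  # ~(0.601, 0.639)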

    From Small-Gain Theory to Compositional Construction of Barrier Certificates for Large-Scale Stochastic Systems

    This paper is concerned with a compositional approach for the construction of control barrier certificates for large-scale interconnected stochastic systems while synthesizing hybrid controllers against high-level logic properties. Our proposed methodology involves decomposition of interconnected systems into smaller subsystems and leverages the notion of control sub-barrier certificates of subsystems, enabling one to construct control barrier certificates of interconnected systems by employing max-type small-gain conditions. The main goal is to synthesize hybrid controllers enforcing complex logic properties, including those represented by the accepting language of deterministic finite automata (DFA), while providing probabilistic guarantees on the satisfaction of given specifications over bounded-time horizons. To do so, we propose a systematic approach that first decomposes high-level specifications into simple reachability tasks by utilizing automata corresponding to the complement of the specifications. We then construct control sub-barrier certificates and synthesize local controllers for those simpler tasks, and combine them to obtain a hybrid controller that ensures satisfaction of the complex specification with a lower bound on the probability of satisfaction. To compute control sub-barrier certificates and corresponding local controllers, we provide two systematic approaches, based on a sum-of-squares (SOS) optimization program and on a counter-example guided inductive synthesis (CEGIS) framework. We finally apply our proposed techniques to two physical case studies.
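
    For orientation, a standard single-system form of the conditions a stochastic barrier certificate B must satisfy is sketched below in LaTeX; the constants gamma < lambda and c are design parameters, and the paper's control sub-barrier certificates together with the max-type small-gain conditions adapt such conditions per subsystem.

    % Sketch of standard discrete-time stochastic barrier conditions; the
    % compositional sub-barrier conditions in the paper refine these.
    \begin{align*}
      B(x) &\le \gamma  && \forall x \in X_0 \ \text{(initial set)}, \\
      B(x) &\ge \lambda && \forall x \in X_u \ \text{(unsafe set)}, \\
      \mathbb{E}\bigl[B(f(x,u,w)) \mid x, u\bigr] &\le B(x) + c && \text{for some admissible } u,
    \end{align*}
    % which, for gamma < lambda, yields the bounded-horizon safety bound
    \[
      \mathbb{P}\bigl\{ x(k) \in X_u \text{ for some } k \le T \,\big|\, x(0) \in X_0 \bigr\}
        \;\le\; \frac{\gamma + c\,T}{\lambda}.
    \]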