
    Risk of Stochastic Systems for Temporal Logic Specifications

    The wide availability of data, coupled with computational advances in artificial intelligence and machine learning, promises to enable many future technologies such as autonomous driving. While there has been a variety of successful demonstrations of these technologies, critical system failures have repeatedly been reported. Even if rare, such system failures pose a serious barrier to adoption without a rigorous risk assessment. This paper presents a framework for the systematic and rigorous risk verification of systems. We consider a wide range of system specifications formulated in signal temporal logic (STL) and model the system as a stochastic process, permitting both discrete-time and continuous-time stochastic processes. We then define the STL robustness risk as the risk of lacking robustness against failure. This definition is motivated by the observation that system failures are often caused by a lack of robustness to modeling errors, system disturbances, and distribution shifts in the underlying data generating process. Within the definition, we permit general classes of risk measures and focus on tail risk measures such as the value-at-risk and the conditional value-at-risk. While the STL robustness risk is in general hard to compute, we propose the approximate STL robustness risk as a more tractable notion that upper bounds the STL robustness risk. We show how the approximate STL robustness risk can be accurately estimated from system trajectory data. For discrete-time stochastic processes, we show under which conditions the approximate STL robustness risk can even be computed exactly. We illustrate our verification algorithm in the autonomous driving simulator CARLA and show how the least risky controller can be selected among four neural-network lane-keeping controllers for five meaningful system specifications.
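    As a minimal sketch of how such a tail-risk quantity can be estimated from trajectory data, the snippet below computes the standard empirical value-at-risk and conditional value-at-risk of the negated STL robustness over a batch of sampled trajectories. The helper name, the synthetic robustness samples, and the sorting-based estimator are illustrative assumptions, not the paper's approximate STL robustness risk construction.

```python
import numpy as np

def empirical_var_cvar(losses, alpha):
    """Empirical VaR and CVaR of a loss sample.

    `alpha` is the fraction of worst cases averaged over (e.g. 0.05 means
    the worst 5% of outcomes). `losses` are i.i.d. samples of the loss.
    """
    losses = np.sort(np.asarray(losses))
    n = len(losses)
    k = int(np.ceil((1.0 - alpha) * n))   # index of the empirical (1-alpha)-quantile
    var = losses[min(k, n - 1)]           # empirical value-at-risk at level alpha
    tail = losses[losses >= var]          # the alpha-fraction of worst outcomes
    return var, tail.mean()               # CVaR = mean of that worst-case tail

# Illustrative use: robustness_samples[i] is the STL robustness of the i-th
# simulated trajectory; the risk is taken of the negated robustness (a loss).
robustness_samples = np.random.normal(loc=0.4, scale=0.2, size=10_000)
var, cvar = empirical_var_cvar(-robustness_samples, alpha=0.05)
print(f"VaR_0.05 of -robustness: {var:.3f}, CVaR_0.05: {cvar:.3f}")
```

    A negative CVaR here would mean that even the worst 5% of trajectories retain positive robustness margin; a positive CVaR flags a non-negligible tail of specification violations.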

    Risk-Sensitive Path Planning via CVaR Barrier Functions: Application to Bipedal Locomotion

    Enforcing safety of robotic systems in the presence of stochastic uncertainty is a challenging problem. Traditionally, researchers have proposed safety in the statistical mean as a safety measure in this case. However, ensuring safety in the statistical mean is only reasonable if a robot's safe behavior over a large number of runs is of interest, which precludes the use of mean safety in practical scenarios where safety must be ensured on every run. In this paper, we propose a risk-sensitive notion of safety called conditional value-at-risk (CVaR) safety, which is concerned with safe performance in worst-case realizations. We introduce CVaR barrier functions as a tool to enforce CVaR safety and propose conditions for their Boolean compositions. Given a legacy controller, we show that we can design a minimally interfering CVaR-safe controller by solving difference convex programs. We elucidate the proposed method by applying it to a bipedal locomotion case study.
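    To make the contrast between mean safety and CVaR safety concrete, here is a small numerical illustration on synthetic barrier values (standalone numerics under assumed distributions, not the paper's CVaR barrier-function machinery): a barrier value h(x) that is nonnegative on average can still have a negative tail average, i.e. be unsafe in the CVaR sense.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sampled barrier values h(x) under stochastic disturbances: usually safe (h >= 0),
# but with a rare, severe unsafe tail (h < 0).
h = np.concatenate([rng.normal(0.5, 0.1, 9_900), rng.normal(-2.0, 0.3, 100)])

alpha = 0.05                                  # fraction of worst cases to average over
worst = np.sort(h)[: int(alpha * len(h))]     # the alpha-fraction of smallest h(x)

print("mean safety:  E[h]      =", round(h.mean(), 3))      # positive: "safe on average"
print("CVaR safety:  CVaR_a[h] =", round(worst.mean(), 3))  # negative: unsafe in the tail
```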

    On Optimizing the Conditional Value-at-Risk of a Maximum Cost for Risk-Averse Safety Analysis

    The popularity of Conditional Value-at-Risk (CVaR), a risk functional from finance, has been growing in the control systems community due to its intuitive interpretation and axiomatic foundation. We consider a non-standard optimal control problem in which the goal is to minimize the CVaR of a maximum random cost subject to a Borel-space Markov decision process. The objective takes the form $\text{CVaR}_{\alpha}(\max_{t=0,1,\dots,N} C_t)$, where $\alpha$ is a risk-aversion parameter representing a fraction of worst cases, $C_t$ is a stage or terminal cost, and $N \in \mathbb{N}$ is the length of a finite discrete-time horizon. The objective represents the maximum departure from a desired operating region averaged over a given fraction $\alpha$ of worst cases. This problem provides a safety criterion for a stochastic system that is informed by both the probability and severity of the potential consequences of the system's trajectory. In contrast, existing safety analysis frameworks apply stage-wise risk constraints (i.e., $\rho(C_t)$ must be small for all $t$, where $\rho$ is a risk functional) or assess the probability of constraint violation without quantifying its possible severity. To the best of our knowledge, the problem of interest has not been solved. To solve the problem, we propose and study a family of stochastic dynamic programs on an augmented state space. We prove that the optimal CVaR of a maximum cost enjoys an equivalent representation in terms of the solutions to this family of dynamic programs under appropriate assumptions. We show the existence of an optimal policy that depends on the dynamics of an augmented state under a measurable selection condition. Moreover, we demonstrate how our safety analysis framework is useful for assessing the severity of combined sewer overflows under precipitation uncertainty. Comment: A shorter version is under review for IEEE Transactions on Automatic Control, submitted December 202
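    For readers less familiar with CVaR, the standard Rockafellar-Uryasev variational form, written here for the maximum cost and with $\alpha$ as the worst-case fraction as in the abstract, reads as follows. This is the textbook identity, not the paper's augmented-state dynamic programs.

```latex
% CVaR of the maximum stage/terminal cost Z := max_{t=0,...,N} C_t,
% where alpha in (0,1] is the fraction of worst cases being averaged over.
\[
  \mathrm{CVaR}_{\alpha}\!\Big(\max_{t=0,\dots,N} C_t\Big)
  \;=\;
  \inf_{s \in \mathbb{R}}
  \Big\{\, s \;+\; \tfrac{1}{\alpha}\,
         \mathbb{E}\big[\big(\max_{t=0,\dots,N} C_t - s\big)^{+}\big] \Big\},
  \qquad (z)^{+} := \max\{z, 0\}.
\]
```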

    Distributional Probabilistic Model Checking

    Probabilistic model checking can provide formal guarantees on the behavior of stochastic models relating to a wide range of quantitative properties, such as runtime, energy consumption or cost. But decision making is typically with respect to the expected value of these quantities, which can mask important aspects of the full probability distribution such as the possibility of high-risk, low-probability events or multimodalities. We propose a distributional extension of probabilistic model checking, applicable to discrete-time Markov chains (DTMCs) and Markov decision processes (MDPs). We formulate distributional queries, which can reason about a variety of distributional measures, such as variance, value-at-risk or conditional value-at-risk, for the accumulation of reward until a co-safe linear temporal logic formula is satisfied. For DTMCs, we propose a method to compute the full distribution to an arbitrary level of precision, based on a graph analysis and forward analysis of the model. For MDPs, we approximate the optimal policy with respect to expected value or conditional value-at-risk using distributional value iteration. We implement our techniques and investigate their performance and scalability across a range of benchmark models. Experimental results demonstrate that our techniques can be successfully applied to check various distributional properties of large probabilistic models. Comment: 20 pages, 2 pages appendix, 5 figures. Submitted for review. For associated Github repository, see https://github.com/davexparker/prism/tree/ing
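    A minimal sketch of this kind of forward analysis for a DTMC, on a toy three-state chain with integer rewards accumulated until an absorbing target is reached; the model, state names, truncation horizon, and risk levels below are illustrative assumptions, unrelated to the PRISM implementation.

```python
import numpy as np

# Toy DTMC: states 0, 1, 2, where state 2 is the absorbing target
# (think of "reach state 2" as satisfying a co-safe formula).
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
reward = np.array([1, 2, 0])        # integer reward gained on leaving each state
init, target, horizon = 0, 2, 200   # truncation horizon controls the precision

max_r = horizon * reward.max() + 1
dist = np.zeros((3, max_r))         # dist[s, r] = P(current state s, accumulated reward r)
dist[init, 0] = 1.0
absorbed = np.zeros(max_r)          # reward distribution upon reaching the target

for _ in range(horizon):
    new = np.zeros_like(dist)
    for s in range(3):
        if s == target:
            continue
        shifted = np.roll(dist[s], reward[s])   # add reward[s] when leaving state s
        shifted[:reward[s]] = 0.0               # discard wrap-around mass
        for s2 in range(3):
            new[s2] += P[s, s2] * shifted
    absorbed += new[target]
    new[target] = 0.0               # stop propagating mass that already reached the target
    dist = new

rewards = np.arange(max_r)
p = absorbed / absorbed.sum()       # distribution of accumulated reward at absorption
cdf = p.cumsum()
q_idx = np.searchsorted(cdf, 0.9)   # 0.9-quantile index: the worst 10% lies above it
tail = p[q_idx:]
print(f"P(reach target within horizon) = {absorbed.sum():.4f}")
print(f"mean reward = {(rewards * p).sum():.3f}, "
      f"VaR(worst 10%) = {rewards[q_idx]}, "
      f"CVaR(worst 10%) = {(rewards[q_idx:] * tail).sum() / tail.sum():.3f}")
```

    Once the full distribution is available, any of the distributional measures mentioned above (variance, VaR, CVaR) can be read off it, which is the point of computing the distribution rather than only its expectation.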

    Risk-Averse Planning Under Uncertainty

    We consider the problem of designing policies for partially observable Markov decision processes (POMDPs) with dynamic coherent risk objectives. Synthesizing risk-averse optimal policies for POMDPs requires infinite memory and is thus undecidable. To overcome this difficulty, we propose a method based on bounded policy iteration for designing stochastic but finite-state (finite-memory) controllers, which takes advantage of standard convex optimization methods. Given a memory budget and an optimality criterion, the proposed method modifies the stochastic finite-state controller, leading to sub-optimal solutions with lower coherent risk.

    Generative Modeling of Residuals for Real-Time Risk-Sensitive Safety with Discrete-Time Control Barrier Functions

    A key source of brittleness for robotic systems is the presence of model uncertainty and external disturbances. Most existing approaches to robust control either seek to bound the worst-case disturbance (which results in conservative behavior) or to learn a deterministic dynamics model (which is unable to capture uncertain dynamics or disturbances). This work proposes a different approach: training a state-conditioned generative model to represent the distribution of error residuals between the nominal dynamics and the actual system. In particular, we introduce the Online Risk-Informed Optimization controller (ORIO), which uses Discrete-Time Control Barrier Functions, combined with a learned generative disturbance model, to ensure the safety of the system up to some level of risk. We demonstrate our approach in both simulation and hardware, and show that our method can learn a disturbance model accurate enough to enable risk-sensitive control of a quadrotor flying aggressively with an unmodelled slung load. We use a conditional variational autoencoder (CVAE) to learn a state-conditioned dynamics residual distribution, and find that the resulting probabilistic safety controller, which can run at 100 Hz on an embedded computer, exhibits less conservative behavior while retaining theoretical safety properties. Comment: 9 pages, 6 figures, submitted to the 2024 IEEE International Conference on Robotics and Automation (ICRA 2024
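    A rough sketch of the per-step risk check such a controller might perform, with a placeholder residual sampler standing in for the learned CVAE and a simple 1-D barrier. The dynamics, function names, and the specific CVaR-of-barrier decrease condition below are illustrative assumptions, not ORIO's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def h(x):
    """Barrier: safe set {x : h(x) >= 0}, here 'stay below position 1.0'."""
    return 1.0 - x

def nominal_step(x, u, dt=0.01):
    """Nominal single-integrator dynamics."""
    return x + dt * u

def sample_residuals(x, n=256):
    """Placeholder for the learned state-conditioned residual model (e.g. a CVAE):
    draw n residual samples for the current state."""
    return rng.normal(loc=0.0, scale=0.02 * (1.0 + abs(x)), size=n)

def cvar_of_barrier(x, u, alpha=0.1):
    """Empirical CVaR (over the alpha worst residual samples) of h at the next state."""
    x_next = nominal_step(x, u) + sample_residuals(x)
    h_next = np.sort(h(x_next))
    k = max(1, int(alpha * len(h_next)))
    return h_next[:k].mean()          # average of the alpha-fraction of worst barrier values

def risk_safe(x, u, gamma=0.9, alpha=0.1):
    """Illustrative risk-sensitive DCBF-style condition:
    the CVaR of h(x_{t+1}) should stay above (1 - gamma) * h(x_t)."""
    return cvar_of_barrier(x, u, alpha) >= (1.0 - gamma) * h(x)

x, u_desired = 0.8, 25.0
print("desired input risk-safe?", risk_safe(x, u_desired))  # aggressive input: rejected
print("zero input risk-safe?   ", risk_safe(x, 0.0))        # conservative input: accepted
```

    In a minimally interfering setup, a check of this kind would sit inside an optimization that keeps the applied input as close as possible to the desired one subject to the risk constraint.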