
    Characterizing Safety: Minimal Control Barrier Functions from Scalar Comparison Systems

    Verifying set invariance has classical solutions stemming from the seminal work of Nagumo, and defining sets via a smooth barrier function inequality yields computable flow conditions that guarantee set invariance. While most of these classical results impose flow conditions only on the boundary of the set, this letter fully characterizes set invariance through minimal barrier functions by appealing directly to a comparison result, defining a flow condition over the entire domain of the system. A considerable benefit of this approach is the removal of regularity assumptions on the barrier function. The letter also gives necessary and sufficient conditions for a valid differential inequality condition, i.e., the minimal requirements for this type of approach, and shows when minimal barrier functions are necessary and sufficient for set invariance.
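
    As a sketch of the comparison-based condition (using the common sign convention in which the safe set is the zero-superlevel set of a scalar function b, and writing the condition for differentiable b even though the letter's point is that such regularity can be relaxed):

```latex
% Sketch: safe set S = {x : b(x) >= 0} for dynamics \dot{x} = f(x); the flow
% condition is imposed over the whole domain, not just on the boundary of S.
\dot b(x) \;=\; \nabla b(x) \cdot f(x) \;\ge\; \alpha\bigl(b(x)\bigr)
  \qquad \text{for all } x,
% where the scalar comparison system \dot{w} = \alpha(w) keeps w(t) >= 0
% whenever w(0) >= 0; the comparison lemma then yields
b\bigl(x(t)\bigr) \;\ge\; w(t) \;\ge\; 0
  \qquad \text{whenever } b\bigl(x(0)\bigr) \ge 0,
% i.e. forward invariance of S.
```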

    Control Barrier Function Based Quadratic Programs for Safety Critical Systems

    Safety critical systems involve the tight coupling between potentially conflicting control objectives and safety constraints. As a means of creating a formal framework for controlling systems of this form, and with a view toward automotive applications, this paper develops a methodology that allows safety conditions -- expressed as control barrier functions -- to be unified with performance objectives -- expressed as control Lyapunov functions -- in the context of real-time optimization-based controllers. Safety conditions are specified in terms of forward invariance of a set, and are verified via two novel generalizations of barrier functions; in each case, the existence of a barrier function satisfying Lyapunov-like conditions implies forward invariance of the set, and the relationship between these two classes of barrier functions is characterized. In addition, each of these formulations yields a notion of control barrier function (CBF), providing inequality constraints on the control input that, when satisfied, again imply forward invariance of the set. Through these constructions, CBFs can naturally be unified with control Lyapunov functions (CLFs) in the context of a quadratic program (QP); this allows for the achievement of control objectives (represented by CLFs) subject to conditions on the admissible states of the system (represented by CBFs). The mediation of safety and performance through a QP is demonstrated on adaptive cruise control and lane keeping, two automotive control problems that present both safety and performance considerations coupled with actuator bounds.
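
    To make the QP concrete, the following Python sketch implements the pointwise safety filter it induces in the simplest setting: a single CBF constraint, a linear class-K term alpha*h, and the performance objective reduced to tracking a desired input u_des (the paper's full formulation instead encodes performance through CLF constraints with a slack variable). The dynamics and numbers below are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of a CBF safety filter for a control-affine system
#   xdot = f(x) + g(x) u,  with safe set {x : h(x) >= 0}.
# Illustrative only: one CBF constraint, linear class-K function alpha*h,
# and the CLF objective reduced to staying close to a desired input u_des.
import numpy as np

def cbf_filter(u_des, Lfh, Lgh, h, alpha=1.0):
    """Solve  min_u ||u - u_des||^2  s.t.  Lfh + Lgh @ u >= -alpha * h
    in closed form (a QP with a single affine constraint)."""
    u_des = np.asarray(u_des, dtype=float)
    a = np.atleast_1d(Lgh).astype(float)
    margin = Lfh + a @ u_des + alpha * h
    if margin >= 0.0:              # desired input already satisfies the CBF condition
        return u_des
    # Otherwise project u_des onto the hyperplane Lfh + Lgh @ u = -alpha * h.
    return u_des - margin * a / (a @ a)

# Example: single integrator pdot = u with safe set p <= p_max (h = p_max - p),
# so Lfh = 0 and Lgh = -1. A too-aggressive desired input gets clipped.
p, p_max = 0.9, 1.0
print(cbf_filter(np.array([2.0]), Lfh=0.0, Lgh=np.array([-1.0]), h=p_max - p))
# -> [0.1], i.e. u is limited to alpha * (p_max - p)
```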

    Safety verification of nonlinear hybrid systems based on invariant clusters

    In this paper, we propose an approach to automatically compute invariant clusters for nonlinear semialgebraic hybrid systems. An invariant cluster for an ordinary differential equation (ODE) is a multivariate polynomial invariant g(u, x) = 0, parametric in the parameter vector u, which can yield infinitely many concrete invariants by assigning different values to u, so that every trajectory of the system can be tightly overapproximated by the intersection of a group of concrete invariants. For semialgebraic systems, which involve ODEs with multivariate polynomial right-hand sides, given a template multivariate polynomial g(u, x), an invariant cluster can be obtained by first computing the remainder of the Lie derivative of g(u, x) divided by g(u, x) and then solving the system of polynomial equations obtained from the coefficients of the remainder. Based on invariant clusters and sum-of-squares (SOS) programming, we present a new method for the safety verification of hybrid systems. Experiments on nonlinear benchmark systems from biology and control theory show that our approach is efficient.
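
    The remainder computation at the heart of this construction can be sketched with a computer algebra system. The Python/sympy fragment below runs the procedure on a toy system chosen here for illustration (a harmonic oscillator with a quadratic template); it is not one of the paper's benchmarks, and the SOS step is omitted.

```python
# Sketch of computing an invariant cluster: take the Lie derivative of a
# parametric template g(u, x), divide it by g, and force the remainder's
# x-coefficients to vanish. Toy system and template chosen for illustration.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u0, u1, u2, u3 = us = sp.symbols('u0 u1 u2 u3')

f = [-x2, x1]                              # toy ODE: x1' = -x2, x2' = x1
g = u0 + u1*x1**2 + u2*x1*x2 + u3*x2**2    # quadratic template g(u, x)

# Lie derivative of g along the vector field f.
lie_g = sp.expand(sp.diff(g, x1)*f[0] + sp.diff(g, x2)*f[1])

# Remainder of the Lie derivative under polynomial division by g (in x1, x2).
_, rem = sp.div(lie_g, g, x1, x2)

# Invariant-cluster condition: every coefficient of the remainder, viewed as a
# polynomial in x1 and x2, must vanish; solve the resulting equations in u.
sol = sp.solve(sp.Poly(rem, x1, x2).coeffs(), us, dict=True)
print(sol)  # [{u2: 0, u3: u1}] -> g = u0 + u1*(x1**2 + x2**2), a cluster of circles
```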

    An Inductive Synthesis Framework for Verifiable Reinforcement Learning

    Despite the tremendous advances made in the last decade in developing useful machine-learning applications, their wider adoption has been hindered by the lack of strong assurance guarantees about their behavior. In this paper, we consider how formal verification techniques developed for traditional software systems can be repurposed for the verification of reinforcement-learning-enabled systems, a particularly important class of machine learning systems. Rather than enforcing safety by examining and altering the structure of a complex neural network implementation, our technique uses blackbox methods to synthesize deterministic programs: simpler, more interpretable approximations of the network that can nonetheless guarantee that desired safety properties are preserved, even when the network is deployed in unanticipated or previously unobserved environments. Our methodology frames the problem of neural network verification in terms of a counterexample- and syntax-guided inductive synthesis procedure over these programs. The synthesis procedure searches for both a deterministic program and an inductive invariant over an infinite state transition system that represents a specification of the application's control logic. Additional specifications defining environment-based constraints can also be provided to further refine the search space. Synthesized programs deployed in conjunction with a neural network implementation dynamically enforce safety conditions by monitoring and preventing potentially unsafe actions proposed by neural policies. Experimental results over a wide range of cyber-physical applications demonstrate that software-inspired formal verification techniques can be used to realize trustworthy reinforcement learning systems with low overhead. (Published at PLDI 2019.)
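
    The runtime enforcement step has a simple shape, sketched below in Python. Every name here (neural_policy, safe_program, invariant, step) is a hypothetical placeholder: in the paper, the fallback program and the inductive invariant are exactly what the synthesis procedure produces, and safety is established by verification rather than by the one-step lookahead used in this toy.

```python
# Sketch of runtime shielding: prefer the neural policy, but fall back to the
# synthesized, verified program whenever the proposed action would leave the
# inductive invariant. All names are hypothetical placeholders.

def shielded_action(state, neural_policy, safe_program, invariant, step):
    proposed = neural_policy(state)
    if invariant(step(state, proposed)):   # predicted successor stays invariant
        return proposed                    # neural action is allowed through
    return safe_program(state)             # verified fallback preserves safety

# Toy usage: 1-D system x' = x + a, invariant |x| <= 1, proportional fallback.
act = shielded_action(
    state=0.8,
    neural_policy=lambda x: 0.5,           # would push x to 1.3: unsafe
    safe_program=lambda x: -0.5 * x,       # keeps |x| <= 1
    invariant=lambda x: abs(x) <= 1.0,
    step=lambda x, a: x + a,
)
print(act)  # -0.4, the fallback action
```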