
    Applied Safety Critical Control

    There is currently a clear gap between control-theoretical results and the reality of robotic implementation, in the sense that it is very difficult to transfer analytical guarantees to practical ones. This is especially problematic when designing safety-critical systems where failure is not an option. While there is a vast body of work on safety and reliability in control theory, very little of it is actually used in practice, where safety margins are typically empirical and/or heuristic. Nevertheless, it is still widely accepted that a solution to these problems can only emerge from rigorous analysis, mathematics, and methods. In this work, we therefore seek to help bridge this gap by revisiting and expanding existing theoretical results in light of the complexity of hardware implementation. To that end, we begin by making a clear theoretical distinction between systems and models, and outline how the two need to be related for guarantees to transfer from the latter to the former. We then formalize various imperfections of reality that need to be accounted for at the model level to give theoretical results better applicability. We also discuss the reality of digital controller implementation and present the mathematical constraints that theoretical control laws must satisfy to be implementable on real hardware. In light of these discussions, we derive new realizable set-invariance conditions that, if properly enforced, can guarantee safety with arbitrarily high levels of confidence. We then discuss how these conditions can be rigorously enforced in a systematic and minimally invasive way through convex optimization-based safety filters. Multiple safety filter formulations are proposed, with varying levels of complexity and applicability. To enable the use of these safety filters, a new algorithm is presented to compute appropriate control invariant sets and guarantee feasibility of the optimization problem defining these filters. The effectiveness of this approach is demonstrated in simulation on a nonlinear inverted pendulum and experimentally on a simple vehicle. The framework's ability to handle uncertainty in the system's dynamics is illustrated by varying the mass of the vehicle and showcasing when safety is preserved. Its ability to provide guarantees that account for controller implementation constraints is then illustrated by varying the frequency of the control loop and again showcasing when safety is preserved.

    In the second part of this work, we revisit the safety filtering approach in a way that addresses the scalability issues of the first part. There are two main approaches to safety-critical control. The first relies on the computation of control invariant sets and was presented in the first part of this work. The second draws from optimal control and relies on the ability to run Model Predictive Controllers online to guarantee the safety of a system. In that online approach, safety is ensured at the planning stage by solving the control problem subject to explicitly defined constraints on the state and control input. Both approaches have distinct advantages but also major drawbacks that hinder their practical effectiveness, namely scalability for the first and computational complexity for the second. We therefore present an approach that draws from the advantages of both to deliver efficient and scalable methods of ensuring safety for nonlinear dynamical systems. In particular, we show that identifying a backup control law that stabilizes the system is in fact sufficient to exploit some of the set-invariance conditions presented in the first part of this work. Indeed, one only needs to be able to numerically integrate the closed-loop dynamics of the system over a finite horizon under this backup law to compute all the information necessary for evaluating the regulation map and enforcing safety. The effect of relaxing the stabilization requirements of the backup law is also studied, and weaker but more practical safety guarantees are brought forward. We then explore the relationship between the optimality of the backup law and the conservativeness of the resulting safety filter. Finally, methods of selecting a safe input with varying trade-offs between conservativeness and computational complexity are proposed and illustrated on multiple robotic systems, namely a two-wheeled inverted pendulum (Segway), an industrial manipulator, a quadrotor, and a lower-body exoskeleton.
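
    As a rough illustration of the minimally invasive, convex optimization-based safety filters described above, the Python sketch below filters a nominal input for a simple inverted pendulum model. The dynamics, barrier function, and gains are placeholder assumptions for illustration only, not the models or conditions of the work itself; with a single input, the quadratic program reduces to a closed-form projection onto a half-space.

        import numpy as np

        # Placeholder pendulum model: x = (theta, dtheta), xdot = f(x) + g(x) u.
        g_acc, length, mass = 9.81, 1.0, 1.0
        theta_max, dtheta_max = 0.5, 2.0      # assumed state limits
        alpha = 5.0                           # assumed class-K gain

        def f(x):
            theta, dtheta = x
            return np.array([dtheta, (g_acc / length) * np.sin(theta)])

        def g_vec(x):
            return np.array([0.0, 1.0 / (mass * length**2)])

        def h(x):
            # Barrier: h(x) >= 0 encodes an ellipsoidal safe set in (theta, dtheta).
            theta, dtheta = x
            return 1.0 - (theta / theta_max)**2 - (dtheta / dtheta_max)**2

        def grad_h(x):
            theta, dtheta = x
            return np.array([-2.0 * theta / theta_max**2,
                             -2.0 * dtheta / dtheta_max**2])

        def safety_filter(x, u_des):
            """Solve min (u - u_des)^2  s.t.  Lf_h + Lg_h * u >= -alpha * h(x)."""
            Lf_h = grad_h(x) @ f(x)
            Lg_h = grad_h(x) @ g_vec(x)
            if abs(Lg_h) < 1e-9:              # constraint independent of u here
                return u_des
            u_bound = (-alpha * h(x) - Lf_h) / Lg_h
            return max(u_des, u_bound) if Lg_h > 0 else min(u_des, u_bound)

        x = np.array([0.3, 0.5])
        u_nominal = -10.0 * x[0] - 2.0 * x[1]   # placeholder feedback law
        print(safety_filter(x, u_nominal))

    In the backup-controller variant of the second part, the barrier quantities would instead be evaluated along a finite-horizon numerical integration of the closed-loop dynamics under the backup law.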

    Robust Adaptive Control Barrier Functions: An Adaptive & Data-Driven Approach to Safety (Extended Version)

    A new framework is developed for control of constrained nonlinear systems with structured parametric uncertainties. Forward invariance of a safe set is achieved through online parameter adaptation and data-driven model estimation. The new adaptive data-driven safety paradigm is merged with a recent adaptive control algorithm for systems nominally contracting in closed loop. This unification is more general than other safety controllers, as closed-loop contraction does not require that the system be invertible or in a particular form. Additionally, the approach is less expensive than nonlinear model predictive control, as it does not require a full desired trajectory, but rather only a desired terminal state. The approach is illustrated on the pitch dynamics of an aircraft with uncertain nonlinear aerodynamics. Comment: Added aCBF non-Lipschitz example and discussion on approach implementation.
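
    A minimal sketch of the robust-adaptive idea, assuming a model of the form xdot = f(x) + F(x) theta + g(x) u with an unknown parameter vector theta: keep an estimate together with an element-wise bound on the estimation error, enforce the barrier condition against the worst parameter consistent with that bound, and shrink the bound as data accumulates. The gradient-style update and the fixed contraction of the error bound below are illustrative placeholders, not the paper's adaptation law or data-driven estimator.

        import numpy as np

        def robust_acbf_margin(x, u, theta_hat, theta_err, f, F, g, h, grad_h, alpha):
            """Worst-case value of dh/dt + alpha*h over the parameter box
               {theta : |theta - theta_hat| <= theta_err} (element-wise).
               Assumes matrix-valued g(x) and vector input u."""
            dh = grad_h(x)
            nominal = dh @ (f(x) + F(x) @ theta_hat + g(x) @ u)
            worst = nominal - np.abs(dh @ F(x)) @ theta_err
            return worst + alpha * h(x)       # require this to be >= 0

        def update_estimate(theta_hat, theta_err, x, xdot_meas, u, f, F, g, gain, dt):
            """Gradient-style update from measured state derivatives; the error
               bound contracts as a stand-in for a data-driven estimate."""
            pred_err = xdot_meas - (f(x) + F(x) @ theta_hat + g(x) @ u)
            theta_hat = theta_hat + dt * gain * (F(x).T @ pred_err)
            theta_err = 0.99 * theta_err      # placeholder shrinkage
            return theta_hat, theta_err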

    Safe Multi-Agent Interaction through Robust Control Barrier Functions with Learned Uncertainties

    Robots operating in real-world settings must navigate and maintain safety while interacting with many heterogeneous agents and obstacles. Multi-agent Control Barrier Functions (CBFs) have emerged as a computationally efficient tool to guarantee safety in multi-agent environments, but they assume perfect knowledge of both the robot's dynamics and the other agents' dynamics. While the robot's own dynamics might be reasonably well known, the heterogeneity of agents in real-world environments means there will always be considerable uncertainty in our prediction of other agents' dynamics. This work aims to learn high-confidence bounds for these dynamic uncertainties using Matrix-Variate Gaussian Process models, and incorporates them into a robust multi-agent CBF framework. We transform the resulting min-max robust CBF into a quadratic program, which can be efficiently solved in real time. We verify via simulation that the nominal multi-agent CBF is often violated during agent interactions, whereas our robust formulation maintains safety with a much higher probability and adapts to the learned uncertainties.
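
    A hedged sketch of one way such a robust constraint can be assembled, assuming single-integrator robot dynamics, a pairwise distance barrier, and a learned per-axis confidence interval (mean plus or minus k standard deviations) on a neighbouring agent's velocity. The symbols and the box-shaped uncertainty set are illustrative simplifications, not the paper's Matrix-Variate Gaussian Process formulation.

        import numpy as np

        def robust_pairwise_cbf(p_i, p_j, vj_mean, vj_std, d_safe, alpha, k_conf=2.0):
            """Return (a, b) such that the robot's velocity input u must satisfy
               a @ u >= b for the barrier h = ||p_i - p_j||^2 - d_safe^2 to decay
               no faster than -alpha*h under the worst admissible neighbour
               velocity in the learned confidence box."""
            diff = p_i - p_j
            h = diff @ diff - d_safe**2
            a = 2.0 * diff                                   # multiplies u = p_i_dot
            worst_neighbour = 2.0 * diff @ vj_mean + 2.0 * np.abs(diff) @ (k_conf * vj_std)
            b = -alpha * h + worst_neighbour
            return a, b

        # Stacking (a, b) over all neighbours gives the affine constraints of a
        # quadratic program  min ||u - u_des||^2  s.t.  a @ u >= b,  solved each step.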

    Control Barrier Functions for Sampled-Data Systems with Input Delays

    This paper considers the general problem of transitioning theoretically safe controllers to hardware. Concretely, we explore the application of control barrier functions (CBFs) to sampled-data systems: systems that evolve continuously but whose control actions are computed in discrete time steps. While this model formulation is less commonly used than its continuous counterpart, it more accurately reflects how most control systems are implemented in practice, making the safety guarantees more impactful. In this context, we prove robust set invariance with respect to zero-order-hold controllers as well as state uncertainty, without the need to explicitly compute any control invariant sets. It is then shown that this formulation can be exploited to address input delays in the system, with the result being CBF constraints that are affine in the input. The results are demonstrated in a real-time, high-fidelity simulation of an unstable Segway robotic system.
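
    The sketch below illustrates one common way such a sampled-data condition can be written, assuming a Lipschitz bound on the barrier derivative and a bound on the state velocity: the continuous-time constraint is tightened by a margin covering the worst drift over one hold interval, and an input delay is handled by evaluating the constraint at a state prediction. The bounds and the forward-Euler prediction are illustrative assumptions, not the paper's derived conditions.

        def zoh_margin(L_dh, v_max, dt):
            """Bound on how much dh/dt can change between samples, assuming
               dh/dt is L_dh-Lipschitz in x and ||xdot|| <= v_max."""
            return L_dh * v_max * dt

        def sampled_data_cbf_rhs(h_x, Lf_h, alpha, margin):
            """Right-hand side of the affine-in-u constraint Lg_h @ u >= rhs,
               tightened so that it holds over the whole hold interval."""
            return -alpha * h_x - Lf_h + margin

        def predict_state(x, committed_inputs, dynamics, dt, tau_steps):
            """Forward-Euler prediction over an input delay of tau_steps samples,
               using the inputs already committed to the actuators."""
            for u in committed_inputs[-tau_steps:]:
                x = x + dt * dynamics(x, u)
            return x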

    Episodic Learning with Control Lyapunov Functions for Uncertain Robotic Systems

    Many modern nonlinear control methods aim to endow systems with guaranteed properties, such as stability or safety, and have been successfully applied to the domain of robotics. However, model uncertainty remains a persistent challenge, weakening theoretical guarantees and causing implementation failures on physical systems. This paper develops a machine learning framework centered around Control Lyapunov Functions (CLFs) to adapt to parametric uncertainty and unmodeled dynamics in general robotic systems. Our proposed method proceeds by iteratively updating estimates of Lyapunov function derivatives and improving controllers, ultimately yielding a stabilizing quadratic program model-based controller. We validate our approach on a planar Segway simulation, demonstrating substantial performance improvements by iteratively refining a base model-free controller.
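
    A minimal sketch of the episodic loop described above, with a linear least-squares regressor standing in for the paper's learning model and a hypothetical run_system generator standing in for data collection on the robot or simulator: each episode records the mismatch between the model-predicted and measured CLF derivative, fits a correction, and folds the correction into the decrease constraint of the quadratic-program controller before the next episode.

        import numpy as np

        def collect_episode(run_system, controller, n_steps):
            """Roll out the current controller; record features and the observed
               error in the Control Lyapunov Function derivative."""
            feats, errors = [], []
            for x, u, vdot_pred, vdot_meas in run_system(controller, n_steps):
                feats.append(np.concatenate([x, u]))
                errors.append(vdot_meas - vdot_pred)
            return np.array(feats), np.array(errors)

        def fit_vdot_correction(feats, errors):
            """Least-squares stand-in for the learned Vdot residual model."""
            w, *_ = np.linalg.lstsq(feats, errors, rcond=None)
            return lambda x, u: np.concatenate([x, u]) @ w

        # Episodic loop (schematic): alternate data collection, fitting, and
        # controller improvement, each episode starting from the refined controller.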

    A Control Barrier Perspective on Episodic Learning via Projection-to-State Safety

    In this paper we seek to quantify the ability of learning to improve safety guarantees endowed by Control Barrier Functions (CBFs). In particular, we investigate how model uncertainty in the time derivative of a CBF can be reduced via learning, and how this leads to stronger statements on the safe behavior of a system. To this end, we build upon the idea of Input-to-State Safety (ISSf) to define Projection-to-State Safety (PSSf), which characterizes degradation in safety in terms of a projected disturbance. This enables the direct quantification of both how learning can improve safety guarantees and how bounds on learning error translate to bounds on degradation in safety. We demonstrate that a practical episodic learning approach can use PSSf to reduce uncertainty and improve safety guarantees in simulation and experimentally. Comment: 6 pages, 2 figures, submitted to L-CSS + CDC 202
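
    The flavour of the resulting guarantee can be sketched as follows (a hedged paraphrase in LaTeX, not the paper's exact statement): if the only model error entering the barrier dynamics is the projected disturbance \delta, and the controller enforces the nominal barrier condition, then a comparison-lemma argument bounds how far the system can leave the safe set in terms of the disturbance bound, so that tighter learned bounds on \delta directly tighten the safety guarantee.

        % Assumed barrier dynamics and nominal condition (alpha > 0 constant):
        \dot h = L_f h(x) + L_g h(x)\,u + \delta(t), \qquad
        L_f h(x) + L_g h(x)\,u \ge -\alpha\, h(x), \qquad
        \|\delta\|_\infty \le \bar\delta
        \;\Longrightarrow\;
        h(x(t)) \ge -\frac{\bar\delta}{\alpha} \quad \text{whenever } h(x(0)) \ge 0.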
