430,384 research outputs found

    On the sufficient conditions for input-to-state safety

    Get PDF
    In this paper, we present a novel notion of input-to-state safety (ISSf) for general nonlinear systems, which can be useful for certifying a system's safety under the influence of external bounded input (or disturbance) signals. We provide sufficient conditions for ISSf using a barrier function/certificate, which are analogous to the input-to-state stability Lyapunov function
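    As a rough sketch (not the paper's exact statement), an ISSf barrier condition analogous to the ISS Lyapunov inequality could take the following form; the specific class-K functions and the inflated safe set below are assumptions of this illustration:

        % For \dot{x} = f(x,u) with safe set S = {x : h(x) >= 0}, a plausible
        % ISSf barrier condition analogous to the ISS Lyapunov inequality:
        \[
          \frac{\partial h}{\partial x}\, f(x,u) \;\ge\; -\alpha\big(h(x)\big) - \iota\big(\|u\|_\infty\big),
          \qquad \alpha,\ \iota \in \mathcal{K},
        \]
        % under which trajectories starting in S remain in a disturbance-inflated
        % set S_u = \{ x : h(x) + \gamma(\|u\|_\infty) \ge 0 \} for some \gamma \in \mathcal{K}.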

    Robust fault estimation for stochastic Takagi-Sugeno fuzzy systems

    Get PDF
    Nowadays, industrial plants are calling for high-performance fault diagnosis techniques to meet stringent requirements on system availability and safety in the event of component failures. This paper deals with robust fault estimation problems for stochastic nonlinear systems subject to faults and unknown inputs, relying on Takagi-Sugeno fuzzy models. An augmented approach jointly with unknown input observers for stochastic Takagi-Sugeno models is exploited here, which allows one to estimate both the considered faults and the full system state robustly. The considered unknown inputs can be either completely or partially decoupled by the observers. For the un-decoupled part of the unknown inputs, which still influences the error dynamics, stochastic input-to-state stability properties are applied to take nonzero inputs into account, and sufficient conditions are derived to guarantee bounded estimation errors under bounded unknown inputs. Linear matrix inequalities are employed to compute the gain matrices of the observer, leading to stochastic input-to-state-stable error dynamics and optimized estimation performance against un-decoupled unknown inputs. Finally, simulation on a wind turbine benchmark model is used to validate the performance of the suggested fault reconstruction methodologies.
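    The LMI-based gain computation can be illustrated on a drastically simplified case: a single linear vertex model with a classical Luenberger observer, rather than the paper's stochastic Takagi-Sugeno design with fault and unknown-input augmentation. The matrices below are hypothetical, cvxpy with the SCS solver is assumed available, and the LMI shown is the standard observer stability condition; this is a sketch of the mechanism only:

        import numpy as np
        import cvxpy as cp

        # Hypothetical single vertex model (the paper works with stochastic
        # Takagi-Sugeno fuzzy models and fault/unknown-input augmentation).
        A = np.array([[0.0, 1.0],
                      [-2.0, -0.5]])
        C = np.array([[1.0, 0.0]])
        n, p = A.shape[0], C.shape[0]

        # Find P > 0 and Y such that A'P + PA - C'Y' - YC < 0; then
        # L = P^{-1} Y makes the error dynamics e_dot = (A - L C) e stable.
        P = cp.Variable((n, n), symmetric=True)
        Y = cp.Variable((n, p))
        eps = 1e-6
        lmi = A.T @ P + P @ A - C.T @ Y.T - Y @ C
        constraints = [P >> eps * np.eye(n), lmi << -eps * np.eye(n)]
        cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)

        L = np.linalg.solve(P.value, Y.value)  # observer gain
        print("Observer gain L:\n", L)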

    Control for Safety Specifications of Systems With Imperfect Information on a Partial Order

    Get PDF
    In this paper, we consider the control problem for uncertain systems with imperfect information, in which an output of interest must be kept outside an undesired region (the bad set) in the output space. The state, input, output, and disturbance spaces are equipped with partial orders. The system dynamics are either input/output order preserving with output in R^2 or given by the parallel composition of input/output order-preserving dynamics, each with scalar output. We provide necessary and sufficient conditions under which an initial set of possible system states is safe, that is, the corresponding outputs are steerable away from the bad set with open-loop controls. A closed-loop control strategy is explicitly constructed, which guarantees that the current set of possible system states, as obtained from an estimator, generates outputs that never enter the bad set. The complexity of the algorithms that check safety of an initial set of states and implement the control map is quadratic in the dimension of the state space. The algorithms are illustrated on two application examples: a ship maneuver to avoid an obstacle and safe navigation of a helicopter among buildings.

    National Science Foundation (U.S.) (CAREER Award CNS-0642719)

    A Scalable Safety Critical Control Framework for Nonlinear Systems

    Get PDF
    There are two main approaches to safety-critical control. The first relies on the computation of control invariant sets and is presented in the first part of this work. The second draws from the topic of optimal control and relies on the ability to run model predictive controllers online to guarantee the safety of a system. In the second approach, safety is ensured at a planning stage by solving the control problem subject to some explicitly defined constraints on the state and control input. Both approaches have distinct advantages but also major drawbacks that hinder their practical effectiveness, namely scalability for the first and computational complexity for the second. We therefore present an approach that draws on the advantages of both to deliver efficient and scalable methods of ensuring safety for nonlinear dynamical systems. In particular, we show that identifying a backup control law that stabilizes the system is in fact sufficient to exploit some of the set-invariance conditions presented in the first part of this work. Indeed, one only needs to be able to numerically integrate the closed-loop dynamics of the system over a finite horizon under this backup law to compute all the information necessary for evaluating the regulation map and enforcing safety. The effect of relaxing the stabilization requirements of the backup law is also studied, and weaker but more practical safety guarantees are brought forward. We then explore the relationship between the optimality of the backup law and how conservative the resulting safety filter is. Finally, methods of selecting a safe input with varying levels of trade-off between conservatism and computational complexity are proposed and illustrated on multiple robotic systems, namely: a two-wheeled inverted pendulum (Segway), an industrial manipulator, a quadrotor, and a lower body exoskeleton.
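    A minimal sketch of the backup-controller idea on a toy scalar system; the dynamics, barrier h, backup law, horizon, and the simple switching rule below are all illustrative stand-ins for the paper's regulation map and optimization-based input selection:

        import numpy as np
        from scipy.integrate import solve_ivp

        def f(x, u):                 # hypothetical scalar dynamics x_dot = f(x, u)
            return -0.5 * x + u

        def h(x):                    # safe set S = {x : h(x) >= 0}
            return 1.0 - x**2

        def u_backup(x):             # backup law driving x toward the interior of S
            return -2.0 * x

        def safe_under_backup(x0, T=2.0, n=50):
            # Numerically integrate the backup closed loop over [0, T] and
            # check (at sample points) that the trajectory stays in S.
            sol = solve_ivp(lambda t, x: [f(x[0], u_backup(x[0]))], (0.0, T),
                            [x0], t_eval=np.linspace(0.0, T, n))
            return bool(np.all(h(sol.y[0]) >= 0.0))

        def filter_input(x, u_des, dt=0.05):
            # Keep u_des if the state it leads to still admits a safe backup
            # trajectory; otherwise fall back to the backup law.
            x_next = x + dt * f(x, u_des)      # one-step Euler prediction
            return u_des if safe_under_backup(x_next) else u_backup(x)

        x = 0.99                               # close to the boundary of S
        print(filter_input(x, u_des=1.0))      # unsafe prediction -> backup input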

    Perception-Based Sampled-Data Optimization of Dynamical Systems

    Full text link
    Motivated by perception-based control problems in autonomous systems, this paper addresses the problem of developing feedback controllers to regulate the inputs and the states of a dynamical system to optimal solutions of an optimization problem when one has no access to exact measurements of the system states. In particular, we consider the case where the states need to be estimated from high-dimensional sensory data received only at discrete time intervals. We develop a sampled-data feedback controller that is based on adaptations of a projected gradient descent method, and that includes neural networks as integral components to estimate the state of the system from perceptual information. We derive sufficient conditions to guarantee (local) input-to-state stability of the control loop. Moreover, we show that the interconnected system tracks the solution trajectory of the underlying optimization problem up to an error that depends on the approximation errors of the neural network and on the time-variability of the optimization problem; the latter originates from time-varying safety and performance objectives, input constraints, and unknown disturbances. As a representative application, we illustrate our results with numerical simulations for vision-based autonomous driving.

    Comment: This is an extended version of the paper accepted to the IFAC World Congress 2023 for publication, containing proofs.
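    A minimal sketch of such a sampled-data loop, under illustrative assumptions: a scalar linear plant, a quadratic cost evaluated through the plant's steady-state map, a box input constraint, and a noisy identity map standing in for the neural-network perception module:

        import numpy as np

        # Hypothetical discrete-time plant x+ = A x + B u with steady-state
        # map x_ss(u) = B u / (1 - A); the cost and estimator are stand-ins.
        A, B = 0.9, 0.5
        eta = 0.01                # gradient step size
        u_min, u_max = -1.0, 1.0  # input constraint set U

        rng = np.random.default_rng(0)

        def perception_estimate(x):
            # Stand-in for the neural network mapping sensory data to a state
            # estimate (a real estimator would consume images, not x itself).
            return x + 0.01 * rng.standard_normal()

        def grad_phi(u, x_hat):
            # Gradient of the hypothetical cost phi(u) = 0.5*(x_ss(u) - x_ref)^2
            # + 0.5*u^2, evaluated with the estimated state in place of x_ss(u).
            x_ref = 1.0
            dx_du = B / (1.0 - A)
            return (x_hat - x_ref) * dx_du + u

        x, u = 0.0, 0.0
        for k in range(500):                   # sampled-data loop
            x_hat = perception_estimate(x)     # state estimate from "perception"
            u = float(np.clip(u - eta * grad_phi(u, x_hat), u_min, u_max))
            x = A * x + B * u                  # plant evolves between samples

        print(f"final input {u:.3f}, final state {x:.3f}")  # approx. 0.192, 0.962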

    Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty

    Full text link
    When using deep neural networks to operate safety-critical systems, assessing the sensitivity of the network outputs when subject to uncertain inputs is of paramount importance. Such assessment is commonly done using reachability analysis or robustness certification. However, certification techniques typically ignore localization information, while reachable set methods can fail to issue robustness guarantees. Furthermore, many advanced methods are either computationally intractable in practice or restricted to very specific models. In this paper, we develop a data-driven optimization-based method capable of simultaneously certifying the safety of network outputs and localizing them. The proposed method provides a unified assessment framework, as it subsumes state-of-the-art reachability analysis and robustness certification. The method applies to deep neural networks of all sizes and structures, and to random input uncertainty with a general distribution. We develop sufficient conditions for the convexity of the underlying optimization, and for the number of data samples to certify and localize the outputs with overwhelming probability. We experimentally demonstrate the efficacy and tractability of the method on a deep ReLU network.
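    The sampling-based flavor of this assessment can be sketched as follows; the network, input distribution, sample size, and bad set below are hypothetical, and the empirical bounding box is a scenario-style stand-in for the paper's optimization-based certificates:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical 2-16-2 ReLU network with random weights.
        W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)
        W2, b2 = rng.standard_normal((2, 16)), rng.standard_normal(2)

        def relu_net(x):
            return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

        # Random input uncertainty: a Gaussian around a nominal input.
        x_nominal = np.array([0.5, -0.2])
        samples = x_nominal + 0.1 * rng.standard_normal((10_000, 2))

        # Empirical axis-aligned bounding box of the outputs: localizes them
        # with high probability for a sufficiently large sample size.
        outputs = np.array([relu_net(x) for x in samples])
        lo, hi = outputs.min(axis=0), outputs.max(axis=0)
        print("empirical output box:", lo, hi)

        # Empirical safety check against a hypothetical bad half-space y[0] >= 3:
        print("outputs localized away from bad set:", hi[0] < 3.0)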

    Applied Safety Critical Control

    Get PDF
    There is currently a clear gap between control-theoretical results and the reality of robotic implementation, in the sense that it is very difficult to transfer analytical guarantees to practical ones. This is especially problematic when trying to design safety-critical systems where failure is not an option. While there is a vast body of work on safety and reliability in control theory, very little of it is actually used in practice, where safety margins are typically empirical and/or heuristic. Nevertheless, it is still widely accepted that a solution to these problems can only emerge from rigorous analysis, mathematics, and methods. In this work, we therefore seek to help bridge this gap by revisiting and expanding existing theoretical results in light of the complexity of hardware implementation. To that end, we begin by making a clear theoretical distinction between systems and models, and outline how the two need to be related for guarantees to transfer from the latter to the former. We then formalize various imperfections of reality that need to be accounted for at a model level to provide theoretical results with better applicability. We then discuss the reality of digital controller implementation and present the mathematical constraints that theoretical control laws must satisfy for them to be implementable on real hardware. In light of these discussions, we derive new realizable set-invariance conditions that, if properly enforced, can guarantee safety with arbitrarily high levels of confidence. We then discuss how these conditions can be rigorously enforced in a systematic and minimally invasive way through convex optimization-based safety filters. Multiple safety filter formulations are proposed with varying levels of complexity and applicability. To enable the use of these safety filters, a new algorithm is presented to compute appropriate control invariant sets and guarantee feasibility of the optimization problem defining these filters. The effectiveness of this approach is demonstrated in simulation on a nonlinear inverted pendulum and experimentally on a simple vehicle. The framework's ability to handle uncertainty in the system's dynamics is illustrated by varying the mass of the vehicle and showcasing when safety is preserved. Then, the approach's ability to provide guarantees that account for controller implementation constraints is illustrated by varying the frequency of the control loop and again showcasing when safety is preserved.

    In the second part of this work, we revisit the safety filtering approach in a way that addresses the scalability issues of the first part. There are two main approaches to safety-critical control. The first relies on the computation of control invariant sets and was presented in the first part of this work. The second draws from the topic of optimal control and relies on the ability to run model predictive controllers online to guarantee the safety of a system. In that online approach, safety is ensured at a planning stage by solving the control problem subject to some explicitly defined constraints on the state and control input. Both approaches have distinct advantages but also major drawbacks that hinder their practical effectiveness, namely scalability for the first and computational complexity for the second. We therefore present an approach that draws on the advantages of both to deliver efficient and scalable methods of ensuring safety for nonlinear dynamical systems. In particular, we show that identifying a backup control law that stabilizes the system is in fact sufficient to exploit some of the set-invariance conditions presented in the first part of this work. Indeed, one only needs to be able to numerically integrate the closed-loop dynamics of the system over a finite horizon under this backup law to compute all the information necessary for evaluating the regulation map and enforcing safety. The effect of relaxing the stabilization requirements of the backup law is also studied, and weaker but more practical safety guarantees are brought forward. We then explore the relationship between the optimality of the backup law and how conservative the resulting safety filter is. Finally, methods of selecting a safe input with varying levels of trade-off between conservatism and computational complexity are proposed and illustrated on multiple robotic systems, namely: a two-wheeled inverted pendulum (Segway), an industrial manipulator, a quadrotor, and a lower body exoskeleton.
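    A minimal instance of such a convex optimization-based safety filter, for a hypothetical scalar system where the quadratic program has a closed-form solution (the barrier, dynamics, and class-K gain are assumptions of this sketch):

        import numpy as np

        # Toy safety filter for scalar dynamics x_dot = u with hypothetical
        # safe set S = {x : h(x) >= 0}, h(x) = 1 - x^2.  The QP
        #     min_u (u - u_des)^2   s.t.   (dh/dx) u >= -alpha * h(x)
        # has a closed-form solution in this scalar-input case.

        alpha = 1.0

        def h(x):
            return 1.0 - x**2

        def safety_filter(x, u_des):
            a = -2.0 * x            # dh/dx for h(x) = 1 - x^2
            b = -alpha * h(x)       # right-hand side of the barrier condition
            if abs(a) < 1e-9 or a * u_des >= b:
                return u_des        # desired input already satisfies the condition
            return b / a            # minimally invasive correction onto the boundary

        # A desired input pushing toward the unsafe region gets filtered so the
        # state approaches, but does not cross, the boundary of S.
        x, dt = 0.0, 0.01
        for _ in range(1000):
            x += dt * safety_filter(x, u_des=2.0)
        print(f"final state {x:.3f}, h(x) = {h(x):.3f}")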