
    Neural Predictive Monitoring

    Neural State Classification (NSC) is a recently proposed method for runtime predictive monitoring of Hybrid Automata (HA) using deep neural networks (DNNs). NSC trains a DNN as an approximate reachability predictor that labels a given HA state x as positive if an unsafe state is reachable from x within a given time bound, and labels x as negative otherwise. NSC predictors have very high accuracy, yet are prone to prediction errors that can negatively impact reliability. To overcome this limitation, we present Neural Predictive Monitoring (NPM), a technique based on NSC and conformal prediction that complements NSC predictions with statistically sound estimates of uncertainty. This yields principled criteria for rejecting predictions likely to be incorrect, without knowing the true reachability values. We also present an active learning method that significantly reduces both the NSC predictor's error rate and the percentage of rejected predictions. Our approach is highly efficient, with computation times on the order of milliseconds, and effective: in our experimental evaluation, it successfully rejects almost all incorrect predictions.
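
    The rejection criterion in NPM builds on split conformal prediction: calibration nonconformity scores yield per-class p-values, from which confidence and credibility are derived and thresholded. The following Python sketch illustrates that general idea only; the predictor, the calibration scores, and the threshold epsilon are toy assumptions, not the paper's implementation.

        import numpy as np

        def nonconformity(prob, label):
            # High when the predictor assigns low probability to this label.
            return 1.0 - prob[label]

        def p_value(cal_scores, test_score):
            # Fraction of calibration scores at least as large as the test score.
            return (np.sum(cal_scores >= test_score) + 1) / (len(cal_scores) + 1)

        def predict_with_rejection(predict_proba, x, cal_scores, epsilon=0.05):
            """Return a 0/1 reachability prediction, or "reject" if too uncertain."""
            prob = predict_proba(x)                      # [P(negative), P(positive)]
            p_vals = [p_value(cal_scores[c], nonconformity(prob, c)) for c in (0, 1)]
            credibility = max(p_vals)                    # support for the best label
            confidence = 1.0 - min(p_vals)               # how clearly the other label is ruled out
            if credibility < epsilon or confidence < 1.0 - epsilon:
                return "reject"                          # prediction likely erroneous
            return int(np.argmax(prob))                  # 1: unsafe state reachable within the bound

        # Toy usage with a hypothetical predictor and per-class calibration scores.
        rng = np.random.default_rng(0)
        cal = {0: rng.uniform(0.0, 0.3, 200), 1: rng.uniform(0.0, 0.3, 200)}
        print(predict_with_rejection(lambda x: np.array([0.1, 0.9]), None, cal))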

    Neural Predictive Monitoring for Collective Adaptive Systems

    Reliable bike-sharing systems can lead to numerous environmental, economic and social benefits and therefore play a central role in the effective development of smart cities. Bike-sharing models deal with spatially distributed stations and interact with an unpredictable environment, the users. Monitoring the trustworthiness of such a collective system is of paramount importance to ensure a good quality of the delivered service, but this task can become computationally demanding due to the complexity of the model under study. Neural Predictive Monitoring (NPM) [5], a neural-network learning-based approach to predictive monitoring (PM) with statistical guarantees, can be employed to preemptively detect violations of a specific requirement, e.g., a station having no more bikes available or a station being full. The computational efficiency of NPM makes PM applicable at runtime even on embedded devices with limited computational power. The goal of this paper is to demonstrate the applicability of NPM to collective adaptive systems such as bike-sharing systems. In particular, we first analyze the performance of NPM over a collective system evolving deterministically. Then, following [7], we tackle a more realistic scenario, where sensors allow only for partial observability and where the system evolves in a stochastic fashion. We evaluate the approach on multiple bike-sharing network topologies, obtaining highly accurate predictions and effective error detection rules.
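
    As a rough illustration of monitoring a single station under partial observability, the sketch below feeds a sliding window of noisy fill-level readings to a violation predictor; the window length, station capacity, and the simple linear-trend predictor are assumptions standing in for the learned sequence model.

        import numpy as np
        from collections import deque

        WINDOW = 10      # number of past observations fed to the predictor
        CAPACITY = 20    # bikes a station can hold

        def violation_predictor(history):
            # Stand-in for a trained sequence model: flag a likely "empty" or
            # "full" station within the horizon by extrapolating the recent trend.
            trend = history[-1] + (history[-1] - history[0])
            return trend <= 0 or trend >= CAPACITY

        histories = {s: deque(maxlen=WINDOW) for s in ("station_A", "station_B")}

        def on_observation(station, noisy_fill_level):
            histories[station].append(noisy_fill_level)
            if len(histories[station]) == WINDOW and violation_predictor(list(histories[station])):
                print(f"warning: predicted empty/full violation at {station}")

        # Toy stream of noisy observations for one station draining over time.
        rng = np.random.default_rng(1)
        for level in range(15, 0, -1):
            on_observation("station_A", level + rng.normal(0, 0.5))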

    Conformal Prediction for STL Runtime Verification

    We are interested in predicting failures of cyber-physical systems during their operation. Particularly, we consider stochastic systems and signal temporal logic specifications, and we want to calculate the probability that the current system trajectory violates the specification. The paper presents two predictive runtime verification algorithms that predict future system states from the current observed system trajectory. As these predictions may not be accurate, we construct prediction regions that quantify prediction uncertainty using conformal prediction, a statistical tool for uncertainty quantification. Our first algorithm directly constructs a prediction region for the satisfaction measure of the specification so that we can predict specification violations with a desired confidence. The second algorithm constructs prediction regions for future system states first, and uses these to obtain a prediction region for the satisfaction measure. To the best of our knowledge, these are the first formal guarantees for a predictive runtime verification algorithm that applies to widely used trajectory predictors such as RNNs and LSTMs, while being computationally simple and making no assumptions on the underlying distribution. We present numerical experiments on an F-16 aircraft and a self-driving car.
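
    The first ("direct") algorithm can be pictured as conformalizing the predicted robustness value itself. The sketch below is a simplified illustration under assumed ingredients: a toy "always x >= 1" robustness function and a naive constant-velocity trajectory predictor stand in for the STL specifications and RNN/LSTM predictors used in the paper.

        import numpy as np

        def robustness(traj, threshold=1.0):
            # Toy robustness for "always x >= threshold": worst-case margin over time.
            return float(np.min(np.asarray(traj) - threshold))

        def calibrate(predict, prefixes, true_futures, delta=0.1):
            # Nonconformity score: predicted robustness minus true robustness.
            scores = np.sort([robustness(predict(p)) - robustness(f)
                              for p, f in zip(prefixes, true_futures)])
            k = min(int(np.ceil((len(scores) + 1) * (1 - delta))) - 1, len(scores) - 1)
            return scores[k]    # conformal correction C

        def monitor(predict, prefix, C):
            # With probability >= 1 - delta, the true robustness is at least rho_hat - C,
            # so an alarm is raised whenever satisfaction cannot be certified.
            lower_bound = robustness(predict(prefix)) - C
            return "alarm: possible violation" if lower_bound < 0 else "satisfied with confidence 1 - delta"

        # Toy usage with synthetic calibration data and a constant-velocity predictor.
        predict = lambda prefix: [prefix[-1] + (prefix[-1] - prefix[-2]) * k for k in range(1, 6)]
        rng = np.random.default_rng(1)
        prefixes = [list(rng.normal(2.0, 0.2, 5)) for _ in range(100)]
        futures = [list(rng.normal(2.0, 0.2, 5)) for _ in range(100)]
        C = calibrate(predict, prefixes, futures)
        print(monitor(predict, [2.0, 1.8, 1.6, 1.4, 1.3], C))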

    Conformal Quantitative Predictive Monitoring of STL Requirements for Stochastic Processes

    We consider the problem of predictive monitoring (PM), i.e., predicting at runtime the satisfaction of a desired property from the current system's state. Due to its relevance for runtime safety assurance and online control, PM methods need to be efficient to enable timely interventions against predicted violations, while providing correctness guarantees. We introduce quantitative predictive monitoring (QPM), the first PM method to support stochastic processes and rich specifications given in Signal Temporal Logic (STL). Unlike most of the existing PM techniques that predict whether or not some property φ is satisfied, QPM provides a quantitative measure of satisfaction by predicting the quantitative (also known as robust) STL semantics of φ. QPM derives prediction intervals that are highly efficient to compute and come with probabilistic guarantees, in that the intervals cover with arbitrary probability the STL robustness values relative to the stochastic evolution of the system. To do so, we take a machine-learning approach and leverage recent advances in conformal inference for quantile regression, thereby avoiding expensive Monte-Carlo simulations at runtime to estimate the intervals. We also show how our monitors can be combined in a compositional manner to handle composite formulas, without retraining the predictors nor sacrificing the guarantees. We demonstrate the effectiveness and scalability of QPM over a benchmark of four discrete-time stochastic processes with varying degrees of complexity.
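
    The intervals in QPM come from conformalized quantile regression: two quantile predictors of the STL robustness are trained and then inflated on a calibration set so that the resulting interval covers the true robustness with probability at least 1 - alpha. The sketch below shows that generic recipe with off-the-shelf gradient-boosted quantile regressors and synthetic data, which are illustrative assumptions rather than the paper's models or case studies.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        alpha = 0.1
        rng = np.random.default_rng(0)

        # Synthetic data: system state -> STL robustness of φ under the stochastic evolution.
        X = rng.uniform(-1, 1, size=(600, 2))
        y = X[:, 0] - X[:, 1] + rng.normal(0, 0.1 + 0.2 * np.abs(X[:, 0]), size=600)
        X_tr, X_cal, y_tr, y_cal = X[:400], X[400:], y[:400], y[400:]

        # Lower and upper quantile predictors of the robustness value.
        lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_tr, y_tr)
        hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_tr, y_tr)

        # Conformalization: inflate the interval by the conformal quantile of calibration errors.
        scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
        k = min(int(np.ceil((len(scores) + 1) * (1 - alpha))) - 1, len(scores) - 1)
        Q = np.sort(scores)[k]

        def robustness_interval(x):
            # Covers the true robustness with probability >= 1 - alpha (marginally).
            x = np.asarray(x).reshape(1, -1)
            return lo.predict(x)[0] - Q, hi.predict(x)[0] + Q

        print(robustness_interval([0.3, -0.2]))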

    Deep Learning for Abstraction, Control and Monitoring of Complex Cyber-Physical Systems

    Cyber-Physical Systems (CPS) consist of digital devices that interact with some physical components. Their popularity and complexity are growing exponentially, giving birth to new, previously unexplored, safety-critical application domains. As CPS permeate our daily lives, it becomes imperative to reason about their reliability. Formal methods provide rigorous techniques for verification, control and synthesis of safe and reliable CPS. However, these methods do not scale with the complexity of the system, thus their applicability to real-world problems is limited. A promising strategy is to leverage deep learning techniques to tackle the scalability issue of formal methods, transforming unfeasible problems into approximately solvable ones. The approximate models are trained over observations which are solutions of the formal problem. In this thesis, we focus on the following tasks, which are computationally challenging: the modeling and simulation of a complex stochastic model, the design of a safe and robust control policy for a system acting in a highly uncertain environment, and the runtime verification problem under full or partial observability. Our approaches, based on deep learning, are indeed applicable to real-world complex and safety-critical systems acting under strict real-time constraints and in the presence of a significant amount of uncertainty.

    Conformal Predictive Safety Filter for RL Controllers in Dynamic Environments

    The interest in using reinforcement learning (RL) controllers in safety-critical applications such as robot navigation around pedestrians motivates the development of additional safety mechanisms. Running RL-enabled systems among uncertain dynamic agents may result in high counts of collisions and failures to reach the goal. The system could be safer if the pre-trained RL policy were uncertainty-informed. For that reason, we propose conformal predictive safety filters that: 1) predict the other agents' trajectories, 2) use statistical techniques to provide uncertainty intervals around these predictions, and 3) learn an additional safety filter that closely follows the RL controller but avoids the uncertainty intervals. We use conformal prediction to learn uncertainty-informed predictive safety filters, which make no assumptions about the agents' distribution. The framework is modular and outperforms the existing controllers in simulation. We demonstrate our approach with multiple experiments in a collision avoidance gym environment and show that our approach minimizes the number of collisions without making overly conservative predictions.
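
    The filtering idea can be illustrated as follows: conformal prediction turns pedestrian-trajectory prediction errors into an uncertainty radius, and the filter keeps the RL action only if it stays outside every inflated prediction region. In the paper the filter itself is learned; in this hedged sketch a simple projection onto a set of candidate actions stands in for it, and the predictor, distances, and action set are toy assumptions.

        import numpy as np

        def conformal_radius(prediction_errors, delta=0.05):
            # (1 - delta) conformal quantile of calibration prediction errors (e.g., meters).
            s = np.sort(prediction_errors)
            k = min(int(np.ceil((len(s) + 1) * (1 - delta))) - 1, len(s) - 1)
            return s[k]

        def safety_filter(rl_action, robot_pos, predicted_agents, radius, candidates):
            # Return the action closest to the RL action that avoids all uncertainty balls.
            def safe(action):
                next_pos = robot_pos + action
                return all(np.linalg.norm(next_pos - p) > radius for p in predicted_agents)
            if safe(rl_action):
                return rl_action
            safe_candidates = [a for a in candidates if safe(a)]
            if not safe_candidates:
                return np.zeros_like(rl_action)      # fall back to stopping
            return min(safe_candidates, key=lambda a: np.linalg.norm(a - rl_action))

        # Toy usage: calibration errors from a hypothetical pedestrian predictor.
        rng = np.random.default_rng(2)
        radius = conformal_radius(rng.rayleigh(0.3, 500))
        action = safety_filter(
            rl_action=np.array([0.5, 0.0]),
            robot_pos=np.array([0.0, 0.0]),
            predicted_agents=[np.array([0.6, 0.1])],
            radius=radius,
            candidates=[np.array([0.0, 0.5]), np.array([0.0, -0.5]), np.array([-0.5, 0.0])],
        )
        print(action, radius)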

    Conservative Safety Monitors of Stochastic Dynamical Systems

    Generating accurate runtime safety estimates for autonomous systems is vital to ensuring their continued proliferation. However, exhaustive reasoning about future behaviors is generally too complex to do at runtime. To provide scalable and formal safety estimates, we propose a method for leveraging design-time model checking results at runtime. Specifically, we model the system as a probabilistic automaton (PA) and compute bounded-time reachability probabilities over the states of the PA at design time. At runtime, we combine distributions of state estimates with the model checking results to produce a bounded-time safety estimate. We argue that our approach produces well-calibrated safety probabilities, assuming the estimated state distributions are well-calibrated. We evaluate our approach on simulated water tanks.
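
    The runtime step reduces to weighting precomputed reachability probabilities by the current state-estimate distribution. A minimal sketch, assuming a hypothetical three-state water-tank abstraction and made-up numbers (in the paper the PA and its probabilities come from design-time model checking):

        # Design-time model checking result: P(reach unsafe within bound T | PA state s).
        reach_prob = {
            "level_low": 0.40,
            "level_medium": 0.05,
            "level_high": 0.01,
        }

        def runtime_safety_estimate(state_distribution):
            # Unsafety probability = sum_s P(state = s) * P(reach unsafe | s).
            return sum(p * reach_prob[s] for s, p in state_distribution.items())

        # Runtime state estimate, e.g., from a calibrated state estimator.
        belief = {"level_low": 0.1, "level_medium": 0.7, "level_high": 0.2}
        print("estimated probability of reaching an unsafe state:", runtime_safety_estimate(belief))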