473 research outputs found
Diagnosis and Repair for Synthesis from Signal Temporal Logic Specifications
We address the problem of diagnosing and repairing specifications for hybrid
systems formalized in signal temporal logic (STL). Our focus is on the setting
of automatic synthesis of controllers in a model predictive control (MPC)
framework. We build on recent approaches that reduce the controller synthesis
problem to solving one or more mixed integer linear programs (MILPs), where
infeasibility of a MILP usually indicates unrealizability of the controller
synthesis problem. Given an infeasible STL synthesis problem, we present
algorithms that provide feedback on the reasons for unrealizability, and
suggestions for making it realizable. Our algorithms are sound and complete,
i.e., they provide a correct diagnosis, and always terminate with a non-trivial
specification that is feasible using the chosen synthesis method, when such a
solution exists. We demonstrate the effectiveness of our approach on the
synthesis of controllers for various cyber-physical systems, including an
autonomous driving application and an aircraft electric power system.
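To make the STL setting concrete, the sketch below illustrates the standard quantitative (robustness) semantics of two core temporal operators over a discrete-time trace; it is a toy illustration of the semantics in general, not the diagnosis algorithm from this paper, and all names are made up.

```python
# Illustrative sketch: quantitative (robustness) semantics of two core STL
# operators over a discrete-time trace. Positive robustness means the trace
# satisfies the property; negative means it violates it.

def always_lt(trace, threshold):
    """Robustness of G(x < threshold): worst-case margin over the trace."""
    return min(threshold - x for x in trace)

def eventually_lt(trace, threshold):
    """Robustness of F(x < threshold): best-case margin over the trace."""
    return max(threshold - x for x in trace)

trace = [0.5, 1.2, 0.8]
print(always_lt(trace, 1.0))      # negative: the sample 1.2 exceeds the bound
print(eventually_lt(trace, 1.0))  # positive: some sample is below the bound
```

Infeasibility diagnosis in the MILP encoding amounts to locating the subformulas whose robustness constraints cannot all be made non-negative at once.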
Oracle-Guided Design and Analysis of Learning-Based Cyber-Physical Systems
We live in a world where autonomous systems, such as self-driving cars, surgical robots, and robotic manipulators, are becoming a reality. Such systems are considered safety-critical since they interact with humans on a regular basis. Hence, before such systems can be integrated into our day-to-day lives, we need to guarantee their safety. Recent success in machine learning (ML) and artificial intelligence (AI) has led to an increase in their use in real-world robotic systems, for example, complex perception modules in self-driving cars and deep reinforcement learning controllers in robotic manipulators. Although powerful, they introduce an additional level of complexity when it comes to the formal analysis of autonomous systems. In this thesis, such systems are designated as Learning-Based Cyber-Physical Systems (LB-CPS). We take inspiration from the Oracle-Guided Inductive Synthesis (OGIS) paradigm to develop frameworks that can aid in achieving formal guarantees at different stages of an autonomous system design and analysis pipeline. Furthermore, we show that to guarantee the safety of LB-CPS, the design (synthesis) and analysis (verification) must each consider feedback from the other. We consider five important parts of the design and analysis process and show a strong coupling among them, namely (i) Robust Control Synthesis from High-Level Safety Specifications; (ii) Diagnosis and Repair of Safety Requirements for Control Synthesis; (iii) Counterexample-Guided Data Augmentation for training high-accuracy ML models; (iv) Simulation-Guided Falsification and Verification against Adversarial Environments; and (v) Bridging the Model and Real-World Gap. Finally, we introduce VerifAI, a software toolkit for the design and analysis of AI-based systems, developed to provide a common formal platform for implementing design and analysis frameworks for LB-CPS.
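The simulation-guided falsification idea mentioned in item (iv) can be sketched in a few lines: sample environment parameters, simulate, and keep the run with the lowest safety margin. The model and specification below are toy stand-ins invented for illustration, not the thesis's actual benchmarks.

```python
import random

# Hypothetical sketch of simulation-guided falsification: randomly sample an
# environment parameter, simulate the closed loop, and track the run with the
# lowest safety margin. A negative margin is a counterexample.

def simulate(speed):
    """Toy closed-loop model: safety margin shrinks as speed grows."""
    return 10.0 - 0.5 * speed ** 2

def falsify(trials=1000, seed=0):
    rng = random.Random(seed)
    worst_margin, worst_speed = float("inf"), None
    for _ in range(trials):
        speed = rng.uniform(0.0, 10.0)
        margin = simulate(speed)
        if margin < worst_margin:
            worst_margin, worst_speed = margin, speed
    return worst_margin, worst_speed

margin, speed = falsify()
print(margin < 0)  # a counterexample was found
```

Practical falsifiers replace the random sampler with global optimizers guided by the robustness value, but the loop structure is the same.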
A Review of Formal Methods applied to Machine Learning
We review state-of-the-art formal methods applied to the emerging field of
the verification of machine learning systems. Formal methods can provide
rigorous correctness guarantees on hardware and software systems. Thanks to the
availability of mature tools, their use is well established in the industry,
and in particular to check safety-critical applications as they undergo a
stringent certification process. As machine learning is becoming more popular,
machine-learned components are now considered for inclusion in critical
systems. This raises the question of their safety and their verification. Yet,
established formal methods are limited to classic, i.e., non-machine-learned,
software. Applying formal methods to verify systems that include machine
learning has only been considered recently and poses novel challenges in
soundness, precision, and scalability.
We first recall established formal methods and their current use in an
exemplar safety-critical field, avionic software, with a focus on abstract
interpretation based techniques as they provide a high level of scalability.
This provides a gold standard and sets high expectations for machine learning
verification. We then provide a comprehensive and detailed review of the formal
methods developed so far for machine learning, highlighting their strengths and
limitations. The large majority of them verify trained neural networks and
employ either SMT, optimization, or abstract interpretation techniques. We also
discuss methods for support vector machines and decision tree ensembles, as
well as methods targeting training and data preparation, which are critical but
often neglected aspects of machine learning. Finally, we offer perspectives for
future research directions towards the formal verification of machine learning
systems.
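Of the techniques the review surveys, abstract interpretation for trained neural networks is the easiest to illustrate compactly. The sketch below propagates input intervals through one affine layer and a ReLU to obtain sound (if loose) output bounds; the weights are made up for illustration.

```python
# Minimal sketch of interval abstract interpretation for a one-layer ReLU
# network: propagate elementwise input intervals through y = W x + b and ReLU
# to get sound output bounds.

def affine_interval(lo, hi, weights, bias):
    """Sound interval bounds for W x + b given x in [lo, hi] elementwise."""
    out_lo, out_hi = [], []
    for row, b in zip(weights, bias):
        # A positive weight attains its minimum at lo[j], maximum at hi[j];
        # a negative weight does the opposite.
        l = b + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = b + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_interval(lo, hi):
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -1.0]
lo, hi = affine_interval([0.0, 0.0], [1.0, 1.0], W, b)
lo, hi = relu_interval(lo, hi)
print(lo, hi)  # sound, possibly loose, bounds on the network's outputs
```

The soundness/precision/scalability trade-off the review discusses shows up directly here: interval bounds are cheap but can be very loose, which is why SMT- and optimization-based methods exist alongside them.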
Formal Synthesis of Controllers for Safety-Critical Autonomous Systems: Developments and Challenges
In recent years, formal methods have been extensively used in the design of
autonomous systems. By employing mathematically rigorous techniques, formal
methods can provide fully automated reasoning processes with provable safety
guarantees for complex dynamic systems with intricate interactions between
continuous dynamics and discrete logics. This paper provides a comprehensive
review of formal controller synthesis techniques for safety-critical autonomous
systems. Specifically, we categorize the formal control synthesis problem based
on diverse system models, encompassing deterministic, non-deterministic, and
stochastic models, and on various formal safety-critical specifications involving logic,
real-time, and real-valued domains. The review covers fundamental formal
control synthesis techniques, including abstraction-based approaches and
abstraction-free methods. We explore the integration of data-driven synthesis
approaches in formal control synthesis. Furthermore, we review formal
techniques tailored for multi-agent systems (MAS), with a specific focus on
various approaches to address the scalability challenges in large-scale
systems. Finally, we discuss some recent trends and highlight research
challenges in this area.
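The abstraction-based approaches covered in the review reduce controller synthesis to a game on a finite abstraction. The sketch below shows the classic safety fixed point on such an abstraction: iteratively remove states from which no input keeps all successors inside the safe set. The four-state transition system is invented for illustration.

```python
# Illustrative safety fixed point on a finite abstraction: compute the set of
# states from which some input keeps the system safe forever, plus a
# controller choosing such an input.

def safe_controller(states, inputs, post, safe):
    """post(s, u) -> set of abstract successors of s under input u."""
    win = set(safe)
    while True:
        new_win = {
            s for s in win
            if any(post(s, u) and post(s, u) <= win for u in inputs)
        }
        if new_win == win:
            break
        win = new_win
    ctrl = {s: next(u for u in inputs if post(s, u) and post(s, u) <= win)
            for s in win}
    return win, ctrl

# Tiny 4-state abstraction; state 3 is unsafe.
T = {(0, 'a'): {0}, (0, 'b'): {1}, (1, 'a'): {0}, (1, 'b'): {3},
     (2, 'a'): {3}, (2, 'b'): {3}}
post = lambda s, u: T.get((s, u), set())
win, ctrl = safe_controller({0, 1, 2, 3}, ['a', 'b'], post, {0, 1, 2})
print(sorted(win))  # states from which safety can be enforced
```

State 2 is safe but loses immediately (every input leads to the unsafe state), so the fixed point removes it; this distinction between "currently safe" and "controllably safe" is exactly what synthesis adds over verification.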
Designing Trustworthy Autonomous Systems
The design of autonomous systems is challenging, and ensuring their trustworthiness can have different meanings, such as: i) ensuring consistency and completeness of the requirements through a correct elicitation and formalization process; ii) ensuring that requirements are correctly mapped to system implementations so that system behaviors never violate the requirements; iii) maximizing the reuse of available components and subsystems in order to cope with the design complexity; and iv) ensuring correct coordination of the system with its environment. Several techniques have been proposed over the years to cope with specific problems. However, a holistic design framework that, leveraging existing tools and methodologies, practically helps the analysis and design of autonomous systems is still missing. This thesis explores the problem of building trustworthy autonomous systems from different angles. We have analyzed how current approaches to formal verification can provide assurances: 1) to the requirement corpus itself, by formalizing requirements with assume/guarantee contracts to detect incompleteness and conflicts; 2) to the reward function used to train the system, so that the requirements do not get misinterpreted; 3) to the execution of the system, by run-time monitoring and enforcement of certain invariants; 4) to the coordination of the system with other external entities in a system-of-systems scenario; and 5) to system behaviors, by automatically synthesizing a correct policy.
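The assume/guarantee contracts mentioned in item 1) admit a compact refinement check when behaviors are drawn from a finite universe. The sketch below uses the standard rule for saturated contracts; all sets and names are invented for illustration, not taken from the thesis.

```python
# Toy sketch of assume/guarantee contract refinement over a finite universe
# of behaviors. After saturation (G absorbs every behavior outside A),
# C1 = (A1, G1) refines C2 = (A2, G2) iff A2 <= A1 and G1 <= G2.

def saturate(universe, assumptions, guarantees):
    """Add every behavior outside the assumptions to the guarantees."""
    return guarantees | (universe - assumptions)

def refines(universe, c1, c2):
    a1, g1 = c1
    a2, g2 = c2
    g1s = saturate(universe, a1, g1)
    g2s = saturate(universe, a2, g2)
    return a2 <= a1 and g1s <= g2s

U = frozenset(range(8))
top = (frozenset({0, 1, 2}), frozenset({0, 1, 2, 3}))  # weaker contract
sub = (frozenset({0, 1, 2, 3}), frozenset({0, 1}))     # candidate refinement
print(refines(U, sub, top))  # sub assumes less and guarantees more
```

A refinement that fails this check is exactly the kind of inconsistency between top-level and refined requirements that the thesis proposes to detect.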
Computer Aided Verification
This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking; program analysis using polyhedra; synthesis; learning; runtime verification; hybrid and timed systems; tools; probabilistic systems; static analysis; theory and security; SAT, SMT, and decision procedures; concurrency; and CPS, hardware, and industrial applications.
Computer Aided Verification
This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.
Explanation of the Model Checker Verification Results
Whenever new requirements are introduced for a system, the correctness and consistency of the system specification must be verified, which is often done manually in industrial settings. One viable option to overcome the disadvantages of this manual analysis is to employ contract-based design, which can automate the verification process to determine whether the refinements of top-level requirements are consistent. Verification can thus be performed iteratively to ensure the system's correctness and consistency in the face of any change in specifications.
That said, it is still challenging to deploy formal approaches in industry due to their lack of usability and the difficulty of interpreting verification results. For instance, if the model checker identifies an inconsistency during verification, it generates a counterexample while also indicating that the given input specifications are inconsistent. Here, the formidable challenge is to comprehend the generated counterexample, which is often lengthy, cryptic, and complex. Furthermore, it is the engineer's responsibility to identify the inconsistent specification among a potentially huge set of specifications.
This PhD thesis proposes a counterexample explanation approach for formal methods that simplifies and encourages their use by presenting user-friendly explanations of the verification results. The proposed counterexample explanation approach identifies and explains relevant information from the verification result in a form resembling natural-language statements. It extracts relevant information by identifying the inconsistent specifications from among the set of specifications, as well as erroneous states and variables from the counterexample. The counterexample explanation approach is evaluated using two methods: (1) evaluation with different application examples, and (2) a user study in the form of a one-group pretest-posttest experiment.
- …