189 research outputs found

    Scalable Synthesis and Verification: Towards Reliable Autonomy

    We have seen the growing deployment of autonomous systems in our daily life, ranging from safety-critical self-driving cars to dialogue agents. While impactful and impressive, these systems often do not come with guarantees and are not rigorously evaluated for failure cases. This is in part due to the limited scalability of the tools available for designing correct-by-construction systems or for verifying them post hoc. Another key limitation is the lack of models of the complex environments with which autonomous systems often have to interact. To overcome these bottlenecks to designing reliable autonomous systems, this thesis makes contributions along three fronts. First, we develop an approach for parallelized synthesis from linear-time temporal logic specifications corresponding to the generalized reactivity (1) fragment. We begin by identifying a special case corresponding to singleton liveness goals that allows for a decomposition of the synthesis problem, which facilitates parallelized synthesis. Based on the intuition from this special case, we propose a more general approach for parallelized synthesis that relies on identifying equicontrollable states. Second, we consider learning-based approaches to enable verification at scale for complex systems, and for autonomous systems that interact with black-box environments. For the former, we propose a new abstraction refinement procedure based on machine learning to improve the performance of nonlinear constraint solving algorithms on large-scale problems. For the latter, we present a data-driven approach based on chance-constrained optimization that allows a system to be evaluated for specification conformance without an accurate model of the environment. We demonstrate this approach on several tasks, including a lane-change scenario with real-world driving data. Lastly, we consider the problem of interpreting and verifying learning-based components such as neural networks. We introduce a new method based on Craig interpolants for computing compact symbolic abstractions of pre-images for neural networks. Our approach relies on iteratively computing approximations that provably overapproximate and underapproximate the pre-images at all layers. Further, building on existing work for training neural networks for verifiability in the classification setting, we propose extensions that extend the approach to more general architectures and temporal specifications.
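
    As background, the generalized reactivity (1) fragment mentioned above is the standard class of specifications of the implication form (this is general background on the fragment, not a detail taken from the thesis):

    \[
    \varphi \;=\; \Big(\theta_e \,\wedge\, \Box\rho_e \,\wedge\, \bigwedge_{i=1}^{m} \Box\Diamond J^e_i\Big) \;\rightarrow\; \Big(\theta_s \,\wedge\, \Box\rho_s \,\wedge\, \bigwedge_{j=1}^{n} \Box\Diamond J^s_j\Big),
    \]

    where \theta, \rho, and J denote the initial conditions, transition (safety) constraints, and recurrence (liveness) goals of the environment (e) and the system (s); the singleton special case discussed above restricts the number of liveness goals, e.g. to n = 1.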

    Synthesis Of Distributed Protocols From Scenarios And Specifications

    Distributed protocols, typically expressed as stateful agents communicating asynchronously over buffered communication channels, are difficult to design correctly. This difficulty has spurred decades of research in the area of automated model-checking algorithms. In turn, practical implementations of model-checking algorithms have enabled protocol developers to prove the correctness of such distributed protocols. However, model-checking techniques are only marginally useful during the actual development of such protocols, typically serving as a debugging aid once a reasonably complete version of the protocol has already been developed. The development process itself is often tedious and requires the designer to reason about complex interactions arising out of the concurrency and asynchrony inherent to such protocols. In this dissertation, we describe program synthesis techniques which can be applied as an enabling technology to ease the task of developing such protocols. Specifically, the programmer provides a natural, but incomplete, description of the protocol in an intuitive representation, such as scenarios or an incomplete protocol. This description specifies the behavior of the protocol in the common cases. The programmer also specifies a set of high-level formal requirements that a correct protocol is expected to satisfy. These requirements can include safety requirements as well as liveness requirements in the form of Linear Temporal Logic (LTL) formulas. We describe techniques to synthesize a correct protocol which is consistent with the common-case behavior specified by the programmer and also satisfies the high-level safety and liveness requirements set forth by the programmer. We also describe techniques for program synthesis in general, which serve to enable the solutions to distributed protocol synthesis that this dissertation explores.
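
    As a hedged illustration (the abstract does not list concrete requirements), the safety and liveness requirements mentioned above are typically LTL formulas of the shape

    \[
    \Box\,\neg\mathit{error} \qquad\text{and}\qquad \Box\,(\mathit{request} \rightarrow \Diamond\,\mathit{response}),
    \]

    where the first forbids ever reaching an error state (safety) and the second demands that every request is eventually answered (liveness); error, request, and response are placeholder propositions over the protocol state.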

    A Property Specification Pattern Catalog for Real-Time System Verification with UPPAAL

    Context: The goal of specification pattern catalogs for real-time requirements is to mask the complexity of specifying such requirements in a timed temporal logic for verification. For this purpose, they provide frontends to express and translate pattern-based natural language requirements into formulae in a suitable logic. However, the widely used real-time model checking tool UPPAAL supports only a restricted subset of such formulae, limited to basic, non-nested reachability, safety, and liveness properties. This restriction renders many specification patterns inapplicable. As a workaround, timed observer automata need to be constructed manually to express the sophisticated requirements envisioned by these patterns. Objective: In this work, we fill these gaps by providing a comprehensive specification pattern catalog for UPPAAL. The catalog supports qualitative and real-time requirements and covers all corresponding patterns of existing catalogs. Method: The catalog we propose is integrated with UPPAAL. It supports the specification of qualitative and real-time requirements using patterns and provides an automated generator that translates these requirements into observer automata and TCTL formulae. The resulting artifacts are used for verifying systems in UPPAAL. Thus, our catalog enables an automated end-to-end verification process for UPPAAL based on property specification patterns and observer automata. Results: We evaluate our catalog on three UPPAAL system models reported in the literature and mostly applied in an industrial setting. As a result, we were not only able to reproduce the related UPPAAL models but also to validate an automated, seamless, and accurate pattern- and observer-based verification process. Conclusion: The proposed property specification pattern catalog for UPPAAL enables practitioners to specify qualitative and real-time requirements...
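
    To make the restriction concrete (standard UPPAAL background rather than a claim from the paper), UPPAAL's query language essentially supports only non-nested properties of the forms

    \[
    A\Box\,\varphi, \qquad E\Diamond\,\varphi, \qquad A\Diamond\,\varphi, \qquad E\Box\,\varphi, \qquad \varphi \rightsquigarrow \psi \;\;(\text{leads-to}),
    \]

    where \varphi and \psi are state predicates. A time-bounded response requirement such as "whenever \varphi holds, \psi must follow within c time units" has no direct counterpart among these forms; it is typically encoded by composing the model with an observer automaton that resets a clock when \varphi occurs and moves to an error location if \psi does not follow in time, after which a simple query such as A\Box\,\neg Observer.error is checked.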

    Techniques for the formal verification of analog and mixed-signal designs

    Embedded systems are becoming a core technology in a growing range of electronic devices. Cornerstones of embedded systems are analog and mixed signal (AMS) designs, which are integrated circuits required at the interfaces with the real-world environment. The verification of AMS designs is concerned with assuring correct functionality, in addition to checking whether an AMS design is robust with respect to different types of inaccuracies like parameter tolerances, nonlinearities, etc. The verification framework described in this thesis is composed of two proposed methodologies, each concerned with a class of AMS designs: continuous-time AMS designs and discrete-time AMS designs. The common idea behind both methodologies builds on Bounded Model Checking (BMC) algorithms. In BMC, we search for a counter-example to a property verified against the design model for a bounded number of verification steps. If a concrete counter-example is found, then the verification is complete and reports a failure; otherwise, we need to increment the number of steps until the property is validated. In general, the verification is incomplete because of limitations in the time and memory available for it. To alleviate this problem, we observed that, under certain conditions and for some classes of specification properties, the verification can be made complete if we complement BMC with other methods such as abstraction and constraint-based verification. To test and validate the proposed approaches, we developed a prototype implementation in Mathematica and targeted analog and mixed signal systems, like oscillator circuits, switched-capacitor based designs, and Delta-Sigma modulators, for our initial tests of this approach.
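
    For reference (the standard BMC formulation rather than anything specific to this thesis), checking a safety property P up to bound k against a model with initial-state predicate I and transition relation T amounts to deciding satisfiability of

    \[
    \mathrm{BMC}_k \;=\; I(s_0) \;\wedge\; \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \;\wedge\; \bigvee_{i=0}^{k} \neg P(s_i).
    \]

    A satisfying assignment is a concrete counter-example of length at most k, while an unsatisfiable result only rules out violations up to that bound; this is why the bound must be incremented, or the method complemented with abstraction or constraint-based arguments, before full correctness can be concluded.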

    Slicing-based debugging of web applications in rewriting logic

    The pervasiveness of computing on the Internet has led to an explosive growth of Web applications which, together with their ever-increasing complexity, has turned their design and development into a major challenge. Unfortunately, the huge expansion in the development and use of Web computation has not been matched by the development of methods, models, and debugging tools that help the developer diagnose, quickly and easily, potential problems in a Web application. There is an urgent demand for analysis and verification facilities capable of preventing insecure software that could cause unavailability of systems or services, or provide access to private data or internal resources of a given organization. The main goal of this MSc thesis is to improve the debugging of Web applications by embedding novel analysis and verification techniques that rely on the program semantics. As a practical realization of these ideas, we use Web-TLR, a verification engine for dynamic Web applications based on Rewriting Logic. We extend Web-TLR with a novel functionality that supports effective Web debugging for realistic Web applications involving complex execution traces. This functionality is based on a backward trace slicing technique that relies on dynamic labeling. In order to extend the class of programs covered by the debugging methodology, we formalize a generalization of the slicer to Conditional Rewriting Logic theories, greatly simplifying the debugging task by providing a novel and sophisticated form of pattern matching. Frechina Navarro, F. (2011). Slicing-based debugging of web applications in rewriting logic. http://hdl.handle.net/10251/15637
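
    The following toy sketch illustrates the general idea of backward slicing over a recorded execution trace. It is our own simplified illustration over labeled read/write steps, not the dynamic-labeling slicer over rewriting logic theories that Web-TLR actually implements:

```python
# Toy backward slice over a recorded trace: each step records which labels it
# reads and which labels it writes. Starting from the labels observed in the
# erroneous output, walk the trace backwards and keep only the steps that
# (transitively) contributed to those labels.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    reads: set
    writes: set

def backward_slice(trace, observed):
    relevant = set(observed)
    kept = []
    for step in reversed(trace):
        if step.writes & relevant:
            kept.append(step)
            relevant = (relevant - step.writes) | step.reads
    # 'relevant' now holds the originating labels the sliced output depends on
    return list(reversed(kept)), relevant

trace = [
    Step("render-header", {"title"}, {"html.head"}),
    Step("compute-balance", {"db.balance"}, {"page.balance"}),
    Step("render-footer", {"year"}, {"html.foot"}),
]
steps, origins = backward_slice(trace, {"page.balance"})
print([s.name for s in steps], origins)  # ['compute-balance'] {'db.balance'}
```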

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, and CPS, hardware, and industrial applications.

    Code deobfuscation by program synthesis-aided simplification of mixed boolean-arithmetic expressions

    Bachelor's thesis in Mathematics, Facultat de MatemĂ tiques, Universitat de Barcelona, Year: 2020, Advisors: RaĂșl Roca CĂĄnovas, Antoni Benseny and Mario Reyes de los Mozos. [en] This project studies the theoretical background of Mixed Boolean-Arithmetic (MBA) expressions as well as their practical applicability within the field of code obfuscation, a technique used both by malware threats and by software protection in order to complicate the process of reverse engineering (parts of) a program. An MBA expression is composed of integer arithmetic operators, e.g. (+, −, ∗), and bitwise operators, e.g. (∧, √, ⊕, ¬). MBA expressions can be leveraged to obfuscate the data flow of code by iteratively applying rewrite rules and function identities that complicate (obfuscate) the initial expression while preserving its semantic behavior. This possibility is motivated by the fact that operators from these two domains do not interact well together: we have no rules (distributivity, factorization, ...) or general theory to deal with this mixing of operators. Current deobfuscation techniques that address the simplification of this type of data-flow obfuscation are limited by being strongly tied to syntactic complexity. We explore novel program synthesis approaches to simplifying MBA expressions by reasoning on the semantics of the obfuscated expressions instead of their syntax, discussing their applicability as well as their limits. We present our own tool, r2syntia, which integrates Syntia, an open-source program synthesis tool, into the reverse engineering framework radare2 in order to retrieve the semantics of obfuscated code from its input/output behavior. Finally, we provide some improvement ideas and potential areas for future work.
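
    As a concrete example of such a rewrite rule (a classic MBA identity, not one taken from this project), integer addition can be replaced by a mixed Boolean-arithmetic expression whose equivalence is easy to confirm experimentally:

```python
import random

MASK = 0xFFFFFFFF  # model 32-bit machine arithmetic

def obfuscated_add(x: int, y: int) -> int:
    # Classic MBA rewrite of addition: x + y == (x ^ y) + 2 * (x & y) (mod 2**32).
    # XOR is the carry-less sum, and AND captures the carry bits, shifted left by one.
    return ((x ^ y) + 2 * (x & y)) & MASK

for _ in range(100_000):
    x, y = random.getrandbits(32), random.getrandbits(32)
    assert obfuscated_add(x, y) == (x + y) & MASK
print("x + y == (x ^ y) + 2*(x & y) held on all sampled 32-bit inputs")
```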

    Formal verification of the Pastry protocol

    Pastry is a structured P2P algorithm realizing a Distributed Hash Table (DHT) over an underlying virtual ring of nodes. Hash keys are assigned to the numerically closest node, according to identifiers that both keys and nodes draw from the same Id space. Nodes join and leave the ring dynamically, and it is desired that a lookup request for a key issued by an arbitrary node is routed to the node responsible for that key, which then delivers the answer message. Several implementations of Pastry are available and have been applied in practice, but no attempt has so far been made to formally describe the algorithm or to verify its properties. Since Pastry combines rather complex data structures, asynchronous communication, concurrency, and resilience to churn, i.e. the spontaneous join and departure of nodes, it makes an interesting target for verification. This thesis formally models and improves Pastry's core algorithms, such that they provide the correct lookup service in the presence of churn and maintain local data structures that adapt to dynamic changes of the neighborhood. The thesis focuses on the Join protocol of Pastry and formally defines the different statuses (from "dead" to "ready") of a node according to its stage during the join. Only "ready" nodes are supposed to have consistent key mappings among each other and are allowed to deliver answer messages. The correctness property identified by this thesis is CorrectDelivery, stating that there is always at most one node that can deliver an answer to a lookup request for a key, and that this node is the numerically closest "ready" node to that key. This property is non-trivial to preserve in the presence of churn. The specification language TLA+ is used to model different versions of the Pastry algorithm, starting with CastroPastry, followed by HaeberlenPastry, IdealPastry, and finally LuPastry. The TLA+ model checker TLC is employed to validate the models and to search for bugs. The models are simplified for more efficient checking with TLC, thereby mitigating the state explosion problem. In the course of this work, unexpected violations of CorrectDelivery in CastroPastry and HaeberlenPastry are discovered and analyzed. Based on this analysis, HaeberlenPastry is improved to a new design of the Pastry protocol, IdealPastry, which is first verified using the interactive theorem prover TLAPS for TLA+. IdealPastry assumes that a "ready" node handles one joining node at a time, and it further assumes that (1) no nodes depart and (2) no concurrent joins occur between two "ready" nodes close to each other. The last assumption of IdealPastry is removed in its improved version LuPastry. In LuPastry, a "ready" node adds the joining node directly when it receives the join request and does not accept any further join requests until it receives confirmation from the current joining node that it is "ready". LuPastry is proved correct w.r.t. CorrectDelivery under the assumption that no nodes leave the network; this assumption cannot be relaxed further because the network may become separated when particular nodes leave it simultaneously. The most subtle part of the deductive system verification is the search for an appropriate inductive invariant that implies the required safety property and is inductively preserved by all possible actions. The search is guided by the construction of the proof, where TLC is used to discover unexpected violations of a hypothetical invariant postulated at an earlier stage.
The final proof of LuPastry consists of more than 10,000 proof steps, which are checked interactively with TLAPS, launching different back-end automated theorem provers. This thesis also serves as a case study, giving evidence of the possibility, and a methodology, of formally modeling, analyzing, and manually conducting a formal proof of a complex transition system with respect to its safety property. Using LuPastry as a template, a more general framework for the verification of DHTs can be constructed.
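
    To make the notion of the "numerically closest node" concrete, here is a minimal sketch (our own illustration, not code from the thesis; it assumes a circular 128-bit identifier space with wraparound distance) of how a key is mapped to the node responsible for it:

```python
ID_SPACE = 2 ** 128  # assumed size of the circular identifier space

def ring_distance(a: int, b: int) -> int:
    # Distance between two identifiers on the ring, taking wraparound into account.
    d = abs(a - b) % ID_SPACE
    return min(d, ID_SPACE - d)

def responsible_node(key: int, ready_nodes: list) -> int:
    # The numerically closest "ready" node is the one allowed to answer a lookup
    # for this key; the CorrectDelivery property states that it is unique.
    return min(ready_nodes, key=lambda node: ring_distance(key, node))

# Example with a handful of ready nodes in a tiny portion of the id space:
print(responsible_node(17, [5, 20, 90]))  # -> 20
```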