149 research outputs found

    Global guidance for local generalization in model checking

    SMT-based model checkers, especially IC3-style ones, are currently the most effective techniques for verification of infinite-state systems. They infer global inductive invariants via local reasoning about a single step of the transition relation of a system, while employing SMT-based procedures, such as interpolation, to mitigate the limitations of local reasoning and allow for better generalization. Unfortunately, these mitigations intertwine model checking with heuristics of the underlying SMT solver, negatively affecting the stability of model checking. In this paper, we propose to tackle the limitations of locality in a systematic manner. We introduce explicit global guidance into the local reasoning performed by IC3-style algorithms. To this end, we extend the SMT-IC3 paradigm with three novel rules, designed to mitigate fundamental sources of failure that stem from locality. We instantiate these rules for Linear Integer Arithmetic and Linear Rational Arithmetic and implement them on top of the Spacer solver in Z3. Our empirical results show that GSpacer, Spacer extended with global guidance, is significantly more effective than both Spacer and sole global reasoning, and, furthermore, is insensitive to interpolation.
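
    To make the setting concrete, here is a minimal sketch (assuming the z3 Python package; option names may differ across Z3 versions) that encodes a two-line integer loop as constrained Horn clauses and asks the Spacer engine whether the error states are reachable:

```python
# Minimal sketch: verify that a counter starting at 0 and incremented by 2
# never reaches 1, using Z3's Spacer (IC3-style) fixedpoint engine.
from z3 import Fixedpoint, Function, IntSort, BoolSort, Int, And

fp = Fixedpoint()
fp.set(engine='spacer')          # IC3/PDR-style CHC solving
# fp.set('spacer.global', True)  # GSpacer-style global guidance; option name may vary by Z3 version

inv = Function('inv', IntSort(), BoolSort())
x, xp = Int('x'), Int('xp')
fp.register_relation(inv)
fp.declare_var(x, xp)

fp.rule(inv(x), x == 0)                         # init:  x = 0
fp.rule(inv(xp), And(inv(x), xp == x + 2))      # step:  x := x + 2
result = fp.query(And(inv(x), x == 1))          # error: is x = 1 reachable?
print(result)                                   # expect: unsat (the loop is safe)
```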

    Linear-Time Temporal Logic with Team Semantics: Expressivity and Complexity

    We study the expressivity and complexity of model checking of linear temporal logic with team semantics (TeamLTL). TeamLTL, despite being a purely modal logic, is capable of defining hyperproperties, i.e., properties which relate multiple execution traces. TeamLTL has been introduced quite recently and only a few results are known regarding its expressivity and its model checking problem. We relate the expressivity of TeamLTL to logics for hyperproperties obtained by extending LTL with trace and propositional quantifiers (HyperLTL and HyperQPTL). By doing so, we obtain a number of model checking results for TeamLTL and identify its undecidability frontier. In particular, we show decidability of model checking of the so-called left-flat fragment of any downward-closed TeamLTL extension. Moreover, we establish that the model checking problem of TeamLTL with Boolean disjunction and inclusion atoms is undecidable.
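
    As a point of reference, a typical hyperproperty that no single-trace LTL formula can express is observational determinism; in HyperLTL, one of the comparison logics, it reads as follows (the atomic propositions in and out are illustrative):

```latex
% Observational determinism (illustrative): any two traces that agree on the
% low-observable input must agree on the low-observable output at every step.
\[
  \forall \pi.\, \forall \pi'.\;
  (\mathit{in}_{\pi} \leftrightarrow \mathit{in}_{\pi'})
  \rightarrow \mathbf{G}\, (\mathit{out}_{\pi} \leftrightarrow \mathit{out}_{\pi'})
\]
```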

    Logical and deep learning methods for temporal reasoning

    In this thesis, we study logical and deep learning methods for temporal reasoning about reactive systems. In Part I, we determine decidability borders for the satisfiability and realizability problems of temporal hyperproperties. Temporal hyperproperties relate multiple computation traces to each other and are expressed in a temporal hyperlogic. In particular, we identify decidable fragments of the highly expressive hyperlogics HyperQPTL and HyperCTL*. As an application, we elaborate on an enforcement mechanism for temporal hyperproperties. We study explicit enforcement algorithms for specifications given as formulas in universally quantified HyperLTL. In Part II, we train a (deep) neural network on the trace generation and realizability problems of linear-time temporal logic (LTL). We consider a method to generate large amounts of additional training data from practical specification patterns. The training data is generated with classical solvers, which provide one of many possible solutions to each formula. We demonstrate that it is sufficient to train on those particular solutions for the neural network to generalize to the semantics of the logic. The neural network can predict solutions even for formulas from benchmarks in the literature on which the classical solver timed out. Additionally, we show that it solves a significant portion of problems from the annual synthesis competition (SYNTCOMP) and even out-of-distribution examples from a recent case study.
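
    The data-generation loop can be pictured as follows; this is a hedged sketch only, and the names PATTERNS, generate_dataset, and the classical_solver callable are hypothetical placeholders rather than the thesis' actual tooling:

```python
# Illustrative sketch of generating (formula, trace) training pairs from
# LTL specification patterns; all names here are hypothetical placeholders.
import random

PATTERNS = [
    "G ({p} -> F {q})",   # response pattern
    "({p}) U ({q})",      # until pattern
    "G F {p}",            # recurrence pattern
]
PROPS = ["a", "b", "c", "d"]

def generate_dataset(classical_solver, n):
    """classical_solver: callable mapping an LTL formula (string) to one
    satisfying trace (e.g. a lasso u.v^omega), or None on unsat/timeout.
    Returns n pairs (formula, trace) to train a sequence model on."""
    data = []
    while len(data) < n:
        p, q = random.sample(PROPS, 2)
        formula = random.choice(PATTERNS).format(p=p, q=q)
        trace = classical_solver(formula)   # one of many possible solutions
        if trace is not None:
            data.append((formula, trace))
    return data
```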

    A Unified Framework for Probabilistic Verification of AI Systems via Weighted Model Integration

    The probabilistic formal verification (PFV) of AI systems is in its infancy. So far, approaches have been limited to ad-hoc algorithms for specific classes of models and/or properties. We propose a unifying framework for the PFV of AI systems based on Weighted Model Integration (WMI), which allows the problem to be framed in very general terms. Crucially, this reduction enables the verification of many properties of interest, like fairness, robustness, or monotonicity, over a wide range of machine learning models, without making strong distributional assumptions. We support the generality of the approach by solving multiple verification tasks with a single, off-the-shelf WMI solver, then discuss the scalability challenges and research directions related to this promising framework.
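
    In outline, the reduction works as follows (the WMI definition below is standard; the ratio form of a verification query, e.g. the probability that a model violates a constraint, is illustrative): given an SMT(LRA) formula Δ over Boolean and real variables and a weight function w, WMI sums over the satisfying Boolean assignments and integrates w over the induced real regions, and a probabilistic property becomes a ratio of two WMI computations.

```latex
% Standard WMI definition (sum over satisfying Boolean assignments \mu of the
% propositional abstraction, integral over the induced LRA region), and the
% illustrative reduction of a probability query to a ratio of WMI calls.
\[
  \mathrm{WMI}(\Delta, w) \;=\;
  \sum_{\mu \,\models\, \Delta}\;
  \int_{\mu^{\mathcal{LRA}}} w(x, \mu)\, \mathrm{d}x
  \qquad\qquad
  \Pr(\varphi \mid \Delta) \;=\;
  \frac{\mathrm{WMI}(\Delta \wedge \varphi,\, w)}{\mathrm{WMI}(\Delta,\, w)}
\]
```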

    Leveraging Static Analysis: An IDE for RTLola

    Runtime monitoring is an essential part of guaranteeing the safety of cyber-physical systems. Recently, runtime monitoring frameworks based on formal specification languages have gained momentum. These languages provide valuable abstractions for specifying the behavior of a system. Yet, writing specifications remains challenging as, among other things, the specifier has to keep track of the timing behavior of streams. This paper presents the RTLola Playground, a browser-based development environment for the stream-based runtime monitoring framework RTLola. It features new methods to explore the static analysis results of RTLola, leveraging the advantages of such a formal language to support the developer in writing and understanding specifications. Specifications are executed locally in the browser, plotting the resulting stream values and allowing for intuitive testing. Step-wise execution based on user-provided system traces enables the debugging of identified errors.

    Causality-based Neural Network Repair

    Neural networks have achieved notable success in a wide range of applications. Their widespread adoption also raises concerns about their dependability and reliability. Similar to traditional decision-making programs, neural networks can have defects that need to be repaired. The defects may cause unsafe behaviors, raise security concerns, or have unjust societal impacts. In this work, we address the problem of repairing a neural network for desirable properties such as fairness and the absence of backdoors. The goal is to construct a neural network that satisfies the property by (minimally) adjusting the given neural network's parameters (i.e., weights). Specifically, we propose CARE (CAusality-based REpair), a causality-based neural network repair technique that 1) performs causality-based fault localization to identify the 'guilty' neurons and 2) optimizes the parameters of the identified neurons to reduce the misbehavior. We have empirically evaluated CARE on various tasks such as backdoor removal and neural network repair for fairness and safety properties. Our experimental results show that CARE is able to repair all neural networks efficiently and effectively. For fairness repair tasks, CARE successfully improves fairness by 61.91% on average. For backdoor removal tasks, CARE reduces the attack success rate from over 98% to less than 1%. For safety property repair tasks, CARE reduces the property violation rate to less than 1%. Results also show that, thanks to the causality-based fault localization, CARE's repair focuses on the misbehavior and preserves the accuracy of the neural networks.
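
    A hedged sketch of this two-step recipe (not the authors' implementation of CARE): PyTorch is used for illustration, and model, layer, inputs, and violation_rate (assumed here to be a differentiable surrogate of the property-violation rate) are placeholders.

```python
# Sketch: (1) causality-based fault localization via interventions on neuron
# activations, (2) fine-tuning only the parameters of the 'guilty' neurons.
import torch

def causal_effect(model, layer, neuron, inputs, violation_rate, value=0.0):
    """Do-style intervention: clamp one hidden neuron's activation and
    measure how much the property-violation rate changes."""
    def hook(_module, _inp, output):
        patched = output.clone()
        patched[:, neuron] = value          # intervention: do(activation = value)
        return patched                       # replaces the layer's output
    with torch.no_grad():
        baseline = violation_rate(model(inputs))
        handle = layer.register_forward_hook(hook)
        intervened = violation_rate(model(inputs))
        handle.remove()
    return (baseline - intervened).abs().item()

def repair(model, layer, inputs, violation_rate, k=5, steps=100, lr=1e-3):
    # 1) Fault localization: rank the layer's neurons by estimated causal effect.
    width = layer.out_features
    scores = [causal_effect(model, layer, i, inputs, violation_rate)
              for i in range(width)]
    guilty = sorted(range(width), key=lambda i: scores[i], reverse=True)[:k]

    # 2) Adjust only the parameters feeding the identified neurons,
    #    minimizing the (differentiable surrogate of the) violation rate.
    mask = torch.zeros(width, dtype=torch.bool)
    mask[guilty] = True
    opt = torch.optim.Adam([layer.weight, layer.bias], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = violation_rate(model(inputs))
        loss.backward()
        layer.weight.grad[~mask] = 0.0      # keep non-guilty rows untouched
        layer.bias.grad[~mask] = 0.0
        opt.step()
    return guilty
```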

    Regular Methods for Operator Precedence Languages

    The operator precedence languages (OPLs) represent the largest known subclass of the context-free languages which enjoys all desirable closure and decidability properties. This includes the decidability of language inclusion, which is the ultimate verification problem. Operator precedence grammars, automata, and logics have been investigated and used, for example, to verify programs with arithmetic expressions and exceptions (both of which are deterministic pushdown but lie outside the scope of the visibly pushdown languages). In this paper, we complete the picture and give, for the first time, an algebraic characterization of the class of OPLs in the form of a syntactic congruence that has finitely many equivalence classes exactly for the operator precedence languages. This is a generalization of the celebrated Myhill-Nerode theorem for the regular languages to OPLs. As one of the consequences, we show that universality and language inclusion for nondeterministic operator precedence automata can be solved by an antichain algorithm. Antichain algorithms avoid explicit determinization and complementation through a subset construction by leveraging a quasi-order on words, which allows pruning the search space for counterexample words without sacrificing completeness. Antichain algorithms can be implemented symbolically, and these implementations are today the best-performing algorithms in practice for the inclusion of finite automata. We give a generic construction of the quasi-order needed for antichain algorithms from a finite syntactic congruence. This yields the first antichain algorithm for OPLs, an algorithm that solves the ExpTime-hard language inclusion problem for OPLs in exponential time.
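
    For intuition, the classic antichain-style inclusion check for plain nondeterministic finite automata looks as follows (an illustrative Python sketch of the generic technique; the paper's syntactic congruence supplies the quasi-order that makes the same scheme work for operator precedence automata):

```python
# Antichain-based check of L(A) ⊆ L(B) for NFAs: explore pairs of an A-state
# and the set of B-states reachable on the same word, keeping only pairs with
# subset-minimal B-sets (the antichain) to prune the search space.
from collections import deque

def included(A, B, alphabet):
    """A, B: NFAs given as (initial_states, final_states, delta), where delta
    maps (state, letter) to an iterable of successor states.
    Returns True iff L(A) is a subset of L(B)."""
    a_init, a_fin, a_delta = A
    b_init, b_fin, b_delta = B
    a_fin, b_fin = set(a_fin), frozenset(b_fin)

    def b_post(bset, letter):
        return frozenset(q for p in bset for q in b_delta.get((p, letter), ()))

    start = [(p, frozenset(b_init)) for p in a_init]
    antichain = set(start)
    work = deque(start)
    while work:
        p, bset = work.popleft()
        if p in a_fin and not (bset & b_fin):
            return False                      # word accepted by A but not by B
        for letter in alphabet:
            nxt = b_post(bset, letter)
            for q in a_delta.get((p, letter), ()):
                # Skip successors subsumed by a stored pair with a smaller B-set.
                if any(s == q and stored <= nxt for s, stored in antichain):
                    continue
                # Drop stored pairs that the new one subsumes, then keep it.
                antichain = {(s, st) for (s, st) in antichain
                             if not (s == q and nxt <= st)}
                antichain.add((q, nxt))
                work.append((q, nxt))
    return True

# Tiny usage example: L(A) = {"a"} and L(B) = {"a", "b"}, so inclusion holds.
A = ({0}, {1}, {(0, "a"): {1}})
B = ({0}, {1}, {(0, "a"): {1}, (0, "b"): {1}})
print(included(A, B, alphabet="ab"))   # True
```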