
    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so that every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design---especially the software-defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking. (Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials)
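
    As a flavor of the properties such network verification targets, the following minimal sketch (our illustration, not a technique from the survey; the topology and forwarding table are invented) checks a static forwarding table for loops and black holes by exhaustively following next hops:

        # Minimal sketch: verify that a static forwarding table has no loops
        # and no black holes for a given destination, by following next-hop
        # pointers from every source. The table below is an invented toy.

        def check_forwarding(table, nodes, dst):
            """For every source, follow next hops toward dst. Report a loop
            if a node repeats, a black hole if no table entry exists."""
            problems = []
            for src in nodes:
                seen, node = set(), src
                while node != dst:
                    if node in seen:
                        problems.append(f"loop reachable from {src} at {node}")
                        break
                    seen.add(node)
                    nxt = table.get((node, dst))
                    if nxt is None:
                        problems.append(f"black hole at {node} from {src}")
                        break
                    node = nxt
            return problems

        # A -> B -> C (dst) is fine; D is misconfigured to point to itself.
        table = {("A", "C"): "B", ("B", "C"): "C", ("D", "C"): "D"}
        print(check_forwarding(table, ["A", "B", "D"], "C"))
        # ['loop reachable from D at D']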

    Discovering Classes of Strongly Equivalent Logic Programs

    In this paper we apply a computer-aided theorem discovery technique to discover theorems about strongly equivalent logic programs under the answer set semantics. Our discovered theorems capture new classes of strongly equivalent logic programs that can lead to new program simplification rules preserving strong equivalence. Specifically, with the help of computers, we discovered exact conditions that capture the strong equivalence between a rule and the empty set, between two rules, between two rules and one of the two rules, between two rules and another rule, and between three rules and two of the three rules.
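
    The underlying test can be made concrete. By a classical result of Lifschitz, Pearce, and Valverde, two programs are strongly equivalent exactly when they have the same models in the logic of here-and-there, which for small propositional programs can be checked by brute force. The sketch below is our illustration of that check, not the authors' discovery system; it confirms, for instance, that the tautological rule p :- p is strongly equivalent to the empty program:

        from itertools import combinations

        # Brute-force strong-equivalence check for small propositional
        # programs. A rule is (head, positive_body, negative_body); head None
        # encodes a constraint. (H, T) with H <= T is an HT-model of P iff
        # T classically satisfies P and H classically satisfies the
        # Gelfond-Lifschitz reduct P^T; strong equivalence = same HT-models.

        def powerset(s):
            s = list(s)
            return [frozenset(c) for r in range(len(s) + 1)
                    for c in combinations(s, r)]

        def classical_sat(model, rules):
            # A rule "fires" when its positive body is inside the model and
            # its negative body is disjoint from it; the head must then hold.
            return all(head is not None and head in model
                       for head, pos, neg in rules
                       if pos <= model and not (neg & model))

        def reduct(rules, T):
            # Keep rules whose negative body misses T; drop the negation.
            return [(h, pos, frozenset()) for h, pos, neg in rules
                    if not (neg & T)]

        def ht_models(rules, atoms):
            return {(H, T) for T in powerset(atoms) for H in powerset(T)
                    if classical_sat(T, rules)
                    and classical_sat(H, reduct(rules, T))}

        def strongly_equivalent(P, Q, atoms):
            return ht_models(P, atoms) == ht_models(Q, atoms)

        # The rule  p :- p  is strongly equivalent to the empty program:
        P = [("p", frozenset({"p"}), frozenset())]
        print(strongly_equivalent(P, [], {"p"}))  # True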

    Automated Theorem Proving for General Game Playing

    While automated game playing systems like Deep Blue perform excellently within their domain, handling a different game or even a slight change of rules is impossible without intervention by the programmer. Considered a great challenge for Artificial Intelligence, General Game Playing is concerned with the development of techniques that enable computer programs to play arbitrary, possibly unknown n-player games given nothing but the game rules in a tailor-made description language. A key to success in this endeavour is the ability to reliably extract hidden game-specific features from a given game description automatically. An informed general game player can efficiently play a game by exploiting structural game properties to choose the currently most appropriate algorithm, to construct a suitable heuristic, or to apply techniques that reduce the search space. In addition, an automated method for property extraction can provide valuable assistance in the discovery of specification bugs during game design by providing information about the mechanics of the currently specified game description. The recent extension of the description language to games with incomplete information and elements of chance further induces the need for the detection of game properties involving player knowledge at several stages of the game. In this thesis, we develop a formal proof method for the automatic acquisition of rich game-specific invariance properties. To this end, we first introduce a simple yet expressive property description language to address knowledge-free game properties which may involve arbitrary finite sequences of successive game states. We specify a semantics based on state transition systems over the Game Description Language, and develop a provably correct formal theory which allows us to show the validity of game properties with respect to their semantics across all reachable game states. Our proof theory does not require visiting every single reachable state. Instead, it applies an induction principle on the game rules based on the generation of answer set programs, allowing any off-the-shelf answer set solver to be applied to practically verify invariance properties even in complex games whose state space cannot be fully explored. To account for the recent extension of the description language to games with incomplete information and elements of chance, we extend our induction method, in a provably correct manner, to properties involving player knowledge. With an extensive evaluation we show its practical applicability even in complex games.
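
    The induction principle at the core of the method can be illustrated independently of the Game Description Language and of answer set programming: an invariant is established by showing that it holds in every initial state and is preserved by every legal move out of any state satisfying it, with no enumeration of the reachable state space. The following toy sketch (our illustration; the game and property are invented) makes this concrete on an explicit transition system:

        # Illustrative induction principle for invariance proofs: check the
        # base case on initial states and the step case on *all* states
        # satisfying the property, reachable or not. No reachability
        # analysis is needed; the counting game below is an invented toy.

        def prove_invariant(states, initial, successors, prop):
            # Base case: the property holds in every initial state.
            base = all(prop(s) for s in initial)
            # Step case: every move out of any state satisfying the
            # property leads to a state satisfying it again.
            step = all(prop(t)
                       for s in states if prop(s)
                       for t in successors(s))
            return base and step  # True implies prop holds in all reachable states

        # Toy game: a counter starting at 0 that may only ever step by 2.
        states = range(10)
        succ = lambda s: [s + 2] if s + 2 < 10 else []
        print(prove_invariant(states, [0], succ, lambda s: s % 2 == 0))  # True

    As in the thesis, such a check is sound but incomplete: a true invariant that is not inductive can fail the step case.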

    Learning Temporal Properties from Data Streams

    Verifying that systems behave as expected is a cornerstone of computing. In formal verification approaches, engineers capture their intentions, or specifications, mathematically, often using logic. The verification task is then to confirm that the system satisfies its specifications. In a formal setting this endeavor typically involves the construction of mathematical proofs, which are either constructed automatically, as in the case of so-called model-checking techniques, or by humans with machine assistance, as in the case of theorem-proving-based methodologies. In practice, formal verification faces a number of obstacles. One involves the construction of formal specifications in the first place. Another is the lack of availability of analyzable system artifacts (source code, executables) about which proofs can be constructed. In this dissertation we propose techniques for inferring formal specifications, in the form of Linear Temporal Logic (LTL) formulas, from system executions, and from models when these are available, as a means of addressing these concerns. We first illustrate how guided generation of test cases (i.e., guided exploration of the system's input/output space) can be leveraged to develop and then refine a hypothesis set of invariants that a system satisfies. This approach was successful in revealing a system property required in the code specification of an automotive application provided to us but missing from the implementation. An unexpected (undocumented) specification was also discovered through our analysis. The approach has since been applied by other researchers to several other automotive applications. Second, we develop techniques to mine properties from models of systems and/or their executions. In some cases a compact, finite-state representation of a system is available. In this scenario we employ a novel automaton-based approach to mine properties matching a user-specified template. In other cases such white-box knowledge is not available and we must work over executions of the system rather than the system itself. In this scenario we apply a novel variant of LTL using finite-sequence semantics to again mine properties based on a specified template. Lastly, we consider situations where standard formal properties are insufficient due to uncertainty or oversimplification. Oftentimes properties that are not satisfied 100% of the time can be very interesting during a system inspection or redesign. For example, natural noise manifesting in data streams from system executions, such as missing evidence and changing system behavior, could lead to significant properties being overlooked if strict "exactitude" were enforced. We explore the notion of "incomplete" properties, which we term partial invariants, by formulating a new "Noisy Linear Temporal Logic", an extension of standard LTL. We consider several representations of such noise and show decidability of the language emptiness problem for some of the variants.
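
    To make the template-based mining and the partial-invariant relaxation concrete, here is a minimal sketch (our illustration, not the dissertation's tooling; the traces and atoms are invented) that instantiates the response template G(a -> F b) over finite traces and keeps instances whose support across the traces meets a threshold; threshold 1.0 recovers classical mining, lower values tolerate noise:

        from itertools import permutations

        # Sketch of template-based property mining over finite traces. Each
        # trace is a list of sets of atomic propositions observed per step.
        # We instantiate the response template  G(a -> F b)  for all ordered
        # atom pairs and keep instances that hold on enough traces.

        def holds_response(trace, a, b):
            # Finite-sequence semantics: every position where a occurs must
            # be followed (at or after it) by a position where b occurs.
            return all(any(b in later for later in trace[i:])
                       for i, step in enumerate(trace) if a in step)

        def mine_response(traces, atoms, threshold=1.0):
            mined = []
            for a, b in permutations(atoms, 2):
                support = sum(holds_response(t, a, b)
                              for t in traces) / len(traces)
                if support >= threshold:
                    mined.append((f"G({a} -> F {b})", support))
            return mined

        traces = [
            [{"req"}, set(), {"ack"}],
            [{"req"}, {"ack"}, {"req"}, {"ack"}],
            [{"req"}, set()],      # noisy trace: the final ack is missing
        ]
        print(mine_response(traces, {"req", "ack"}, threshold=0.6))
        # [('G(req -> F ack)', 0.666...)] -- holds on 2 of the 3 traces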

    Technological roadmap on AI planning and scheduling

    At the beginning of the new century, Information Technologies have become basic and indispensable constituents of the production and preparation processes for all kinds of goods and services, and thereby largely influence both the working and private lives of nearly every citizen. This development will continue and grow further with the continually increasing use of the Internet in production, business, science, education, and everyday societal and private undertakings. Recent years have shown, however, that a dramatic enhancement of software capabilities is required when aiming to continuously provide advanced and competitive products and services in all these fast-developing sectors. This includes the development of intelligent systems – systems that are more autonomous, flexible, and robust than today's conventional software. Intelligent Planning and Scheduling is a key enabling technology for intelligent systems. It has been developed and matured over the last three decades and has successfully been employed for a variety of applications in commerce, industry, education, medicine, public transport, defense, and government. This document reviews the state of the art in key application and technical areas of Intelligent Planning and Scheduling. It identifies the most important research, development, and technology transfer efforts required in the coming 3 to 10 years and shows the way forward to meet these challenges in the short-, medium-, and longer-term future. The roadmap has been developed under the regime of PLANET – the European Network of Excellence in AI Planning. This network, established by the European Commission in 1998, is the co-ordinating framework for research, development, and technology transfer in the field of Intelligent Planning and Scheduling in Europe. A large number of people have contributed to this document, including the members of PLANET, non-European international experts, and a number of independent expert peer reviewers. All of them are acknowledged in a separate section of this document. Intelligent Planning and Scheduling is a far-reaching technology. Accepting the challenges and progressing along the directions pointed out in this roadmap will enable a new generation of intelligent application systems in a wide variety of industrial, commercial, public, and private sectors.

    Learning Tractable Probabilistic Models for Fault Localization

    In recent years, several probabilistic techniques have been applied to various debugging problems. However, most existing probabilistic debugging systems use relatively simple statistical models, and fail to generalize across multiple programs. In this work, we propose Tractable Fault Localization Models (TFLMs) that can be learned from data, and probabilistically infer the location of the bug. While most previous statistical debugging methods generalize over many executions of a single program, TFLMs are trained on a corpus of previously seen buggy programs, and learn to identify recurring patterns of bugs. Widely used fault localization techniques such as TARANTULA evaluate the suspiciousness of each line in isolation; in contrast, a TFLM defines a joint probability distribution over per-line bug indicator variables. Joint distributions with rich dependency structure are often computationally intractable; TFLMs avoid this by exploiting recent developments in tractable probabilistic models (specifically, Relational SPNs). Further, TFLMs can incorporate additional sources of information, including coverage-based features such as TARANTULA. We evaluate the fault localization performance of TFLMs that include TARANTULA scores as features in the probabilistic model. Our study shows that the learned TFLMs isolate bugs more effectively than previous statistical methods or than using TARANTULA directly. (Comment: Fifth International Workshop on Statistical Relational AI (StaR-AI 2015))
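
    For reference, the TARANTULA suspiciousness score used as a feature is a simple ratio over per-line coverage counts from passing and failing tests. A minimal computation looks as follows (the formula is the standard one; the coverage data is an invented toy):

        # Minimal computation of TARANTULA suspiciousness scores from a
        # coverage matrix. For each line:
        #   susp = (f / total_fail) / (p / total_pass + f / total_fail)
        # where p and f count passing/failing tests covering the line;
        # lines executed mostly by failing tests score near 1.

        def tarantula(coverage, outcomes):
            """coverage[test] = set of covered lines;
            outcomes[test] = 'pass' or 'fail'."""
            total_pass = sum(1 for o in outcomes.values() if o == "pass")
            total_fail = sum(1 for o in outcomes.values() if o == "fail")
            scores = {}
            for line in set().union(*coverage.values()):
                p = sum(1 for t, cov in coverage.items()
                        if line in cov and outcomes[t] == "pass") / total_pass
                f = sum(1 for t, cov in coverage.items()
                        if line in cov and outcomes[t] == "fail") / total_fail
                scores[line] = f / (p + f) if p + f > 0 else 0.0
            return scores

        coverage = {"t1": {1, 2}, "t2": {1, 3}, "t3": {2, 3}}
        outcomes = {"t1": "pass", "t2": "pass", "t3": "fail"}
        print(tarantula(coverage, outcomes))
        # line 1 scores 0.0; lines 2 and 3 score ~0.67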

    Resolution-based methods for linear temporal reasoning

    The aim of this thesis is to explore the potential of resolution-based methods for linear temporal reasoning. At the abstract level, this means developing new algorithms for automated reasoning about properties of systems which evolve in time. More concretely, we will: 1) show how to adapt the superposition framework to proving theorems in propositional Linear Temporal Logic (LTL), 2) use a connection between superposition and the CDCL calculus of modern SAT solvers to come up with an efficient LTL prover, 3) specialize the previous to reachability properties and discover a close connection to Property Directed Reachability (PDR), an algorithm recently developed for model checking of hardware circuits, 4) further improve PDR by providing a new technique for enhancing the clause propagation phase of the algorithm, and 5) adapt PDR to automated planning by replacing the SAT solver inside with a planning-specific procedure. We implemented the proposed ideas and provide experimental results which demonstrate their practical potential on representative benchmark sets. Our system LS4 is shown to be the strongest LTL prover currently publicly available. The mentioned enhancement of PDR substantially improves the performance of our implementation of the algorithm for hardware model checking in the multi-property setting. It is expected that other implementations would benefit from it in an analogous way. Finally, our planner PDRplan has been compared with state-of-the-art planners on benchmarks from the International Planning Competition, with very promising results.
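
    The clause propagation phase that the thesis improves can be sketched abstractly: a clause learned in frame F_i is pushed to F_{i+1} when it is inductive relative to F_i, i.e. when every transition out of a state satisfying all of F_i leads to a state satisfying the clause. Real PDR discharges this check with a SAT solver; the toy sketch below (our illustration, not the thesis' implementation) brute-forces it over an explicit state space:

        # Toy sketch of PDR's clause propagation phase. A "clause" is here a
        # predicate over states; frames[i] is a list of such predicates
        # over-approximating the states reachable in at most i steps. A
        # clause is pushed forward when it is inductive relative to its
        # frame. A SAT solver performs this check in real PDR; we
        # brute-force it over a tiny explicit state space instead.

        def propagate(frames, states, transitions):
            for i in range(len(frames) - 1):
                for clause in frames[i]:
                    if clause in frames[i + 1]:
                        continue
                    relatively_inductive = all(
                        clause(t)
                        for s in states if all(c(s) for c in frames[i])
                        for t in transitions(s))
                    if relatively_inductive:
                        frames[i + 1].append(clause)
            return frames

        # Counter modulo 4 stepping by +1: "s < 4" is inductive and
        # propagates, "s != 2" is not (a predecessor of 2 survives frame 0).
        def lt4(s): return s < 4
        def ne2(s): return s != 2
        frames = [[lt4, ne2], []]
        propagate(frames, range(4), lambda s: [(s + 1) % 4])
        print([c.__name__ for c in frames[1]])  # ['lt4']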

    Formal Methods Specification and Analysis Guidebook for the Verification of Software and Computer Systems

    This guidebook, the second of a two-volume series, is intended to facilitate the transfer of formal methods to the avionics and aerospace community. The first volume concentrates on administrative and planning issues [NASA-95a], and the second volume focuses on the technical issues involved in applying formal methods to avionics and aerospace software systems. Hereafter, the term "guidebook" refers exclusively to the second volume of the series. The title of this second volume, A Practitioner's Companion, conveys its intent. The guidebook is written primarily for the nonexpert and requires little or no prior experience with formal methods techniques and tools. However, it does attempt to distill some of the more subtle ingredients in the productive application of formal methods. To the extent that it succeeds, those conversant with formal methods will also find the guidebook useful. The discussion is illustrated through the development of a realistic example, relevant fragments of which appear in each chapter. The guidebook focuses primarily on the use of formal methods for analysis of requirements and high-level design, the stages at which formal methods have been most productively applied. Although much of the discussion applies to low-level design and implementation, the guidebook does not discuss issues involved in the later life-cycle application of formal methods.