
    A Short Counterexample Property for Safety and Liveness Verification of Fault-tolerant Distributed Algorithms

    Distributed algorithms have many mission-critical applications ranging from embedded systems and replicated databases to cloud computing. Due to asynchronous communication, process faults, or network failures, these algorithms are difficult to design and verify. Many algorithms achieve fault tolerance by using threshold guards that, for instance, ensure that a process waits until it has received an acknowledgment from a majority of its peers. Consequently, domain-specific languages for fault-tolerant distributed systems offer language support for threshold guards. We introduce an automated method for model checking of safety and liveness of threshold-guarded distributed algorithms in systems where the number of processes and the fraction of faulty processes are parameters. Our method is based on a short counterexample property: if a distributed algorithm violates a temporal specification (in a fragment of LTL), then there is a counterexample whose length is bounded and independent of the parameters. We prove this property by (i) characterizing executions depending on the structure of the temporal formula, and (ii) using commutativity of transitions to accelerate and shorten executions. We extended the ByMC toolset (Byzantine Model Checker) with our technique, and verified liveness and safety of 10 prominent fault-tolerant distributed algorithms, most of which were out of reach for existing techniques. Comment: 16 pages, 11 pages appendix
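
    As a toy illustration of a threshold guard (plain Python, not ByMC's input language; the protocol, function names, and the n - t bound are illustrative assumptions): a process stays in its waiting state until it has received messages from at least n - t of the n processes, where t bounds the number of faults.

```python
# Illustrative threshold-guarded transition (hypothetical protocol, not ByMC).
# The guard is parameterized in n (number of processes) and t (fault bound).

def threshold_guard_enabled(received: int, n: int, t: int) -> bool:
    """Guard of the form 'received >= n - t'."""
    return received >= n - t

def step(state: str, received: int, n: int, t: int) -> str:
    """One local transition: fire only once the threshold guard holds."""
    if state == "waiting" and threshold_guard_enabled(received, n, t):
        return "accepted"
    return state

if __name__ == "__main__":
    n, t = 4, 1  # e.g. the usual Byzantine resilience condition n > 3t
    for received in range(n + 1):
        print(received, step("waiting", received, n, t))
```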

    Learning to Prove Theorems via Interacting with Proof Assistants

    Humans prove theorems by relying on substantial high-level reasoning and problem-specific insights. Proof assistants offer a formalism that resembles human mathematical reasoning, representing theorems in higher-order logic and proofs as high-level tactics. However, human experts have to construct proofs manually by entering tactics into the proof assistant. In this paper, we study the problem of using machine learning to automate the interaction with proof assistants. We construct CoqGym, a large-scale dataset and learning environment containing 71K human-written proofs from 123 projects developed with the Coq proof assistant. We develop ASTactic, a deep learning-based model that generates tactics as programs in the form of abstract syntax trees (ASTs). Experiments show that ASTactic trained on CoqGym can generate effective tactics and can be used to prove new theorems not previously provable by automated methods. Code is available at https://github.com/princeton-vl/CoqGym. Comment: Accepted to ICML 2019
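
    As a rough sketch of the "tactics as abstract syntax trees" idea (a hypothetical Python encoding, not CoqGym's or ASTactic's actual data format), a tactic script such as induction n; simpl; auto can be represented as, and re-rendered from, a small tree of constructors:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical tactic AST, loosely in the spirit of "tactics as programs";
# the constructor names are illustrative, not CoqGym's schema.

@dataclass
class Tactic:
    name: str                                            # e.g. "induction", "simpl", "auto"
    args: List[str] = field(default_factory=list)        # tactic arguments
    then: List["Tactic"] = field(default_factory=list)   # sequencing with ';'

    def render(self) -> str:
        """Pretty-print the tree back into concrete tactic syntax."""
        head = self.name + "".join(" " + a for a in self.args)
        if self.then:
            return head + "; " + "; ".join(t.render() for t in self.then)
        return head

if __name__ == "__main__":
    t = Tactic("induction", ["n"], [Tactic("simpl"), Tactic("auto")])
    print(t.render())  # induction n; simpl; auto
```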

    Bisimulations and Logical Characterizations on Continuous-time Markov Decision Processes

    In this paper we study strong and weak bisimulation equivalences for continuous-time Markov decision processes (CTMDPs) and the logical characterizations of these relations with respect to the continuous-time stochastic logic (CSL). For strong bisimulation, it is well known that it is strictly finer than CSL equivalence. In this paper we propose strong and weak bisimulations for CTMDPs and show that, for a subclass of CTMDPs, strong and weak bisimulations are both sound and complete with respect to the equivalences induced by CSL and by the sub-logic of CSL without the next operator, respectively. We then consider a standard extension of CSL, and show that it and its sub-logic without X can be fully characterized by strong and weak bisimulations, respectively, over arbitrary CTMDPs. Comment: The conference version of this paper was published at VMCAI 201
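
    For intuition, the following minimal Python sketch performs partition refinement for strong bisimulation on a plain CTMC, i.e., the nondeterminism-free special case of a CTMDP (the rate matrix and labels are made up, and the code does not reflect the paper's constructions): states are merged exactly when they agree on their label and on the cumulative rate into every block of the current partition.

```python
from collections import defaultdict

# Partition-refinement sketch for strong bisimulation on a CTMC (the
# nondeterminism-free special case of a CTMDP). Labels and rates are made up.

def strong_bisimulation(states, labels, rates):
    """rates[s][t] is the transition rate from s to t (absent means 0)."""
    # Initial partition: group states by their atomic-proposition label.
    by_label = defaultdict(set)
    for s in states:
        by_label[labels[s]].add(s)
    partition = list(by_label.values())

    changed = True
    while changed:
        changed = False
        refined = []
        for block in partition:
            # Signature of a state: cumulative rate into each current block.
            signatures = defaultdict(set)
            for s in block:
                key = tuple(sum(rates[s].get(t, 0.0) for t in b) for b in partition)
                signatures[key].add(s)
            if len(signatures) > 1:
                changed = True
            refined.extend(signatures.values())
        partition = refined
    return partition

if __name__ == "__main__":
    states = ["s0", "s1", "s2"]
    labels = {"s0": "a", "s1": "a", "s2": "b"}
    rates = {"s0": {"s2": 2.0}, "s1": {"s2": 2.0}, "s2": {}}
    print(strong_bisimulation(states, labels, rates))  # s0 and s1 share a block
```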

    Model checking probabilistic and stochastic extensions of the pi-calculus

    We present an implementation of model checking for probabilistic and stochastic extensions of the pi-calculus, a process algebra which supports modelling of concurrency and mobility. Formal verification techniques for such extensions have clear applications in several domains, including mobile ad-hoc network protocols, probabilistic security protocols and biological pathways. Despite this, no implementation of automated verification exists. Building upon the pi-calculus model checker MMC, we first show an automated procedure for constructing the underlying semantic model of a probabilistic or stochastic pi-calculus process. This can then be verified using existing probabilistic model checkers such as PRISM. Secondly, we demonstrate how, for processes of a specific structure, a more efficient compositional approach can be applied, which uses our extension of MMC on each parallel component of the system and then translates the results into a high-level modular description for the PRISM tool. The feasibility of our techniques is demonstrated through a number of case studies from the pi-calculus literature.
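
    The first step described above, constructing the underlying semantic model of a process by exhaustive exploration, can be sketched generically as follows (plain Python on a made-up probabilistic process, not MMC or the pi-calculus); the resulting transition table is the kind of object a probabilistic model checker such as PRISM then analyses.

```python
from collections import deque

# Exhaustive-exploration sketch: build the reachable discrete-time Markov chain
# of a made-up probabilistic process (generic Python, not MMC output).

def successors(state: int) -> dict:
    """Hypothetical process: advance with probability 0.5, else reset; 2 is absorbing."""
    if state >= 2:
        return {state: 1.0}
    return {state + 1: 0.5, 0: 0.5}

def explore(initial: int) -> dict:
    """Breadth-first construction of the reachable transition table."""
    transitions, seen, frontier = {}, {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        transitions[s] = successors(s)
        for t in transitions[s]:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return transitions

if __name__ == "__main__":
    for state, dist in explore(0).items():
        print(state, dist)  # this table is what a tool like PRISM would then verify
```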

    Towards verification of computation orchestration

    Recently, a promising programming model called Orc has been proposed to support a structured way of orchestrating distributed Web Services. Orc is intuitive because it offers concise constructors to manage concurrent communication, time-outs, priorities, failure of Web Services or communication and so forth. The semantics of Orc is precisely defined. However, there is no automatic verification tool available to verify critical properties against Orc programs. Our goal is to verify orchestration programs (written in the Orc language) which invoke web services to achieve certain goals. To investigate this problem and build useful tools, we explore two directions. Firstly, we define a Timed Automata semantics for the Orc language, which we prove is semantically equivalent to the operational semantics of Orc. Consequently, Timed Automata models are systematically constructed from Orc programs. The practical implication is that existing tool support for Timed Automata, e.g., Uppaal, can be used to simulate and model check Orc programs. An experimental tool has been implemented to automate this approach. Secondly, we encode the operational semantics of the Orc language in Constraint Logic Programming (CLP), which allows a systematic translation from Orc to CLP. Powerful constraint solvers like CLP(R) are then used to prove traditional safety properties and beyond, e.g., reachability, deadlock-freeness, lower or upper bounds of time intervals, etc. Counterexamples are generated when properties are not satisfied. Furthermore, stepwise execution traces can be generated automatically as simulation steps. The two different approaches give an insight into the verification problem of Web Service orchestration. The Timed Automata approach has its merits in visualized simulation and efficient verification supported by well-developed tools. On the other hand, the CLP approach gives better expressiveness in both modeling and verification. The two approaches complement each other, giving a complete solution for the simulation and verification of computation orchestration.
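
    One of the orchestration patterns mentioned above, calling a service with a time-out and falling back to an alternative, can be sketched in plain Python (this is not Orc syntax, and both "services" are stand-ins):

```python
import concurrent.futures
import time

# Orchestration pattern sketch: publish the primary service's answer if it
# arrives within the deadline, otherwise fall back. Plain Python, not Orc.

def slow_service() -> str:
    time.sleep(2.0)            # simulate a slow web service call
    return "primary answer"

def fallback() -> str:
    return "default answer"

def orchestrate(timeout_s: float) -> str:
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(slow_service)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return fallback()
    finally:
        pool.shutdown(wait=False)   # do not block on the abandoned call

if __name__ == "__main__":
    print(orchestrate(timeout_s=0.5))   # prints the fallback; raise the deadline to get the primary answer
```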

    A Linear-Time Nominal μ-Calculus with Name Allocation

    Logics and automata models for languages over infinite alphabets, such as Freeze LTL and register automata, serve the verification of processes or documents with data. They relate tightly to formalisms over nominal sets, such as nondeterministic orbit-finite automata (NOFAs), where names play the role of data. Reasoning problems in such formalisms tend to be computationally hard. Name-binding nominal automata models such as regular nondeterministic nominal automata (RNNAs) have been shown to be computationally more tractable. In the present paper, we introduce a linear-time fixpoint logic Bar-μTL for finite words over an infinite alphabet, which features full negation and freeze quantification via name binding. We show by a nontrivial reduction to extended regular nondeterministic nominal automata that even though Bar-μTL allows unrestricted nondeterminism and unboundedly many registers, model checking Bar-μTL over RNNAs and satisfiability checking both have elementary complexity. For example, model checking is in 2ExpSpace, more precisely in parametrized ExpSpace, effectively with the number of registers as the parameter.
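
    For readers unfamiliar with automata over infinite alphabets, here is a tiny single-register automaton sketched in plain Python (only loosely related to RNNAs and Bar-μTL): it binds the first data value it sees and accepts exactly when that value occurs again later in the word.

```python
# A single-register automaton over an infinite alphabet, sketched in plain
# Python: it stores the first data value in its register and accepts iff
# that value occurs again later.

def accepts(word) -> bool:
    register, state = None, "init"
    for letter in word:
        if state == "init":
            register, state = letter, "tracking"   # bind the fresh name
        elif state == "tracking" and letter == register:
            state = "accept"
    return state == "accept"

if __name__ == "__main__":
    print(accepts(["a", "b", "a"]))  # True: the stored name recurs
    print(accepts(["a", "b", "c"]))  # False: all names stay distinct
```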

    Parameterized Reachability Graph for Software Model Checking Based on PDNet

    Model checking is an automated software verification technique. However, the complex execution of concurrent software systems and the exhaustive search of their state spaces mean that model checking is limited by the state-explosion problem in real applications. Uncertain input information in concurrent software systems (called system parameterization) further exacerbates the state-explosion problem. To address the problem that reachability graphs of Petri nets are difficult to construct and cannot be explored exhaustively under system parameterization, this paper introduces parameterized variables into the program dependence net (PDNet, a concurrent program model). It then proposes a parameterized reachability-graph generation algorithm, including decision algorithms for verifying the properties. We implement LTL-x verification based on parameterized reachability graphs and solve the difficulty of constructing reachability graphs caused by uncertain inputs.
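
    For context, ordinary (non-parameterized) reachability-graph construction for a place/transition Petri net is a breadth-first exploration of markings, as in the following sketch (the net is made up, and the code does not implement PDNet or its parameterized variables):

```python
from collections import deque

# Ordinary reachability-graph construction for a tiny place/transition Petri
# net. A marking maps places to token counts; each transition is a pair of
# pre- and post-multisets over places.

TRANSITIONS = {
    "t1": ({"p0": 1}, {"p1": 1}),
    "t2": ({"p1": 1}, {"p0": 1}),
}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    new = dict(marking)
    for p, n in pre.items():
        new[p] -= n
    for p, n in post.items():
        new[p] = new.get(p, 0) + n
    return new

def freeze(marking):
    """Hashable canonical form of a marking (drop empty places)."""
    return tuple(sorted((p, n) for p, n in marking.items() if n > 0))

def reachability_graph(initial):
    """Breadth-first exploration of reachable markings (may diverge for unbounded nets)."""
    seen, frontier, edges = {freeze(initial)}, deque([initial]), []
    while frontier:
        m = frontier.popleft()
        for name, (pre, post) in TRANSITIONS.items():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                edges.append((freeze(m), name, freeze(m2)))
                if freeze(m2) not in seen:
                    seen.add(freeze(m2))
                    frontier.append(m2)
    return edges

if __name__ == "__main__":
    for edge in reachability_graph({"p0": 1}):
        print(edge)
```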

    Proving Differential Privacy with Shadow Execution

    Recent work on formal verification of differential privacy shows a trend toward usability and expressiveness -- generating a correctness proof of a sophisticated algorithm while minimizing the annotation burden on programmers. Sometimes, combining those two requires substantial changes to program logics: one recent paper is able to verify Report Noisy Max automatically, but it involves a complex verification system using customized program logics and verifiers. In this paper, we propose a new proof technique, called shadow execution, and embed it into a language called ShadowDP. ShadowDP uses shadow execution to generate proofs of differential privacy with very few programmer annotations and without relying on customized logics and verifiers. In addition to verifying Report Noisy Max, we show that it can verify a new variant of Sparse Vector that reports the gap between some noisy query answers and the noisy threshold. Moreover, ShadowDP reduces the complexity of verification: for all of the algorithms we have evaluated, type checking and verification in total takes at most 3 seconds, while prior work takes minutes on the same algorithms. Comment: 23 pages, 12 figures, PLDI'19
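
    The Report Noisy Max mechanism mentioned above has a standard textbook formulation that can be sketched in a few lines of Python (this is the classical mechanism itself, not ShadowDP's verified or type-checked code):

```python
import random

# Classical Report Noisy Max: add Laplace(2/epsilon) noise to each
# counting-query answer and release only the index of the maximum,
# which is epsilon-differentially private.

def laplace(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def report_noisy_max(query_answers, epsilon: float) -> int:
    noisy = [q + laplace(2.0 / epsilon) for q in query_answers]
    return max(range(len(noisy)), key=lambda i: noisy[i])

if __name__ == "__main__":
    random.seed(0)
    print(report_noisy_max([10, 7, 12, 3], epsilon=1.0))
```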