
    A Resolution Prover for Coalition Logic

    We present a prototype tool for automated reasoning for Coalition Logic, a non-normal modal logic that can be used for reasoning about cooperative agency. The theorem prover CLProver is based on recent work on a resolution-based calculus for Coalition Logic that operates on coalition problems, a normal form for Coalition Logic. We provide an overview of coalition problems and of the resolution-based calculus for Coalition Logic. We then give details of the implementation of CLProver and present the results of a comparison with an existing tableau-based solver.
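    As background (added here for illustration, not part of the abstract above), Coalition Logic in Pauly's standard presentation has the following syntax, where [C]φ reads "coalition C has a joint action to ensure φ in the next state". The coalition-problem normal form that CLProver actually operates on is defined in the paper itself.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Grammar of Coalition Logic over a set of agents {1,...,n} (background sketch only).
\[
  \varphi \;::=\; p \;\mid\; \neg\varphi \;\mid\; \varphi \land \varphi \;\mid\; [C]\varphi ,
  \qquad C \subseteq \{1,\dots,n\}
\]
% Example: agents 1 and 2 can jointly ensure "win", but agent 1 cannot do so alone.
\[
  [\{1,2\}]\,\mathit{win} \;\land\; \neg[\{1\}]\,\mathit{win}
\]
\end{document}
```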

    Clausal reasoning for branching-time logics

    Computation Tree Logic (CTL) is a branching-time temporal logic whose underlying model of time is a choice of possibilities branching into the future. It has been used in a wide variety of areas in Computer Science and Artificial Intelligence, such as temporal databases, hardware verification, program reasoning, multi-agent systems, and concurrent and distributed systems. In this thesis, we firstly present a refined clausal resolution calculus R^{≻,S}_CTL for CTL. The calculus requires a polynomial-time computable transformation of an arbitrary CTL formula to an equisatisfiable clausal normal form formulated in an extension of CTL with indexed existential path quantifiers. The calculus itself consists of eight step resolution rules, two eventuality resolution rules and two rewrite rules, which can be used as the basis for an EXPTIME decision procedure for the satisfiability problem of CTL. We give a formal semantics for the clausal normal form, establish that the clausal normal form transformation preserves satisfiability, provide proofs for the soundness and completeness of the calculus R^{≻,S}_CTL, and discuss the complexity of the decision procedure based on R^{≻,S}_CTL. As R^{≻,S}_CTL is based on the ideas underlying Bolotov's clausal resolution calculus for CTL, we compare our calculus R^{≻,S}_CTL with Bolotov's calculus in order to show that R^{≻,S}_CTL improves on it in many areas. In particular, our calculus is designed to allow first-order resolution techniques to emulate the resolution rules of R^{≻,S}_CTL, so that R^{≻,S}_CTL can be implemented by reusing any first-order resolution theorem prover.

    Secondly, we introduce CTL-RP, our implementation of the calculus R^{≻,S}_CTL. CTL-RP is the first implemented resolution-based theorem prover for CTL. The prover takes an arbitrary CTL formula as input and transforms it into a set of CTL formulae in clausal normal form. Furthermore, in order to use first-order techniques, formulae in clausal normal form are transformed into first-order formulae, except for those formulae related to eventualities, i.e. formulae containing the eventuality operator ◊. To implement the step resolution and rewrite rules of the calculus R^{≻,S}_CTL, we present an approach that uses first-order ordered resolution with selection to emulate the step resolution rules and the related proofs. This approach enables us to make use of a first-order theorem prover that implements first-order ordered resolution with selection in order to realise our calculus. Following this approach, CTL-RP utilises the first-order theorem prover SPASS to conduct resolution inferences for CTL and is implemented as a modification of SPASS. In particular, to implement the eventuality resolution rules, CTL-RP augments SPASS with an algorithm, called the loop search algorithm, for tackling eventualities in CTL. To study the performance of CTL-RP, we have compared it with a tableau-based theorem prover for CTL. The experiments show good performance of CTL-RP.

    Thirdly, we apply the approach used to develop R^{≻,S}_CTL to the development of a clausal resolution calculus for a fragment of Alternating-time Temporal Logic (ATL). ATL is a generalisation and extension of branching-time temporal logic in which the temporal operators are parameterised by sets of agents. Informally speaking, CTL formulae can be treated as ATL formulae with a single agent. Selective quantification over paths enables ATL to explicitly express coalition abilities, which naturally makes ATL a formalism for the specification and verification of open systems and game-like multi-agent systems. In this thesis, we focus on the Next-time fragment of ATL (XATL), which is closely related to Coalition Logic. The satisfiability problem of XATL has lower complexity than that of ATL, but there are still many applications in various strategic games and multi-agent systems that can be represented and reasoned about in XATL. We present a resolution calculus R_XATL for XATL to tackle its satisfiability problem. The calculus requires a polynomial-time computable transformation of an arbitrary XATL formula to an equisatisfiable clausal normal form. The calculus itself consists of a set of resolution rules and rewrite rules. We prove the soundness of the calculus and outline a completeness proof for R_XATL. We also intend to extend the calculus R_XATL to full ATL in the future.
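    For orientation, the clauses of the clausal normal form for CTL with indexed existential path quantifiers roughly take the shapes below (every clause is implicitly prefixed by "on all paths, always"). This LaTeX fragment is a reconstruction from the published presentation of the calculus, given as background only; the thesis defines the exact normal form.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of the clause shapes of the clausal normal form for CTL; l, l_j, l_k
% are literals and <ind> marks an indexed existential path quantifier.
\begin{align*}
  \mathbf{start} &\Rightarrow \textstyle\bigvee_k l_k && \text{(initial clause)}\\
  \mathbf{true}  &\Rightarrow \textstyle\bigvee_k l_k && \text{(global clause)}\\
  \textstyle\bigwedge_j l_j &\Rightarrow \mathsf{A}\bigcirc \textstyle\bigvee_k l_k && \text{(A-step clause)}\\
  \textstyle\bigwedge_j l_j &\Rightarrow \mathsf{E}\bigcirc \textstyle\bigvee_k l_k\,{}_{\langle\mathrm{ind}\rangle} && \text{(E-step clause)}\\
  \textstyle\bigwedge_j l_j &\Rightarrow \mathsf{A}\Diamond\, l && \text{(A-sometime clause)}\\
  \textstyle\bigwedge_j l_j &\Rightarrow \mathsf{E}\Diamond\, l\,{}_{\langle\mathrm{ind}\rangle} && \text{(E-sometime clause)}
\end{align*}
\end{document}
```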

    Betrayal, Distrust, and Rationality: Smart Counter-Collusion Contracts for Verifiable Cloud Computing

    Cloud computing has become an irreversible trend. Together comes the pressing need for verifiability, to assure the client of the correctness of computation outsourced to the cloud. Existing verifiable computation techniques all have a high overhead; if deployed in the clouds, they would render cloud computing more expensive than the on-premises counterpart. To achieve verifiability at a reasonable cost, we leverage game theory and propose a smart contract based solution. In a nutshell, a client lets two clouds compute the same task, and uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat. In the absence of collusion, verification of correctness can be done easily by crosschecking the results from the two clouds. We provide a formal analysis of the games induced by the contracts, and prove that the contracts will be effective under certain reasonable assumptions. By resorting to game theory and smart contracts, we are able to avoid heavy cryptographic protocols. The client only needs to pay two clouds to compute in the clear, and a small transaction fee to use the smart contracts. We also conducted a feasibility study that involves implementing the contracts in Solidity and running them on the official Ethereum network. Comment: Published in ACM CCS 2017; this is the full version with all appendices.
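    The basic crosschecking idea can be illustrated with a toy sketch (ours, not the paper's): the client runs the same task on two clouds and accepts only matching results. Every name below is illustrative; the actual scheme uses Ethereum smart contracts with deposits and payoffs that make collusion irrational, which this sketch does not model.

```python
# Toy sketch of crosschecking two clouds; all names are illustrative stand-ins.
from typing import Callable, Any


def outsource_with_crosscheck(task: Callable[[], Any],
                              cloud_a: Callable[[Callable[[], Any]], Any],
                              cloud_b: Callable[[Callable[[], Any]], Any]) -> Any:
    """Run the same task on two clouds and cross-check the results."""
    result_a = cloud_a(task)
    result_b = cloud_b(task)
    if result_a == result_b:
        # In the absence of collusion, agreement implies correctness.
        return result_a
    # Disagreement: in the paper this triggers on-chain dispute resolution;
    # here we simply recompute locally as a stand-in.
    return task()


if __name__ == "__main__":
    task = lambda: sum(i * i for i in range(10))
    honest = lambda t: t()   # an honest cloud just runs the task
    lazy = lambda t: 0       # a cheating cloud returns a bogus answer
    print(outsource_with_crosscheck(task, honest, honest))  # 285
    print(outsource_with_crosscheck(task, honest, lazy))    # 285 (local fallback)
```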

    Data Minimisation in Communication Protocols: A Formal Analysis Framework and Application to Identity Management

    With the growing amount of personal information exchanged over the Internet, privacy is becoming more and more of a concern for users. One of the key principles in protecting privacy is data minimisation. This principle requires that only the minimum amount of information necessary to accomplish a certain goal is collected and processed. "Privacy-enhancing" communication protocols have been proposed to guarantee data minimisation in a wide range of applications. However, there is currently no satisfactory way to assess and compare the privacy they offer in a precise way: existing analyses are either too informal and high-level, or specific to one particular system. In this work, we propose a general formal framework to analyse and compare communication protocols with respect to privacy by data minimisation. Privacy requirements are formalised independently of a particular protocol in terms of the knowledge of (coalitions of) actors in a three-layer model of personal information. These requirements are then verified automatically for particular protocols by computing this knowledge from a description of their communication. We validate our framework in an identity management (IdM) case study. As IdM systems are used more and more to satisfy the increasing need for reliable on-line identification and authentication, privacy is becoming an increasingly critical issue. We use our framework to analyse and compare four identity management systems. Finally, we discuss the completeness and (re)usability of the proposed framework.

    Baghera Assessment Project, designing an hybrid and emergent educational society

    Edited by Sophie Soury-Lavergne. Available at: http://www-leibniz.imag.fr/LesCahiers/2003/Cahier81/BAP_CahiersLaboLeibniz.PDF. Research report. The Baghera Assessment Project (BAP) has the objective to explore a new avenue for the design of e-Learning environments. The key features of BAP's approach are: (i) the concept of emergence in multi-agent systems as a modelling framework, and (ii) the shaping of a new theoretical framework for modelling student knowledge, namely the cK¢ model. This new model has been constructed, based on current research in cognitive science and education, to bridge research on education and research on the design of learning environments.

    Completeness of Flat Coalgebraic Fixpoint Logics

    Modal fixpoint logics traditionally play a central role in computer science, in particular in artificial intelligence and concurrency. The mu-calculus and its relatives are among the most expressive logics of this type. However, popular fixpoint logics tend to trade expressivity for simplicity and readability, and in fact often live within the single variable fragment of the mu-calculus. The family of such flat fixpoint logics includes, e.g., LTL, CTL, and the logic of common knowledge. Extending this notion to the generic semantic framework of coalgebraic logic enables covering a wide range of logics beyond the standard mu-calculus including, e.g., flat fragments of the graded mu-calculus and the alternating-time mu-calculus (such as alternating-time temporal logic ATL), as well as probabilistic and monotone fixpoint logics. We give a generic proof of completeness of the Kozen-Park axiomatization for such flat coalgebraic fixpoint logics. Comment: Short version appeared in Proc. 21st International Conference on Concurrency Theory, CONCUR 2010, Vol. 6269 of Lecture Notes in Computer Science, Springer, 2010, pp. 524-53
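    As a concrete instance of a flat (single-variable) fixpoint and its Kozen-Park style axiomatization, consider the CTL operator EF. This is standard background added for illustration; it does not reproduce the paper's generic coalgebraic completeness argument.

```latex
\documentclass{article}
\usepackage{amsmath}
\newcommand{\EX}{\mathsf{EX}}
\newcommand{\EF}{\mathsf{EF}}
\begin{document}
% EF p ("on some path, eventually p") is the least fixpoint of gamma(x) = p \/ EX x:
\[
  \EF\, p \;=\; \mu x.\, (p \lor \EX\, x)
\]
% Prefixpoint (unfolding) axiom and induction (Park) rule:
\[
  p \lor \EX\,\EF\, p \;\to\; \EF\, p,
  \qquad
  \frac{p \lor \EX\,\psi \;\to\; \psi}{\EF\, p \;\to\; \psi}
\]
\end{document}
```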

    Proof Complexity of Modal Resolution Systems

    In this thesis we initiate the study of the proof complexity of modal resolution systems. To our knowledge there is no previous work on the proof complexity of such systems. This is in sharp contrast to the situation for propositional logic, where resolution is the most studied proof system, in part due to its close links with satisfiability solving. We focus primarily on the proof complexity of two recently proposed modal resolution systems of Nalon, Hustadt and Dixon, one of which forms the basis of an existing modal theorem prover. We begin by showing that not only are these two proof systems equivalent in terms of their proof complexity, they are also equivalent to a number of natural refinements. We further compare the proof complexity of these systems with an older, more complicated modal resolution system of Enjalbert and Fariñas del Cerro, showing that this older system p-simulates the more streamlined calculi. We then investigate lower bound techniques for modal resolution. Here we see that whilst some propositional lower bound techniques (i.e. feasible interpolation) can be lifted to the modal setting with only minor modifications, other propositional techniques (i.e. size-width) fail completely. We further develop a new lower bound technique for modal resolution using Prover-Delayer games. This technique can be used to establish "genuine" modal lower bounds (i.e. lower bounds on the number of modal inferences) on the size of tree-like modal resolution proofs. We apply this technique to a new family of modal formulas, called the modal pigeonhole principle, to demonstrate that these formulas require exponential-size modal resolution proofs. Finally, we compare the proof complexity of tree-like modal resolution systems with that of modal Frege systems, using our modal pigeonhole principle to obtain a "genuinely" modal separation between them.
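    For reference, the classical propositional pigeonhole principle PHP^{n+1}_n is the clause family below; the modal pigeonhole principle introduced in the thesis is a modalised variant whose exact encoding is defined there.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% p_{i,j} means "pigeon i sits in hole j"; n+1 pigeons, n holes.
\[
  \bigvee_{j=1}^{n} p_{i,j}
  \qquad \text{for each pigeon } i \in \{1,\dots,n+1\}
\]
\[
  \neg p_{i,j} \lor \neg p_{i',j}
  \qquad \text{for each hole } j \text{ and pigeons } i \neq i'
\]
% The conjunction of these clauses is unsatisfiable, and its propositional
% resolution refutations are known to require exponential size.
\end{document}
```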

    Explanation and diagnosis services for unsatisfiability and inconsistency in description logics

    Description Logics (DLs) are a family of knowledge representation formalisms with formal semantics and well understood computational complexities. In recent years, they have found applications in many domains, including domain modeling, software engineering, configuration, and the Semantic Web. DLs have deeply influenced the design and standardization of the Web Ontology Language OWL. The acceptance of OWL as a web standard has reciprocally resulted in the widespread use of DL ontologies on the web. As more applications emerge with increasing complexity, non-standard reasoning services, such as explanation and diagnosis, have become important capabilities that a DL reasoner should provide. For example, unsatisfiability and inconsistency may arise in an ontology due to unintentional design defects or changes in the ontology evolution process. Without explanations, searching for the cause is like looking for a needle in a haystack. It is, therefore, surprising that most of the existing DL reasoners do not provide explanation services; they provide "Yes/No" answers to satisfiability or consistency queries without giving any reasons. This thesis presents our solution for providing explanation and diagnosis services for DL reasoners. We first propose a framework based on resolution to explain inconsistency and unsatisfiability in Description Logics. A sound and complete algorithm is developed to generate explanations for the DL language ALCHI based on the unsatisfiability and inconsistency patterns in ALCHI. We also develop a technique based on Shapley values to measure inconsistencies in ontologies for diagnosis purposes. This measure is used to identify which axioms in an input ontology, or which parts of these axioms, need to be repaired in order to make the input consistent. We also investigate optimization techniques to compute the inconsistency measures based on particular properties of DLs. Based on the above theoretical foundations, a running prototype system is implemented to evaluate the practicability of the proposed services. Our preliminary empirical results show that the resolution-based explanation framework and the diagnosis procedure based on inconsistency measures can be applied in real-world applications.
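    A minimal sketch of the Shapley-value idea is given below, with a stub characteristic function (v(S) = 1 if the axiom set S is inconsistent, 0 otherwise) and a toy "ontology". All names are hypothetical stand-ins; the thesis defines the actual inconsistency measure and uses a DL reasoner for ALCHI.

```python
# Illustrative sketch: apportion blame for inconsistency among axioms via Shapley values.
from itertools import combinations
from math import factorial
from typing import Callable, FrozenSet, Dict


def shapley_inconsistency(axioms: list,
                          inconsistent: Callable[[FrozenSet], bool]) -> Dict:
    """Shapley value of each axiom w.r.t. v(S) = 1 if S is inconsistent else 0."""
    n = len(axioms)
    values = {a: 0.0 for a in axioms}
    for a in axioms:
        rest = [b for b in axioms if b != a]
        for k in range(n):
            for subset in combinations(rest, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = int(inconsistent(s | {a})) - int(inconsistent(s))
                values[a] += weight * marginal
    return values


if __name__ == "__main__":
    # Toy "ontology": axioms 'p' and 'not_p' clash; 'q' is harmless.
    def inconsistent(s: FrozenSet) -> bool:
        return "p" in s and "not_p" in s

    print(shapley_inconsistency(["p", "not_p", "q"], inconsistent))
    # Expected: p and not_p share the blame equally (0.5 each), q gets 0.0
```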

    Formalizing the Metatheory of Logical Calculi and Automatic Provers in Isabelle/HOL (Invited Talk)

    IsaFoL (Isabelle Formalization of Logic) is an undertaking that aims at developing formal theories about logics, proof systems, and automatic provers, using Isabelle/HOL. At the heart of the project is the conviction that proof assistants have become mature enough to actually help researchers in automated reasoning when they develop new calculi and tools. In this paper, I describe and reflect on three verification subprojects to which I contributed: a first-order resolution prover, an imperative SAT solver, and generalized term orders for λ-free higher-order logic.