    Proceedings of the 8th Scandinavian Logic Symposium


    New Directions in Model Checking Dynamic Epistemic Logic

    Dynamic Epistemic Logic (DEL) can model complex information scenarios in a way that appeals to logicians. However, its existing implementations are based on explicit model checking, which can only deal with small models, so we do not know how DEL performs on larger and real-world problems. For temporal logics, in contrast, symbolic model checking has been developed and successfully applied, for example in protocol and hardware verification. Symbolic model checkers for temporal logics are very efficient and can deal with very large models. In this thesis we build a bridge: new faithful representations of DEL models as so-called knowledge and belief structures that allow for symbolic model checking. For complex epistemic and factual change we introduce transformers, a symbolic replacement for action models. Besides a detailed explanation of the theory, we present SMCDEL: a Haskell implementation of symbolic model checking for DEL using Binary Decision Diagrams. Our new methods can solve well-known benchmark problems in epistemic scenarios much faster than existing methods for DEL. We also compare its performance to existing model checkers for temporal logics and show that DEL can compete with established frameworks. We zoom in on two specific variants of DEL for concrete applications. First, we introduce Public Inspection Logic, a new framework for the knowledge of variables and its dynamics. Second, we study the dynamic gossip problem and how it can be analyzed with epistemic logic. We show that existing gossip protocols can be improved, but that no perfect strengthening of "Learn New Secrets" exists.
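
    To make the idea of a knowledge structure concrete, here is a minimal, self-contained Haskell sketch (illustrative only: all names are hypothetical, and SMCDEL represents the state law as a BDD rather than evaluating a formula explicitly). States are the boolean valuations satisfying a state law, each agent observes a subset of the vocabulary, and an agent knows a formula at a state iff it holds at every state agreeing on the agent's observed variables.

        import Data.List (subsequences)

        type Prp   = Int
        type State = [Prp]            -- a state = the set of true propositions

        data Form = P Prp | Neg Form | Conj Form Form | K String Form

        data KnStruct = KnS
          { vocab :: [Prp]                -- the vocabulary
          , law   :: Form                 -- state law (assumed boolean here)
          , obs   :: [(String, [Prp])] }  -- observed variables per agent

        -- The states of the structure: valuations satisfying the law.
        statesOf :: KnStruct -> [State]
        statesOf kns = [ s | s <- subsequences (vocab kns), evalAt kns s (law kns) ]

        evalAt :: KnStruct -> State -> Form -> Bool
        evalAt _   s (P p)      = p `elem` s
        evalAt kns s (Neg f)    = not (evalAt kns s f)
        evalAt kns s (Conj f g) = evalAt kns s f && evalAt kns s g
        -- K i f: f holds at all states agreeing with s on i's observed vars.
        evalAt kns s (K i f)    = all (\t -> evalAt kns t f) (filter agrees (statesOf kns))
          where
            ov       = maybe [] id (lookup i (obs kns))
            agrees t = [ p | p <- ov, p `elem` s ] == [ p | p <- ov, p `elem` t ]

        main :: IO ()
        main = print (evalAt kns [1] (K "alice" (P 1)))   -- True: alice observes p1
          where kns = KnS [1,2] (P 1) [("alice",[1]),("bob",[2])]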

    Model checking multi-agent systems

    A multi-agent system (MAS) is usually understood as a system composed of interacting autonomous agents. In this sense, MAS have been employed successfully as a modelling paradigm in a number of scenarios, especially in Computer Science. However, the process of modelling complex and heterogeneous systems is intrinsically prone to errors: for this reason, computer scientists are typically concerned with verifying that a system actually behaves as it is supposed to, especially when the system is complex. Several techniques have been developed for this task: testing is the most common, but in many circumstances a formal proof of correctness is needed. Techniques for formal verification include theorem proving and model checking. Model checking techniques, in particular, have been successfully employed in the formal verification of distributed systems, including hardware components, communication protocols, and security protocols. In contrast to traditional distributed systems, formal verification techniques for MAS are still in their infancy, due to the more complex nature of agents, their autonomy, and the richer language used in the specification of properties. This thesis aims to contribute to the formal verification of properties of MAS via model checking. In particular, the following points are addressed:
    • Theoretical results about model checking methodologies for MAS, obtained by extending traditional methodologies based on Ordered Binary Decision Diagrams (OBDDs) for temporal logics to multi-modal logics for time, knowledge, correct behaviour, and strategies of agents, together with complexity results for model checking these logics (and their symbolic representations).
    • Development of a software tool (MCMAS) that permits the specification and verification of MAS described in the formalism of interpreted systems.
    • Examples of application of MCMAS to various MAS scenarios (communication, anonymity, games, hardware diagnosability), including experimental results and a comparison with other available tools.
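
    The epistemic core behind interpreted systems can be sketched in a few lines of Haskell (a toy explicit-state illustration with hypothetical names, not MCMAS's ISPL input language or its OBDD-based algorithms): a global state is a tuple of local states, and agent i cannot distinguish two reachable global states that share its local component.

        type Agent  = Int
        type Local  = String
        type Global = [Local]   -- one local state per agent, indexed by agent

        -- K_i phi holds at g iff phi holds at every reachable global state
        -- whose i-th local component equals g's.
        knows :: [Global] -> Agent -> (Global -> Bool) -> Global -> Bool
        knows reachable i phi g = all phi [ g' | g' <- reachable, g' !! i == g !! i ]

        main :: IO ()
        main = print (knows rs 0 (\g -> g !! 1 == "ok") ["a","ok"])   -- True
          where rs = [["a","ok"], ["b","fail"], ["a","ok"]]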

    Modeling and Verifying Probabilistic Social Commitments in Multi-Agent Systems

    Interaction among autonomous agents in Multi-Agent Systems (MASs) is the key to solving complex problems that an individual agent cannot handle alone. In this context, social approaches, as opposed to mental approaches, have recently received considerable attention in the area of agent communication. They exploit observable social commitments to develop a verifiable formal semantics by which communication protocols can be specified. However, existing approaches for defining social commitments tend to assume an absolute guarantee of correctness, so that systems run in a certain manner. That is, social commitments have always been modeled under the assumption of certainty. Moreover, the widespread use of MASs increases the interest in exploring the interactions between different aspects of the participating agents, such as the interaction between agents' knowledge and social commitments in the presence of uncertainty. This leaves a gap in the agent communication literature on modeling and verifying social commitments in probabilistic settings. In this thesis, we aim to address the above-mentioned problems by presenting a practical formal framework capable of handling uncertainty in social commitments. First, we develop an approach for representing, reasoning about, and verifying probabilistic social commitments in MASs. This includes defining a new logic called the probabilistic logic of commitments (PCTLC), and a reduction-based model checking procedure for verifying the proposed logic. In the reduction technique, the problem of model checking PCTLC is transformed into the problem of model checking PCTL, so that the PRISM probabilistic symbolic model checker can be used. Formulae of PCTLC are interpreted over an extended version of the probabilistic interpreted systems formalism. Second, we extend the work on probabilistic social commitments to capture and verify the interactions between knowledge and commitments. Properties representing the interactions between the two aspects are expressed in a newly developed logic called the probabilistic logic of knowledge and commitment (PCTLkc). Third, we develop an adequate semantics for group social commitments, the first in the literature, and integrate it into the framework. We then introduce an improved version of PCTLkc and extend it with operators for group knowledge and group social commitments; the refined logic is called PCTLkc+. At each of these stages, we develop a new version of the probabilistic interpreted systems over which the presented logic is interpreted, and introduce a new reduction-based verification technique for the proposed logic. To evaluate the proposed work, we implement the verification techniques on top of the PRISM model checker and apply them to several case studies. The results demonstrate the usefulness and effectiveness of the proposed work.
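
    The flavour of such a reduction can be sketched as a syntactic translation in Haskell (purely hypothetical: the actual translation rules and the accompanying model extension are defined in the thesis). PCTL connectives are mapped homomorphically, and the commitment modality is compiled away, here by a made-up clause that tags the i-to-j communication with a fresh atom on the extended model.

        -- Simplified shapes of the two logics: PCTLC adds a commitment
        -- modality C_{i->j} on top of PCTL-style operators.
        data PCTLC = Atom String
                   | Neg PCTLC
                   | Conj PCTLC PCTLC
                   | ProbGeq Double PCTLC        -- e.g. P>=b [ X phi ]
                   | Commit String String PCTLC  -- C_{i->j} phi
          deriving Show

        data PCTL = Atom' String
                  | Neg' PCTL
                  | Conj' PCTL PCTL
                  | ProbGeq' Double PCTL
          deriving Show

        -- Homomorphic on PCTL operators; the Commit clause is illustrative
        -- only: a fresh atom marks the communication channel, and the
        -- translated content must hold alongside it.
        reduce :: PCTLC -> PCTL
        reduce (Atom a)       = Atom' a
        reduce (Neg f)        = Neg' (reduce f)
        reduce (Conj f g)     = Conj' (reduce f) (reduce g)
        reduce (ProbGeq b f)  = ProbGeq' b (reduce f)
        reduce (Commit i j f) = Conj' (Atom' ("chan_" ++ i ++ "_" ++ j)) (reduce f)

        main :: IO ()
        main = print (reduce (Commit "i" "j" (ProbGeq 0.9 (Atom "paid"))))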

    Model Checking Trust-based Multi-Agent Systems

    Trust has been the focus of many research projects, both theoretical and practical, in recent years, particularly in domains where open multi-agent technologies are applied (e.g., Internet-based markets, information retrieval, etc.). The importance of trust in such domains arises mainly because it provides a social control that regulates the relationships and interactions among agents. Despite the growing number of multi-agent applications, many challenges remain in their formal modeling and in the verification of agents' behaviors. Many formalisms and approaches that facilitate the specification of trust in Multi-Agent Systems (MASs) can be found in the literature. However, most of these approaches focus on the cognitive side of trust, where the trusting entity is normally capable of exhibiting properties about beliefs, desires, and intentions. Trust is thus considered a belief of an agent (the truster) in the ability and willingness of the trustee to perform some actions for the truster. Nevertheless, in open MASs, entities can join and leave interactions at any time, so MASs provide no guarantee about the behavior of their agents, which makes the capability of reasoning about trust and checking for untrusted computations highly desirable. This thesis addresses the problem of modeling and verifying trust in MASs at design time by (1) considering a cognitive-independent view of trust, where the ingredients of trust are seen from a non-epistemic angle; (2) introducing a logical language named Trust Computation Tree Logic (TCTL), which extends CTL with preconditional, conditional, and graded trust operators, along with a set of reasoning postulates to explore its capabilities; (3) proposing a new accessibility relation needed to define the semantics of the trust modal operators, defined so that it captures the intuition of trust while being easily computable; (4) investigating the most intuitive and efficient algorithm for computing the trust set by developing, implementing, and experimenting with different model checking techniques in order to compare them in terms of memory consumption, efficiency, and scalability with respect to the number of agents; and (5) evaluating the performance of the model checking techniques by analyzing their time and space complexity. The approach has been applied to different application domains to evaluate its computational performance and scalability. The obtained results reveal the effectiveness of the proposed approach, making it a promising methodology in practice.
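
    Abstracting from the thesis's actual definitions, the shape of a trust modality interpreted over an accessibility relation can be shown in a few lines of Haskell (purely illustrative and hypothetical; computing that relation efficiently is the hard part the thesis addresses, and here it is simply given as input).

        type St = Int

        -- T i j phi holds at s iff phi holds at every state trust-accessible
        -- from s for truster i and trustee j.
        trusts :: (St -> String -> String -> [St])  -- trust accessibility relation
               -> String -> String -> (St -> Bool) -> St -> Bool
        trusts acc i j phi s = all phi (acc s i j)

        main :: IO ()
        main = print (trusts acc "i" "j" even 0)   -- True: both successors are even
          where acc 0 _ _ = [2,4]
                acc _ _ _ = []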

    Model checking ontology-driven reasoning agents using strategy and abstraction

    We present a framework for the modelling, specification and verification of ontology-driven multi-agent rule-based systems (MASs). We assume that each agent executes in a separate process and that agents communicate via message passing. The proposed approach uses abstract specifications to model the behaviour of some of the agents in the system, and exploits information about the reasoning strategy adopted by the agents. Abstract specifications are given as Linear Temporal Logic (LTL) formulas which describe the external behaviour of the agents, allowing their temporal behaviour to be modelled compactly. Both abstraction and strategy are combined in Tovrba, an automated model-checking encoding tool for rule-based multi-agent systems that allows the system designer to specify information about agents' interaction, behaviour, and execution strategy at different levels of abstraction. The Tovrba tool generates an encoding of the system for the Maude LTL model checker, allowing properties of the system to be verified.
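
    How an LTL specification constrains an agent's external behaviour can be illustrated with a toy finite-trace evaluator in Haskell (a sketch only, with made-up names; Tovrba itself emits an encoding for the Maude LTL model checker rather than checking traces).

        -- A miniature LTL over traces of message sets.
        data LTL = P String | NotL LTL | AndL LTL LTL | Next LTL | Until LTL LTL

        holds :: LTL -> [[String]] -> Bool
        holds _ []                 = False   -- nothing holds past the trace end
        holds (P a)       (t:_)    = a `elem` t
        holds (NotL f)    tr       = not (holds f tr)
        holds (AndL f g)  tr       = holds f tr && holds g tr
        holds (Next f)    (_:r)    = holds f r
        holds (Until f g) tr@(_:r) = holds g tr || (holds f tr && holds (Until f g) r)

        main :: IO ()
        main = print (holds (Until (P "req") (P "ack")) [["req"],["req"],["ack"]])  -- True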

    Mathematical Logic: Proof theory, Constructive Mathematics

    The workshop "Mathematical Logic: Proof Theory, Constructive Mathematics" was centered around proof-theoretic aspects of current mathematics, constructive mathematics, and logical aspects of computational complexity.