6 research outputs found

    On the semantics of Alice&Bob specifications of security protocols

    In the context of security protocols, the so-called Alice&Bob notation is often used to describe the messages exchanged between honest principals in successful protocol runs. While intuitive, this notation is ambiguous in its description of the actions taken by principals, in particular with respect to the conditions they must check when executing their roles and the actions they must take when the checks fail.

    In this paper, we investigate the semantics of protocol specifications in Alice&Bob notation. We provide both a denotational and an operational semantics for such specifications, rigorously accounting for these conditions and actions. Our denotational semantics is based on a notion of incremental symbolic runs, which reflect the data possessed by principals and how this data increases monotonically during protocol execution. We contrast this with a standard formalization of the behavior of principals, which directly interprets message exchanges as sequences of atomic actions. In particular, we provide a complete characterization of the situations where this simpler, direct approach is adequate, and prove that incremental symbolic runs are more expressive in general. Our operational semantics, which is guided by the denotational semantics, implements each role of the specified protocol as a sequential process of the pattern-matching spi calculus.
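The ambiguity the abstract describes can be made concrete with a toy sketch (all names, the message encoding, and the `decrypt` stand-in below are illustrative assumptions, not taken from the paper): an Alice&Bob line such as "A -> B: {A, NA}pk(B)" leaves implicit which checks B performs on receipt, so one possible operational reading makes the decryption and the equality check on the sender field explicit.

```python
class ProtocolError(Exception):
    pass

def decrypt(ciphertext, key):
    # Toy stand-in for asymmetric decryption: a "ciphertext" is a
    # (key, payload) pair, and decryption succeeds only with the right key.
    enc_key, payload = ciphertext
    if enc_key != key:
        raise ProtocolError("message not encrypted for this principal")
    return payload

def receive_as_B(ciphertext, b_key, claimed_sender):
    # B decrypts with its own key, then pattern-matches the pair {X, NX}.
    sender, nonce = decrypt(ciphertext, b_key)
    # The equality check that Alice&Bob notation leaves implicit:
    if sender != claimed_sender:
        raise ProtocolError("sender field mismatch")
    return nonce

msg = ("kB", ("Alice", "NA1"))
assert receive_as_B(msg, "kB", "Alice") == "NA1"
```

A different reading might bind the sender field instead of checking it; distinguishing such readings is exactly what a rigorous semantics pins down.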

    Metareasoning about Security Protocols using Distributed Temporal Logic

    We introduce a version of distributed temporal logic for rigorously formalizing and proving metalevel properties of different protocol models, and for establishing relationships between models. The resulting logic is quite expressive and provides a natural, intuitive language for formalizing both local (agent-specific) and global properties of distributed communicating processes. Through a sequence of examples, we show how this logic may be applied to formalize and establish the correctness of different modeling and simplification techniques, which play a role in building effective protocol tools.
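The local/global distinction the abstract mentions can be sketched schematically (the predicates and the exact operator syntax here are illustrative, not quoted from the paper): in a distributed temporal logic, a formula such as @_a[phi] localizes phi to agent a's point of view, and global statements combine local ones, e.g.

```latex
@_a[\,\mathit{send}(m)\,] \;\Rightarrow\; \mathrm{F}\, @_b[\,\mathit{recv}(m)\,]
```

read as: whenever agent $a$ observes that it has sent $m$, eventually agent $b$ observes its receipt, a global property assembled from two agent-local observations.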

    Metareasoning about Security Protocols using Distributed Temporal Logic

    We introduce a version of distributed temporal logic that provides a new basis to rigorously investigate general metalevel properties of different protocol models, by establishing modeling and analysis simplification techniques that may contribute to the sound design of protocol validation tools. As a first but significant example, we give a rigorous account of three such techniques.

    Model checking and compositional reasoning for multi-agent systems

    Multi-agent systems are distributed systems containing interacting autonomous agents designed to achieve shared and private goals. For safety-critical systems where we wish to replace a human role with an autonomous entity, we need to make assurances about the correctness of the autonomous delegate. Specialised techniques have been proposed recently for the verification of agents against mentalistic logics. Problematically, these approaches treat the system in a monolithic way: when verifying a property against a single agent, they examine all behaviours of every component in the system. This is inefficient and can lead to intractability, the so-called state-space explosion problem.

    In this thesis, we consider techniques to support the verification of agents in isolation. We avoid the state-space explosion problem by verifying an individual agent in the context of a specification of the rest of the system, rather than the system itself. We show that it is possible to verify an agent against its desired properties without needing to consider the behaviours of the remaining components. We first introduce a novel approach for verifying a system as a whole against specifications expressed in a logic of time and knowledge. The technique, based on automata over trees, supports an efficient procedure to verify systems in an automata-theoretic way using language containment. We then show how the automata-theoretic approach can serve as an underpinning for assume-guarantee reasoning for multi-agent systems: we use a temporal logic of actions to specify the expected behaviour of the other components in the system, and during modular verification this specification excludes behaviours that are inconsistent with the concrete system.

    We implement both approaches within the open-source model checker MCMAS and show that, for the relevant properties, the assume-guarantee approach can significantly increase the tractability of individual agent verification.
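The assume-guarantee idea described above can be illustrated with a minimal sketch (the labelled-transition encoding, state names, and labels are hypothetical, and this is not the thesis's MCMAS-based algorithm): instead of composing an agent with the full concrete environment, we compose it with a small assumption automaton that over-approximates the environment's behaviour and check that a bad state stays unreachable.

```python
def reachable(init, step):
    # Standard worklist reachability over the synchronous product.
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Agent transitions: state -> set of (label, next_state).
AGENT = {
    "idle": {("go", "busy")},
    "busy": {("ok", "idle"), ("err", "fail")},
    "fail": set(),
}
# Assumption about the environment: it may answer "go" and "ok",
# but is assumed never to produce "err".
ENV_ASSUMPTION = {
    "e0": {("go", "e0"), ("ok", "e0")},
}

def step(state):
    a, e = state
    for (la, na) in AGENT[a]:
        for (le, ne) in ENV_ASSUMPTION[e]:
            if la == le:  # synchronise on shared labels
                yield (na, ne)

states = reachable(("idle", "e0"), step)
assert all(a != "fail" for a, _ in states)  # "fail" unreachable under the assumption
```

The payoff is that the product explored here grows with the size of the assumption, not with the size of the concrete environment; soundness then rests on separately discharging that the real environment satisfies the assumption.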