
    Proof Theory, Transformations, and Logic Programming for Debugging Security Protocols

    We define a sequent calculus to formally specify, simulate, debug, and verify security protocols. In our sequents we distinguish between the current knowledge of principals and the current global state of the session. In this way, we can describe the operational semantics of principals and of an intruder in a simple and modular way. Furthermore, using proof-theoretic tools such as the analysis of permutability of rules, we are able to find efficient proof strategies that we prove complete for special classes of security protocols, including Needham-Schroeder. Based on the results of this preliminary analysis, we have implemented a Prolog meta-interpreter which allows for rapid prototyping and for checking safety properties of security protocols, and we have applied it to finding error traces and proving correctness of practical examples.
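
    A rough illustration of the kind of safety checking such a meta-interpreter performs, sketched in Python rather than the paper's Prolog; the function and argument names are hypothetical. It exhaustively explores the reachable global states and returns the trace of steps leading to the first safety violation it finds.

        from collections import deque

        def find_error_trace(initial_state, steps, safe):
            """steps(state) yields (label, next_state) pairs; safe(state) is the
            safety property; states are assumed hashable."""
            seen = {initial_state}
            frontier = deque([(initial_state, [])])
            while frontier:
                state, trace = frontier.popleft()
                if not safe(state):
                    return trace                      # error trace to the violation
                for label, nxt in steps(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, trace + [label]))
            return None                               # no reachable violation found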

    Fair Exchange in Strand Spaces

    Many cryptographic protocols are intended to coordinate state changes among principals. Exchange protocols coordinate delivery of new values to the participants, e.g. additions to the set of values they possess. An exchange protocol is fair if it ensures that delivery of new values is balanced: if one participant obtains a new possession via the protocol, then all other participants will, too. Fair exchange requires progress assumptions, unlike some other protocol properties. The strand space model is a framework for design and verification of cryptographic protocols. A strand is a local behavior of a single principal in a single session of a protocol. A bundle is a partially ordered global execution built from protocol strands and adversary activities. The strand space model needs two additions for fair exchange protocols. First, we regard the state as a multiset of facts, and we allow strands to cause changes in this state via multiset rewriting. Second, progress assumptions stipulate that some channels are resilient (guaranteed to deliver messages) and that some principals are assumed not to stop at certain critical steps. This method leads to proofs of correctness that cleanly separate protocol properties, such as authentication and confidentiality, from invariants governing state evolution. G. Wang's recent fair exchange protocol illustrates the approach.
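
    A minimal sketch of the multiset-of-facts state and a single rewrite step described above; the fact and rule names are illustrative, not taken from Wang's protocol.

        from collections import Counter

        def applicable(state, rule):
            """A rule is a pair (consumed, produced) of Counters, i.e. multisets of facts."""
            consumed, _ = rule
            return all(state[fact] >= n for fact, n in consumed.items())

        def apply_rule(state, rule):
            """Fire a rule: remove the consumed facts, add the produced ones."""
            consumed, produced = rule
            return state - consumed + produced

        # Example: delivery over a resilient channel adds a possession fact for B.
        state = Counter({"channel(A,B,item)": 1, "possesses(A,item)": 1})
        deliver = (Counter({"channel(A,B,item)": 1}),
                   Counter({"possesses(B,item)": 1}))
        if applicable(state, deliver):
            state = apply_rule(state, deliver)        # both principals now possess item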

    Collection analysis for Horn clause programs

    We consider approximating data structures with collections of the items that they contain. For example, lists, binary trees, tuples, etc., can be approximated by sets or multisets of the items within them. Such approximations can be used to provide partial correctness properties of logic programs. For example, one might wish to specify that whenever the atom sort(t,s) is proved, the two lists t and s contain the same multiset of items (that is, s is a permutation of t). If sorting removes duplicates, then one would like to infer that the sets of items underlying t and s are the same. Such results would be useful to have if they can be determined statically and automatically. We present a scheme by which such collection analysis can be structured and automated. Central to this scheme is the use of linear logic as a computational logic underlying the logic of Horn clauses.
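
    A small sketch of the multiset approximation discussed above, assuming standard Python containers: a list is abstracted to the multiset (or set) of its items, and the intended property of sort becomes an equation between those abstractions.

        from collections import Counter

        def multiset(xs):
            """Abstract a list to the multiset of its items."""
            return Counter(xs)

        def sort_preserves_multiset(t, s):
            """Intended invariant of sort(t, s): s is a permutation of t."""
            return multiset(t) == multiset(s)

        def dedup_sort_preserves_set(t, s):
            """If sorting removes duplicates, only the underlying sets agree."""
            return set(t) == set(s)

        assert sort_preserves_multiset([3, 1, 2, 1], [1, 1, 2, 3])
        assert dedup_sort_preserves_set([3, 1, 2, 1], [1, 2, 3])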

    Soft Constraint Programming to Analysing Security Protocols

    Security protocols stipulate how the remote principals of a computer network should interact in order to obtain specific security goals. The crucial goals of confidentiality and authentication may be achieved in various forms, each of different strength. Using soft (rather than crisp) constraints, we develop a uniform formal notion for the two goals. They are no longer formalised as mere yes/no properties as in the existing literature, but gain an extra parameter, the security level. For example, different messages can enjoy different levels of confidentiality, or a principal can achieve different levels of authentication with different principals. The goals are formalised within a general framework for protocol analysis that is amenable to mechanisation by model checking. Following the application of the framework to analysing the asymmetric Needham-Schroeder protocol, we have recently discovered a new attack on that protocol as a form of retaliation by principals who have been attacked previously. Having commented on that attack, we then demonstrate the framework on a bigger, widely deployed protocol consisting of three phases, Kerberos. Comment: 29 pages; to appear in Theory and Practice of Logic Programming (TPLP), special issue on Verification and Computational Logic.
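
    A hedged sketch of the "security level" idea: the levels and the combination rule below are illustrative assumptions only, whereas the paper works with soft constraints over a semiring of values.

        # Toy total order of confidentiality levels.
        LEVELS = {"public": 0, "weak_secret": 1, "secret": 2}

        def combine(a, b):
            """One simple convention: a compound message is no more confidential
            than its weakest component, so take the minimum level."""
            return a if LEVELS[a] <= LEVELS[b] else b

        def confidentiality(component_levels):
            """Level of a message built from components with the given levels."""
            level = "secret"
            for l in component_levels:
                level = combine(level, l)
            return level

        # A message pairing a secret nonce with a public identifier is, under
        # this convention, only as confidential as its public component.
        print(confidentiality(["secret", "public"]))  # -> public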

    Relating Process Algebras and Multiset Rewriting for Security Protocol Analysis

    When formalizing security protocols, different specification languages support very different reasoning methodologies, whose results are not directly or easily comparable. Therefore, establishing clear relationships among different frameworks is highly desirable, as it permits various methodologies to cooperate by interpreting theoretical and practical results of one system in another. In this paper, we examine the nontrivial relationship between two general verification frameworks: multiset rewriting (MSR) and a process algebra (PA) inspired by CCS and the π-calculus. We present two separate mappings, one from MSR to PA and the other from PA to MSR. Although defining a simple and general bijection between MSR and PA appears difficult, we show that in the specific context of cryptographic protocols they do admit effective translations that preserve traces.
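
    A very rough sketch of the MSR-to-PA direction, assuming an ad-hoc prefix-style term syntax that is not the paper's encoding: a rewrite rule consuming and producing facts is rendered as a process that first reads the consumed facts and then emits the produced ones.

        def msr_rule_to_process(consumed, produced):
            """Map an MSR rule (two lists of fact names) to a prefix-style process term."""
            term = "0"                                # inert process
            for fact in reversed(produced):
                term = f"out({fact}).{term}"          # emit the produced facts
            for fact in reversed(consumed):
                term = f"in({fact}).{term}"           # first read the consumed facts
            return term

        # Example: the rule  A(x), N(x) -> B(x)  becomes a sequential process.
        print(msr_rule_to_process(["A(x)", "N(x)"], ["B(x)"]))
        # -> in(A(x)).in(N(x)).out(B(x)).0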

    Model Checking Linear Logic Specifications

    The overall goal of this paper is to investigate the theoretical foundations of algorithmic verification techniques for first-order linear logic specifications. The fragment of linear logic we consider in this paper is based on the linear logic programming language called LO enriched with universally quantified goal formulas. Although LO was originally introduced as a theoretical foundation for extensions of logic programming languages, it can also be viewed as a very general language to specify a wide range of infinite-state concurrent systems. Our approach is based on the relation between backward reachability and provability highlighted in our previous work on propositional LO programs. Following this line of research, we define here a general framework for the bottom-up evaluation of first-order linear logic specifications. The evaluation procedure is based on an effective fixpoint operator working on a symbolic representation of infinite collections of first-order linear logic formulas. The theory of well quasi-orderings can be used to provide sufficient conditions for the termination of the evaluation of nontrivial fragments of first-order linear logic. Comment: 53 pages, 12 figures; under consideration for publication in Theory and Practice of Logic Programming.
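
    To make the symbolic backward-reachability idea concrete, here is a sketch in the standard well-structured-transition-system style, not the paper's LO machinery: upward-closed sets of states (multisets of propositions) are represented by their minimal elements, and termination of the fixpoint follows from the well-quasi-ordering on multisets (Dickson's lemma). The rule format and function names are assumptions for illustration.

        from collections import Counter

        def leq(m1, m2):
            """Multiset inclusion: every item occurs in m2 at least as often as in m1."""
            return all(m2[k] >= v for k, v in m1.items())

        def minimize(elems):
            """Keep only minimal elements; they represent an upward-closed set of states."""
            return [m for m in elems
                    if not any(leq(o, m) and o != m for o in elems)]

        def pre_basis(targets, rules):
            """Minimal one-step predecessors of the upward closure of `targets`,
            for rules of the form (consumed, produced)."""
            return [consumed + (m - produced)         # Counter '-' is saturating
                    for m in targets
                    for consumed, produced in rules]

        def backward_reach(bad, rules):
            """Least fixpoint of minimal elements from which some bad state is reachable."""
            basis = minimize(list(bad))
            while True:
                new = [m for m in pre_basis(basis, rules)
                       if not any(leq(o, m) for o in basis)]
                if not new:
                    return basis                      # fixpoint: no new predecessors
                basis = minimize(basis + new)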