Automating Security Protocol Analysis
When Roger Needham and Michael Schroeder first introduced a seemingly secure protocol [24], it took over 18 years to discover that, even with the most secure encryption, conversations using the protocol were still subject to penetration. To date, no single protocol has been accepted for universal use. Because of this, analysis of the protocol itself, beyond its encryption, is becoming more important. Recent work by Joshua Guttman and others [9] has identified several properties, termed Authentication Tests, that good protocols often exhibit; these properties have proved very useful in examining protocols. The purpose of this research is to automate these tests and thus help expedite the analysis of both existing and future protocols. The success of this research is shown through the rapid analysis of numerous protocols for the existence of authentication tests. As a result, an analyst is now able to ascertain in near real time whether a proposed protocol is of sound design or whether an existing protocol contains previously unknown weaknesses. The other achievement of this research is the generality of its input process. Although other protocol analyzers exist, their use is limited primarily by their complexity. With the tool generated here, an analyst needs only to enter a protocol into a standard text file, and almost immediately the analyzer determines the existence of the authentication tests.
Dynamics, robustness and fragility of trust
Trust is often conveyed through delegation, or through recommendation. This
makes the trust authorities, who process and publish trust recommendations,
into an attractive target for attacks and spoofing. In some recent empirical
studies, this was shown to lead to a remarkable phenomenon of *adverse
selection*: a greater percentage of unreliable or malicious web merchants was
found among those with certain types of trust certificates than among those
without. While such findings can be attributed to a lack of diligence in trust
authorities, or even to conflicts of interest, our analysis of trust dynamics
suggests that public trust networks would probably remain vulnerable even if
trust authorities were perfectly diligent. The reason is that the process of
trust building, if trust is not breached too often, naturally leads to
power-law distributions: the rich get richer, the trusted attract more trust.
The evolutionary processes with such distributions, ubiquitous in nature, are
known to be robust with respect to random failures, but vulnerable to adaptive
attacks. We recommend some ways to decrease the vulnerability of trust
building, and suggest some ideas for exploration.
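The rich-get-richer dynamic described in this abstract can be illustrated with a toy preferential-attachment simulation (a minimal sketch, not the paper's model; agent counts, seeds, and the attack size are arbitrary assumptions):

```python
import random

def simulate_trust(n_agents=200, n_endorsements=5000, seed=1):
    """Rich-get-richer trust accumulation: each new endorsement goes to an
    agent with probability proportional to the trust it already holds."""
    rng = random.Random(seed)
    trust = [1] * n_agents          # every agent starts with one unit of trust
    for _ in range(n_endorsements):
        target = rng.choices(range(n_agents), weights=trust, k=1)[0]
        trust[target] += 1
    return trust

def removed_share(trust, victims):
    """Fraction of total trust wiped out when `victims` are compromised."""
    return sum(trust[v] for v in victims) / sum(trust)

trust = simulate_trust()
ranked = sorted(range(len(trust)), key=lambda i: trust[i], reverse=True)

k = 10
adaptive_loss = removed_share(trust, ranked[:k])   # attacker picks the most trusted
random_loss = removed_share(trust, random.Random(2).sample(range(len(trust)), k))

print(f"targeting top {k}: {adaptive_loss:.0%} of trust lost; "
      f"random {k}: {random_loss:.0%}")
```

The skewed distribution this produces is exactly why targeted compromise of a few highly trusted authorities removes far more trust than random failures of the same size.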
A Logic for Constraint-based Security Protocol Analysis
We propose PS-LTL, a pure-past security linear temporal logic that allows the specification of a variety of authentication, secrecy and data freshness properties. Furthermore, we present a sound and complete decision procedure to establish the validity of security properties for symbolic execution traces, and show the integration with constraint-based analysis techniques
A framework for compositional verification of security protocols
Automatic security protocol analysis is currently feasible only for small protocols. Since larger protocols quite often are composed of many small protocols, compositional analysis is an attractive, but non-trivial approach.
We have developed a framework for compositional analysis of a large class of security protocols. The framework is intended to facilitate automatic as well as manual verification of large structured security protocols. Our approach is to verify properties of component protocols in a multi-protocol environment, then deduce properties about the composed protocol. To reduce the complexity of multi-protocol verification, we introduce a notion of protocol independence and prove a number of theorems that enable analysis of independent component protocols in isolation.
To illustrate the applicability of our framework to real-world protocols, we study a key establishment sequence in WiMAX consisting of three subprotocols. Except for a small amount of trivial reasoning, the analysis is done using automatic tools
Extending the Strand Space Method with Timestamps: Part I, the Theory
In this paper, we present two extensions of the strand space method to model Kerberos V. First, we include time and timestamps to model security protocols with timestamps: we relate a key to a crack time and combine it with timestamps in order to define a notion of recency. Therefore, we can check for replay attacks in this new framework. Second, we extend the classic strand space theory to model protocol mixture. The main idea is to introduce a new relation to model the causal relation between a primary protocol session and one of its subsequent secondary protocol sessions. Accordingly, we also extend the definition of the unsolicited authentication test.
Automatic Verification of Correspondences for Security Protocols
We present a new technique for verifying correspondences in security
protocols. In particular, correspondences can be used to formalize
authentication. Our technique is fully automatic, it can handle an unbounded
number of sessions of the protocol, and it is efficient in practice. It
significantly extends a previous technique for the verification of secrecy. The
protocol is represented in an extension of the pi calculus with fairly
arbitrary cryptographic primitives. This protocol representation includes the
specification of the correspondence to be verified, but no other annotation.
This representation is then translated into an abstract representation by Horn
clauses, which is used to prove the desired correspondence. Our technique has
been proved correct and implemented. We have tested it on various protocols
from the literature. The experimental results show that these protocols can be
verified by our technique in less than 1 s.
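The Horn-clause abstraction behind this approach can be sketched with a toy saturation procedure (a minimal illustration, not the resolution algorithm used by tools such as ProVerif; the message names and the single deduction rule are assumptions for the example):

```python
def enc(m, k):
    """Symbolic encryption: enc(m, k) is just a tagged tuple."""
    return ('enc', m, k)

def saturate(known, max_rounds=10):
    """Close attacker knowledge under one deduction rule, by naive fixpoint:
    if the attacker knows enc(m, k) and k, derive m (decryption)."""
    known = set(known)
    for _ in range(max_rounds):
        new = {t[1] for t in known
               if isinstance(t, tuple) and t[0] == 'enc' and t[2] in known}
        if new <= known:
            break
        known |= new
    return known

# Observed network traffic: a secret sent under a (hypothetical) shared key k_ab.
net = {enc('secret', 'k_ab'), 'nonce_a'}

print('secret' in saturate(net))             # key never leaks: secrecy holds
print('secret' in saturate(net | {'k_ab'}))  # compromised key breaks secrecy
```

Real verifiers saturate clauses with variables rather than ground facts, which is what lets them cover an unbounded number of sessions; the fixpoint idea is the same.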
A novel risk assessment and optimisation model for a multi-objective network security countermeasure selection problem
Budget cuts, combined with high demand for stronger security in computer systems and services, constitute a challenge. Poor system knowledge and inappropriate selection of security measures may lead to unexpected
financial and data losses. This paper proposes a novel Risk Assessment and Optimisation Model (RAOM) to solve a security countermeasure selection problem, where variables such as financial cost and risk may affect a final decision. A Multi-Objective Tabu Search (MOTS) algorithm has been developed to construct an efficient frontier of non-dominated solutions, which can satisfy organisational security needs in a cost-effective
manner
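The efficient frontier of non-dominated solutions mentioned here rests on Pareto dominance over the competing objectives. A minimal sketch of the dominance filter (the portfolio values are invented for illustration; the tabu search itself is not implemented):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only solutions not dominated by any other candidate."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical countermeasure portfolios: (financial cost, residual risk)
portfolios = [(10, 9), (20, 5), (30, 5), (40, 2), (15, 8), (25, 3)]
print(sorted(pareto_front(portfolios)))
```

Here (30, 5) drops out because (20, 5) achieves the same risk at lower cost; the remaining points form the cost-risk frontier from which an organisation picks a trade-off.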
- …