CROO: A universal infrastructure and protocol to detect identity fraud
Identity fraud (IDF) may be defined as the unauthorized exploitation of credential information through the use of a false identity. We propose CROO, a universal (i.e. generic) infrastructure and protocol to either prevent IDF (by detecting attempts thereof) or limit its consequences (by identifying cases of previously undetected IDF). CROO is a capture-resilient one-time password scheme, whereby each user carries a personal trusted device used to generate one-time passwords (OTPs) verified by online trusted parties. Multiple trusted parties may be used for increased scalability. OTPs can be used regardless of a transaction's purpose (e.g. user authentication or financial payment), associated credentials, and online or on-site nature; this makes CROO a universal scheme. OTPs are not sent in cleartext; they are used as keys to compute MACs of hashed transaction information, in a manner allowing OTP-verifying parties to confirm that given user credentials (i.e. OTP-keyed MACs) correspond to claimed hashed transaction details. Hashing transaction details increases user privacy. Each OTP is generated from a PIN-encrypted non-verifiable key; this makes users' devices resilient to offline PIN-guessing attacks. CROO's credentials can be formatted as existing user credentials (e.g. credit cards or driver's licenses).
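The core mechanism described above can be illustrated with a minimal sketch: the device MACs a hash of the transaction details under the OTP, and the verifying party recomputes the MAC from the claimed hash alone. All function names, the choice of HMAC-SHA256, and the sample values are illustrative assumptions, not the paper's specification.

```python
import hashlib
import hmac

# Hypothetical sketch of a CROO-style credential check; names and
# primitives (HMAC-SHA256) are assumptions for illustration only.

def hash_transaction(details: bytes) -> bytes:
    """Hash transaction details so the verifier never sees them in clear."""
    return hashlib.sha256(details).digest()

def make_credential(otp: bytes, details: bytes) -> bytes:
    """Device side: use the one-time password as a MAC key over the hashed details."""
    return hmac.new(otp, hash_transaction(details), hashlib.sha256).digest()

def verify_credential(otp: bytes, hashed_details: bytes, credential: bytes) -> bool:
    """Verifier side: recompute the MAC from the claimed hash and compare."""
    expected = hmac.new(otp, hashed_details, hashlib.sha256).digest()
    return hmac.compare_digest(expected, credential)

# In CROO the OTP would come from a PIN-encrypted, non-verifiable key on
# the trusted device; here it is a fixed placeholder value.
otp = b"\x01" * 32
details = b"pay merchant X amount 42.00"
cred = make_credential(otp, details)

# The verifier is given only the hashed details, preserving user privacy.
print(verify_credential(otp, hash_transaction(details), cred))
```

Note that the verifier checks the credential against the hash, not the raw details, which is what lets the scheme hide transaction contents from OTP-verifying parties.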
Enhancing Privacy in Cryptographic Protocols
For the past three decades, a wide variety of cryptographic protocols have been proposed to solve secure communication problems even in the presence of adversaries. The range of this work varies from developing basic security primitives providing confidentiality and authenticity to solving more complex, application-specific problems. However, when these protocols are deployed in practice, a significant challenge is to ensure not just security but also privacy throughout these protocols' lifetime. As computer-based devices are more widely used and the Internet is more globally accessible, new types of applications and new types of privacy threats are being introduced. In addition, user privacy (or equivalently, key privacy) is more likely to be jeopardized in large-scale distributed applications because the absence of a central authority complicates control over these applications.
In this dissertation, we consider three cryptographic protocols that face user privacy threats when deployed in practice. First, we consider matchmaking protocols among strangers and enhance their privacy by introducing the "durability" and "perfect forward privacy" properties. Second, we illustrate the fragility of formal definitions with respect to password privacy in the context of password-based authenticated key exchange (PAKE). In particular, we show that PAKE protocols provably meeting the existing formal definitions do not achieve the expected level of password privacy when deployed in the real world. We propose a new definition for PAKE that is tightly connected to what is actually desired in practice and suggest guidelines for realizing this definition. Finally, we answer a specific privacy question, namely whether privacy properties of symmetric-key encryption schemes obtained by non-tight reduction proofs are retained in the real world. In particular, we use the privacy notion of the "multi-key hiding" property and show that, with high probability in practice, its relation to IND-CPA symmetric-key encryption schemes is non-tight. We then identify schemes that satisfy "multi-key hiding" and enhance key privacy in the real world.
Trusting in computer systems
We need to be able to reason about large systems, and not just about their components. For this we would like to have conceptual tools that will help us to understand the behaviour of these systems, and to help us make sense of other, possibly conflicting, views.
In this dissertation we have sought to indicate the need for a new methodology that will allow us to better identify and understand those areas of possible conflict or lack of knowledge, and we have looked for ways to improve the design of computer-based systems in a practical manner that can be readily understood and applied.
In particular, we have taken the concept of trust and how this can help us understand some of the basic security aspects of a system. We have paid particular attention to the nature and type of assumptions that are made both within and between computer systems when they seek to communicate with each other.
The work contained in this dissertation has been motivated by a belief that the design and implementation of many computer-based systems in operation today do not meet the needs of users and operators; and by a strong desire to identify ways in which the design and engineering of such systems can be improved.
We note that many assumptions are frequently made on a de facto basis and which are frequently not acknowledged or even recognised for what they are. We show that an incomplete understanding of what is being assumed, relied upon and trusted can lead to an inadequate understanding of true vulnerabilities of systems. We examine various trust aspects of systems and introduce a definition of trust that we believe can help towards a greater understanding of system weaknesses.
We propose that a system be examined in a manner that analyses the conditions under which it was designed to perform, examines the circumstances under which it has been implemented, and then compares the two. We believe such an approach to be essential, since we have (sadly) seldom found in our experience the two situations to be the same. It is unfortunately all too common to find a design intended for one context being inappropriately implemented in another. We propose that anyone planning the design of a system, or part of a system, should look at it from the point of view of each of the participants, and that this should include all of the components - including users and implementers - to see what they are relying on and to make sure that these assumptions are compatible.
We look at this problem from the approach of what is being trusted in a system, or what a system is being trusted for. We start from some approaches developed in a (military) security context and in widespread use in commercial distributed systems, and demonstrate how the inappropriate application of this concept can lead to unanticipated risks to the system.
We show how the usual use of trust as a system property can restrict the ability to reason about the security properties of a system; and we introduce a new notion of trust that we show is more fruitful for the analysis of the risk characteristics of systems. In particular, we show how, in contrast, our approach can be applied to the analysis of subsystems and systems components.
We propose that trust be considered a "relative" concept, in contrast to the more usual usage, and that it is not the result of knowledge but a substitute for it. We show that although the concepts arose in a security domain, they are equally applicable to the analysis of assumption and risk throughout a system and its components. In contrast to the standard use of trust as a property of a system, our notion of trust applies only within the context of a specific viewpoint from which to judge risks. We argue that it is only after the introduction of a specific context from which trust is to be judged, that we can understand many of the intrinsic vulnerabilities of a distributed system.
We have introduced the concept of there being more than one viewpoint from which to describe the behaviour of a system, and therefore the trust relationships that pertain. The utility of this concept lies in its ability to enable the nature of the risks associated with a specific participant to be measured, whether these are explicitly recognised and accepted by them, or not.
We propose a distinction between trust and trustworthiness, and demonstrate that most current uses of the term trust are more appropriately viewed as statements of trustworthiness. In particular, we propose that trust is more properly understood and used as a substitute for knowledge, rather than the traditional "Orange Book" [DOD85] concept of it being the result of knowledge, in which something is trusted if it exists within the security boundary of the system and can violate the security policy of the system.
Digitisation of this thesis was sponsored by Arcadia Fund, a charitable fund of Lisbet Rausing and Peter Baldwin.
On all-or-nothing transforms and password-authenticated key exchange protocols
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 142-152). By Victor Boyko.