
    Cryptographic security of quantum key distribution

    This work is intended as an introduction to cryptographic security and a motivation for the widely used Quantum Key Distribution (QKD) security definition. We review the notion of security necessary for a protocol to be usable in a larger cryptographic context, i.e., for it to remain secure when composed with other secure protocols. We then derive the corresponding security criterion for QKD. We provide several examples of QKD composed in sequence and in parallel with different cryptographic schemes to illustrate how the error of a composed protocol is the sum of the errors of the individual protocols. We also discuss the operational interpretations of the distance metric used to quantify these errors. Comment: 31+23 pages, 28 figures. Comments and questions welcome.
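    As a hedged illustration of the additivity claim above (my notation, not the paper's): if each protocol $\pi_i$ is $\varepsilon_i$-close to its ideal functionality in a composable distance satisfying the triangle inequality, then sequential or parallel composition satisfies

        \varepsilon(\pi_1 \circ \pi_2) \;\le\; \varepsilon(\pi_1) + \varepsilon(\pi_2),

    so the error of the composed protocol is at most the sum of the errors of its parts.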

    Strong key derivation from noisy sources

    A shared cryptographic key enables strong authentication. Candidate sources for creating such a shared key include biometrics and physically unclonable functions. However, these sources come with a substantial problem: noise in repeated readings. A fuzzy extractor produces a stable key from a noisy source. It consists of two stages. At enrollment time, the generate algorithm produces a key from an initial reading of the source. At authentication time, the reproduce algorithm takes a repeated but noisy reading of the source, yielding the same key when the two readings are close. For many sources of practical importance, traditional fuzzy extractors provide no meaningful security guarantee.

    This dissertation improves key derivation from noisy sources. These improvements stem from three observations about traditional fuzzy extractors. First, the only property of a source that standard fuzzy extractors use is the entropy in the original reading. We observe that additional structural information about the source can facilitate key derivation. Second, most fuzzy extractors work by first recovering the initial reading from the noisy reading (known as a secure sketch). This approach imposes harsh limitations on the length of the derived key. We observe that it is possible to produce a consistent key without recovering the original reading of the source. Third, traditional fuzzy extractors provide information-theoretic security. However, security against computationally bounded adversaries is sufficient. We observe that fuzzy extractors providing computational security can overcome limitations of traditional approaches.

    The above observations are supported by negative results and constructions. As an example, we combine all three observations to construct a fuzzy extractor achieving properties that have eluded prior approaches. The construction remains secure even when the initial enrollment phase is repeated multiple times with noisy readings. Furthermore, for many practical sources, reliability demands that the tolerated noise is larger than the entropy of the original reading. The construction provides security for sources of this type by utilizing additional source structure, producing a consistent key without recovering the original reading, and providing computational security.
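    A minimal sketch of the two-stage generate/reproduce interface described above, using the classic code-offset (secure-sketch) construction; the repetition code and helper names are my illustrative choices, not from the dissertation. Note that reproduce works by recovering the enrollment-time key bits from the noisy reading, which is exactly the "recover the initial reading" pattern whose key-length limitations the dissertation points out.

        import hashlib
        import secrets

        R = 5  # repetition factor; majority vote corrects up to 2 flips per block

        def _encode(key_bits):
            # repetition-code encoder: each key bit is repeated R times
            return [b for b in key_bits for _ in range(R)]

        def _decode(codeword_bits):
            # majority-vote decoder, the inverse of _encode under limited noise
            m = len(codeword_bits) // R
            return [int(sum(codeword_bits[i * R:(i + 1) * R]) > R // 2) for i in range(m)]

        def generate(reading):
            """Enrollment: derive (key, public_helper) from the initial reading (a bit list)."""
            k = [secrets.randbelow(2) for _ in range(len(reading) // R)]
            helper = [w ^ c for w, c in zip(reading, _encode(k))]  # code-offset sketch
            key = hashlib.sha256(bytes(k)).digest()                # extraction step
            return key, helper

        def reproduce(noisy_reading, helper):
            """Authentication: yields the same key when the reading is close in Hamming distance."""
            k = _decode([w ^ h for w, h in zip(noisy_reading, helper)])
            return hashlib.sha256(bytes(k)).digest()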

    Theory and applications of hashing: report from Dagstuhl Seminar 17181

    This report documents the program and the topics discussed at the 4-day Dagstuhl Seminar 17181 “Theory and Applications of Hashing”, which took place May 1–5, 2017. Four long and eighteen short talks covered a wide and diverse range of topics within the theme of the workshop. The program left sufficient space for informal discussions among the 40 participants.

    The Security of Lazy Users in Out-of-Band Authentication

    Faced with the threats posed by man-in-the-middle attacks, messaging platforms rely on "out-of-band" authentication, assuming that users have access to an external channel for authenticating one short value. For example, assuming that users who recognize each other's voice can authenticate a short value, Telegram and WhatsApp ask their users to compare 288-bit and 200-bit values, respectively. The existing protocols, however, do not take into account the plausible behavior of users who may be "lazy" and compare only parts of these values (rather than their entirety). Motivated by such security-critical user behavior, we study the security of lazy users in out-of-band authentication. We start by showing that both the protocol implemented by WhatsApp and the statistically-optimal protocol of Naor, Segev and Smith (CRYPTO '06) are completely vulnerable to man-in-the-middle attacks when the users consider only half of the out-of-band authenticated value. In this light, we put forward a framework that captures the behavior and security of lazy users. Our notions of security consider both statistical security and computational security, and for each flavor we derive a lower bound on the tradeoff between the number of positions that are considered by the lazy users and the adversary's forgery probability. Within our framework we then provide two authentication protocols. First, in the statistical setting, we present a transformation that converts any out-of-band authentication protocol into one that is secure even when executed by lazy users. Instantiating our transformation with a new refinement of the protocol of Naor et al. results in a protocol whose tradeoff essentially matches our lower bound in the statistical setting. Then, in the computational setting, we show that the computationally-optimal protocol of Vaudenay (CRYPTO '05) is secure even when executed by lazy users, and its tradeoff matches our lower bound in the computational setting.
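    A back-of-the-envelope baseline (mine, not the paper's): against an ideal protocol whose out-of-band value an attacker cannot bias, a lazy user who checks k bits caps the forgery probability at 2^-k; the paper's point is that implemented protocols can fall far short of this ideal.

        # Toy baseline, not any of the paper's protocols: with an unbiasable
        # out-of-band value, checking k positions caps forgery at 2**-k.
        def ideal_forgery_bound(bits_checked: int) -> float:
            return 2.0 ** -bits_checked

        # A lazy check of half of a 200-bit value:
        print(ideal_forgery_bound(100))
        # The paper shows WhatsApp's actual protocol gives the attacker
        # forgery probability 1 in this case, far from the ideal baseline.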

    Cryptographic error correction

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (leaves 67-71).

    It has been said that "cryptography is about concealing information, and coding theory is about revealing it." Despite these apparently conflicting goals, the two fields have common origins and many interesting relationships. In this thesis, we establish new connections between cryptography and coding theory in two ways: first, by applying cryptographic tools to solve classical problems from the theory of error correction; and second, by studying special kinds of codes that are motivated by cryptographic applications. In the first part of this thesis, we consider a model of error correction in which the source of errors is adversarial, but limited to feasible computation. In this model, we construct appealingly simple, general, and efficient cryptographic coding schemes which can recover from much larger error rates than schemes for classical models of adversarial noise. In the second part, we study collusion-secure fingerprinting codes, which are of fundamental importance in cryptographic applications like data watermarking and traitor tracing. We demonstrate tight lower bounds on the lengths of such codes by devising and analyzing a general collusive attack that works for any code.

    By Christopher Jason Peikert (Ph.D.)
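    A minimal sketch of the generic "list-decode, then authenticate" pattern for computationally bounded channels (my own toy with hypothetical names; the thesis's schemes differ in setup and details): the sender appends a MAC before encoding with a list-decodable code, and the receiver keeps the unique list-decoding candidate whose tag verifies, since a feasible adversary cannot forge a second valid tag.

        import hashlib
        import hmac
        from typing import List, Optional

        KEY = b"shared sender/receiver secret"   # hypothetical shared key

        def tag(message: bytes) -> bytes:
            return hmac.new(KEY, message, hashlib.sha256).digest()

        def sender_payload(message: bytes) -> bytes:
            # append a MAC; the result would then be encoded with a
            # list-decodable code (encoding not shown here)
            return message + tag(message)

        def receiver_select(candidates: List[bytes]) -> Optional[bytes]:
            # candidates come from list-decoding the corrupted codeword
            for c in candidates:
                m, t = c[:-32], c[-32:]
                if hmac.compare_digest(t, tag(m)):
                    return m   # only the genuine message survives verification
            return None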

    Multi-Instance Randomness Extraction and Security against Bounded-Storage Mass Surveillance

    Consider a state-level adversary who observes and stores large amounts of encrypted data from all users on the Internet, but does not have the capacity to store it all. Later, it may target certain persons of interest in order to obtain their decryption keys. We would like to guarantee that, if the adversary's storage capacity is only (say) 1% of the total encrypted data size, then even if it can later obtain the decryption keys of arbitrary users, it can only learn something about the contents of (roughly) 1% of the ciphertexts, while the rest will maintain full security. This can be seen as an extension of incompressible cryptography (Dziembowski CRYPTO '06, Guan, Wichs and Zhandry EUROCRYPT '22) to the multi-user setting. We provide solutions in both the symmetric key and public key setting with various trade-offs in terms of computational assumptions and efficiency. As the core technical tool, we study an information-theoretic problem which we refer to as "multi-instance randomness extraction". Suppose $X_1, \ldots, X_t$ are correlated random variables whose total joint min-entropy rate is $\alpha$, but we know nothing else about their individual entropies. We choose $t$ random and independent seeds $S_1, \ldots, S_t$ and attempt to individually extract some small amount of randomness $Y_i = \mathsf{Ext}(X_i; S_i)$ from each $X_i$. We'd like to say that roughly an $\alpha$-fraction of the extracted outputs $Y_i$ should be indistinguishable from uniform even given all the remaining extracted outputs and all the seeds. We show that this indeed holds for specific extractors based on Hadamard and Reed-Muller codes.
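    A toy of the per-instance extraction step under my own assumptions (a one-bit Hadamard, i.e., inner-product, extractor, matching the Hadamard-code family the abstract mentions; the sources here are uniform stand-ins, not correlated ones):

        import secrets

        N = 128  # length of each source sample, in bits

        def ext(x: int, s: int) -> int:
            # one-bit Hadamard extractor: inner product <x, s> over GF(2)
            return bin(x & s).count("1") % 2

        t = 10
        xs = [secrets.randbits(N) for _ in range(t)]      # stand-ins for X_1..X_t
        seeds = [secrets.randbits(N) for _ in range(t)]   # independent seeds S_1..S_t
        ys = [ext(x, s) for x, s in zip(xs, seeds)]       # Y_i = Ext(X_i; S_i)

    The multi-instance claim concerns which fraction of the outputs stays indistinguishable from uniform given all other outputs and seeds: roughly the joint min-entropy rate $\alpha$.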

    Incremental Program Obfuscation

    Recent advances in program obfuscation suggest that it is possible to create software that can provably safeguard secret information. However, software systems usually contain large executable code that is updated multiple times and sometimes very frequently. Freshly obfuscating the program for every small update will lead to a considerable efficiency loss. Thus, an extremely desirable property for obfuscation algorithms is incrementality: small changes to the underlying program translate into small changes to the corresponding obfuscated program. We initiate a thorough investigation of incremental program obfuscation. We show that the strong simulation-based notions of program obfuscation, such as "virtual black-box" and "virtual grey-box" obfuscation, cannot be incremental (according to our efficiency requirements) even for very simple functions such as point functions. We then turn to the indistinguishability-based notions, and present two security definitions of varying strength, namely a weak one and a strong one. To understand the overall strength of our definitions, we formulate the notion of incremental best-possible obfuscation and show that it is equivalent to our strong indistinguishability-based notion. Finally, we present constructions for incremental program obfuscation satisfying both our security notions. We first give a construction achieving the weaker security notion based on the existence of general-purpose indistinguishability obfuscation. Next, we present a generic transformation using oblivious RAM to amplify security from weaker to stronger, while maintaining the incrementality property.
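    An interface-level sketch of the incrementality property only (hypothetical names; the paper's constructions rely on indistinguishability obfuscation and oblivious RAM, which this toy does not implement): a one-block edit to the program should re-obfuscate only the affected block rather than the whole program.

        import hashlib
        from dataclasses import dataclass
        from typing import List

        def _obfuscate_block(block: bytes) -> bytes:
            # stand-in for the real per-block obfuscator
            return hashlib.sha256(block).digest()

        @dataclass
        class ObfuscatedProgram:
            blocks: List[bytes]   # obfuscated code, stored block-wise

        def obfuscate(program: List[bytes]) -> ObfuscatedProgram:
            # initial obfuscation touches every block
            return ObfuscatedProgram([_obfuscate_block(b) for b in program])

        def update(obf: ObfuscatedProgram, i: int, new_block: bytes) -> None:
            # incrementality: update cost is one block, not the whole program
            obf.blocks[i] = _obfuscate_block(new_block)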