60 research outputs found
Cryptographic security of quantum key distribution
This work is intended as an introduction to cryptographic security and a
motivation for the widely used Quantum Key Distribution (QKD) security
definition. We review the notion of security necessary for a protocol to be
usable in a larger cryptographic context, i.e., for it to remain secure when
composed with other secure protocols. We then derive the corresponding security
criterion for QKD. We provide several examples of QKD composed in sequence and
parallel with different cryptographic schemes to illustrate how the error of a
composed protocol is the sum of the errors of the individual protocols. We also
discuss the operational interpretations of the distance metric used to quantify
these errors.
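The additivity claim above (the error of a composed protocol is at most the sum of the individual errors) is an instance of the union bound on the individual failure probabilities. A minimal sketch, with purely illustrative numbers:

```python
# Toy illustration (not from the paper): composing protocols whose
# distinguishing errors are epsilon_1, ..., epsilon_n.  By the union bound,
# the composition fails with probability at most the sum of the epsilons.

def composed_error(epsilons):
    """Upper bound on the error of a sequential or parallel composition,
    capped at 1 since it is a probability."""
    return min(1.0, sum(epsilons))

# Example: QKD with error 1e-9, composed with one-time-pad encryption
# (error 0) and an authentication scheme with error 1e-10.
print(composed_error([1e-9, 0.0, 1e-10]))
```

The cap at 1 matters only when the summed errors are large; for the tiny errors typical of QKD security parameters, the bound is simply the sum.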
Strong key derivation from noisy sources
A shared cryptographic key enables strong authentication. Candidate sources for creating such a shared key include biometrics and physically unclonable functions. However, these sources come with a substantial problem: noise in repeated readings.
A fuzzy extractor produces a stable key from a noisy source. It consists of two stages. At enrollment time, the generate algorithm produces a key from an initial reading of the source. At authentication time, the reproduce algorithm takes a repeated but noisy reading of the source, yielding the same key when the two readings are close. For many sources of practical importance, traditional fuzzy extractors provide no meaningful security guarantee.
This dissertation improves key derivation from noisy sources. These improvements stem from three observations about traditional fuzzy extractors.
First, the only property of a source that standard fuzzy extractors use is the entropy in the original reading. We observe that additional structural information about the source can facilitate key derivation.
Second, most fuzzy extractors work by first recovering the initial reading from the noisy reading (known as a secure sketch). This approach imposes harsh limitations on the length of the derived key. We observe that it is possible to produce a consistent key without recovering the original reading of the source.
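The recover-then-rederive approach described above can be sketched with the classical code-offset secure sketch. Below is a toy version using a 3-fold repetition code; real constructions use much stronger error-correcting codes, and the final key-derivation step is omitted:

```python
import secrets

# Toy code-offset secure sketch: publish ss = w XOR c for a random codeword
# c.  Given a noisy reading w' close to w, the sketch lets us recover the
# original reading w exactly -- the step that, as noted above, limits the
# length of the derived key.

def encode(bits):                      # 3-fold repetition-code encoder
    return [b for b in bits for _ in range(3)]

def decode(coded):                     # majority decoding per 3-bit block
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

def sketch(w):
    """Publish ss = w XOR c for a uniformly random codeword c."""
    msg = [secrets.randbelow(2) for _ in range(len(w) // 3)]
    return [wi ^ ci for wi, ci in zip(w, encode(msg))]

def recover(w_noisy, ss):
    """Recover the enrollment reading w from a noisy reading w'."""
    shifted = [wi ^ si for wi, si in zip(w_noisy, ss)]   # = c XOR noise
    c = encode(decode(shifted))                          # correct the noise
    return [ci ^ si for ci, si in zip(c, ss)]            # = w

w = [1, 0, 1, 1, 0, 1]                 # enrollment reading
ss = sketch(w)
w_noisy = w.copy(); w_noisy[4] ^= 1    # one flipped bit at authentication
assert recover(w_noisy, ss) == w
```

The repetition code here tolerates one flipped bit per 3-bit block; the published sketch leaks information about w, which is exactly the length loss the dissertation's observation seeks to avoid.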
Third, traditional fuzzy extractors provide information-theoretic security. However, security against computationally bounded adversaries is sufficient. We observe fuzzy extractors providing computational security can overcome limitations of traditional approaches.
The above observations are supported by negative results and constructions. As an example, we combine all three observations to construct a fuzzy extractor achieving properties that have eluded prior approaches. The construction remains secure even when the initial enrollment phase is repeated multiple times with noisy readings. Furthermore, for many practical sources, reliability demands that the tolerated noise is larger than the entropy of the original reading. The construction provides security for sources of this type by utilizing additional source structure, producing a consistent key without recovering the original reading, and providing computational security.
Theory and applications of hashing: report from Dagstuhl Seminar 17181
This report documents the program and the topics discussed at the 4-day Dagstuhl Seminar 17181 “Theory and Applications of Hashing”, which took place May 1–5, 2017. Four long and eighteen short talks covered a wide and diverse range of topics within the theme of the workshop. The program left sufficient space for informal discussions among the 40 participants.
The Security of Lazy Users in Out-of-Band Authentication
Faced with the threats posed by man-in-the-middle attacks, messaging platforms rely on "out-of-band" authentication, assuming that users have access to an external channel for authenticating one short value. For example, assuming that users who recognize each other's voice can authenticate a short value, Telegram and WhatsApp ask their users to compare such short values. The existing protocols, however, do not take into account the plausible behavior of users who may be "lazy" and only compare parts of these values (rather than their entirety).
Motivated by such security-critical user behavior, we study the security of lazy users in out-of-band authentication. We start by showing that both the protocol implemented by WhatsApp and the statistically-optimal protocol of Naor, Segev and Smith (CRYPTO '06) are completely vulnerable to man-in-the-middle attacks when the users consider only half of the out-of-band authenticated value. In this light, we put forward a framework that captures the behavior and security of lazy users. Our notions of security consider both statistical security and computational security, and for each flavor we derive a lower bound on the tradeoff between the number of positions that are considered by the lazy users and the adversary's forgery probability.
Within our framework we then provide two authentication protocols. First, in the statistical setting, we present a transformation that converts any out-of-band authentication protocol into one that is secure even when executed by lazy users. Instantiating our transformation with a new refinement of the protocol of Naor et al. results in a protocol whose tradeoff essentially matches our lower bound in the statistical setting. Then, in the computational setting, we show that the computationally-optimal protocol of Vaudenay (CRYPTO '05) is secure even when executed by lazy users, and that its tradeoff matches our lower bound in the computational setting.
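As a rough sanity check of why laziness hurts, the simulation below measures how often a uniformly random forgery is accepted when a user compares only some bit positions of the authenticated value. (The paper's actual attacks do far better than random guessing; this only illustrates the baseline loss.)

```python
import random

# Hypothetical toy model (not a protocol from the paper): a lazy user accepts
# a forged k-bit value if it matches the genuine value on the positions in
# `checked`.  Checking only half the bits raises a random forger's success
# rate from about 2**-k to about 2**-(k/2).

def forgery_rate(k, checked, trials=200_000):
    """Fraction of uniformly random forgeries a lazy verifier accepts."""
    mask = sum(1 << i for i in checked)
    hits = 0
    for _ in range(trials):
        v = random.getrandbits(k)      # genuine out-of-band value
        f = random.getrandbits(k)      # random forgery attempt
        hits += (v & mask) == (f & mask)
    return hits / trials

k = 8
print(forgery_rate(k, range(k)))        # full comparison: about 2**-8
print(forgery_rate(k, range(k // 2)))   # lazy (half):     about 2**-4
```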
Cryptographic error correction
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (leaves 67-71). It has been said that "cryptography is about concealing information, and coding theory is about revealing it." Despite these apparently conflicting goals, the two fields have common origins and many interesting relationships. In this thesis, we establish new connections between cryptography and coding theory in two ways: first, by applying cryptographic tools to solve classical problems from the theory of error correction; and second, by studying special kinds of codes that are motivated by cryptographic applications. In the first part of this thesis, we consider a model of error correction in which the source of errors is adversarial, but limited to feasible computation. In this model, we construct appealingly simple, general, and efficient cryptographic coding schemes which can recover from much larger error rates than schemes for classical models of adversarial noise. In the second part, we study collusion-secure fingerprinting codes, which are of fundamental importance in cryptographic applications like data watermarking and traitor tracing. We demonstrate tight lower bounds on the lengths of such codes by devising and analyzing a general collusive attack that works for any code. By Christopher Jason Peikert (Ph.D.).
On Resilience to Computable Tampering
Non-malleable codes, introduced by Dziembowski, Pietrzak, and Wichs (ICS 2010), provide a means of encoding information such that if the encoding is tampered with, the result encodes something either identical or completely unrelated. Unlike error-correcting codes (for which the result of tampering must always be identical), non-malleable codes give guarantees even when tampering functions are allowed to change every symbol of a codeword.
In this thesis, we will provide constructions of non-malleable codes secure against a variety of tampering classes with natural computational semantics:
• Bounded-Communication: functions corresponding to 2-party protocols in which each party receives half of the input and may communicate fewer than n/4 bits before returning its respective half of the tampered output.
• Local Functions (Juntas): each tampered output bit is a function of only n^(1-δ) input bits, where δ > 0 is any constant (the efficiency of our code depends on δ). This class includes NC⁰.
• Decision Trees: each tampered output bit is a function of n^(1/4 - o(1)) adaptively chosen input bits.
• Small-Depth Circuits: each tampered output bit is produced by a circuit of polynomial size and depth c·log(n)/log log(n), for some constant c. This class includes AC⁰.
• Low-Degree Polynomials: each tampered output field element is produced by a low-degree (relative to the field size) polynomial.
• Polynomial-Size Circuit Tampering: each tampered codeword is produced by a circuit of size n^c, where c is any constant (the efficiency of our code depends on c). This result assumes that E is hard for exponential-size nondeterministic circuits (all other results are unconditional).
We stress that our constructions are efficient (encoding and decoding can be performed in uniform polynomial time) and (with the exception of the last result, which assumes strong circuit lower bounds) enjoy unconditional, statistical security guarantees. We also illuminate some potential barriers to constructing codes for more complex computational classes from simpler assumptions.
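To see why non-malleability is a different goal from error correction, consider a deliberately trivial example (not a construction from the thesis): under the identity "encoding", even a single-bit tampering function yields a message that is related to the original, rather than identical or independent, which is exactly what a non-malleable code must rule out for its tampering class.

```python
# A trivially malleable "code": the identity map.  A 1-local tampering
# function (flip the lowest bit of the codeword) turns an encoding of m into
# an encoding of the closely related message m XOR 1 -- violating the
# identical-or-unrelated guarantee that non-malleable codes demand.

def encode(m: int) -> int:
    return m                # identity encoding: no protection at all

def decode(c: int) -> int:
    return c

def tamper(c: int) -> int:
    return c ^ 1            # flip the lowest bit of the codeword

m = 42
m_tampered = decode(tamper(encode(m)))
assert m_tampered == m ^ 1  # related to m: neither identical nor independent
```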
Multi-Instance Randomness Extraction and Security against Bounded-Storage Mass Surveillance
Consider a state-level adversary who observes and stores large amounts of encrypted data from all users on the Internet, but does not have the capacity to store it all. Later, it may target certain persons of interest in order to obtain their decryption keys. We would like to guarantee that, if the adversary's storage capacity is only a small fraction of the total encrypted data size, then even if it can later obtain the decryption keys of arbitrary users, it can only learn something about the contents of roughly that fraction of the ciphertexts, while the rest will maintain full security. This can be seen as an extension of incompressible cryptography (Dziembowski, CRYPTO '06; Guan, Wichs and Zhandry, EUROCRYPT '22) to the multi-user setting. We provide solutions in both the symmetric-key and public-key settings, with various trade-offs in terms of computational assumptions and efficiency.
As the core technical tool, we study an information-theoretic problem which we refer to as multi-instance randomness extraction. Suppose X_1, ..., X_t are correlated random variables whose total joint min-entropy rate is α, but we know nothing else about their individual entropies. We choose t random and independent seeds and attempt to individually extract some small amount of randomness from each X_i. We'd like to say that roughly an α-fraction of the extracted outputs should be indistinguishable from uniform, even given all the remaining extracted outputs and all the seeds. We show that this indeed holds for specific extractors based on Hadamard and Reed-Muller codes.
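The Hadamard extractor mentioned above outputs, on seed s, the inner product of x and s over GF(2). A minimal sketch of the multi-instance setup, with uniformly random stand-ins for the correlated variables and illustrative parameters:

```python
import secrets

# Minimal sketch of the Hadamard (inner-product) extractor: Ext(x; s) is the
# inner product of the bit-vectors x and s over GF(2).  Each instance gets
# its own independent seed; the paper's result is that roughly an
# alpha-fraction of the extracted bits is close to uniform.

def hadamard_extract(x: int, s: int) -> int:
    """Inner product of x and s as bit-vectors, mod 2."""
    return bin(x & s).count("1") % 2

n = 64                                             # bits per instance
t = 5                                              # number of instances
xs = [secrets.randbits(n) for _ in range(t)]       # stand-ins for X_1..X_t
seeds = [secrets.randbits(n) for _ in range(t)]    # independent seeds
outputs = [hadamard_extract(x, s) for x, s in zip(xs, seeds)]
print(outputs)                                     # t extracted bits
```

Real instantiations extract more than one bit per instance (e.g. via Reed-Muller codes); a single output bit keeps the sketch minimal.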
Incremental Program Obfuscation
Recent advances in program obfuscation suggest that it is possible to create
software that can provably safeguard secret information. However, software
systems usually contain large executable code that is updated multiple times
and sometimes very frequently. Freshly obfuscating the program for every small
update will lead to a considerable efficiency loss. Thus, an extremely
desirable property for obfuscation algorithms is incrementality: small changes
to the underlying program translate into small changes to the corresponding
obfuscated program.
We initiate a thorough investigation of incremental program obfuscation. We
show that the strong simulation-based notions of program obfuscation, such as
"virtual black-box" and "virtual grey-box" obfuscation, cannot be
incremental (according to our efficiency requirements) even for very simple
functions such as point functions. We then turn to the
indistinguishability-based notions, and present two security definitions of
varying strength --- namely, a weak one and a strong one. To understand the
overall strength of our definitions, we formulate the notion of incremental
best-possible obfuscation and show that it is equivalent to our strong
indistinguishability-based notion.
Finally, we present constructions for incremental program obfuscation
satisfying both our security notions. We first give a construction achieving
the weaker security notion based on the existence of general purpose
indistinguishability obfuscation. Next, we present a generic transformation
using oblivious RAM to amplify security from weaker to stronger, while
maintaining the incrementality property.
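Purely as an analogy for the incrementality property (this is block-wise hashing, not obfuscation): if a program is processed block by block, a small patch changes only the corresponding blocks of the output. Incremental obfuscation asks for the same locality from a cryptographically much stronger transformation.

```python
import hashlib

# Analogy only: process a program block-by-block so that editing one block
# changes only the corresponding block of the output.  The block size and
# use of SHA-256 are illustrative choices, not part of any scheme above.

BLOCK = 16

def process(program: bytes) -> list:
    """Hash each BLOCK-byte chunk of the program independently."""
    chunks = [program[i:i + BLOCK] for i in range(0, len(program), BLOCK)]
    return [hashlib.sha256(c).digest() for c in chunks]

old = b"x" * 64
new = bytearray(old); new[20] ^= 1     # a one-byte patch to the program

changed = sum(a != b for a, b in zip(process(old), process(bytes(new))))
print(changed)   # 1: only one of the four output blocks changed
```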