
    Doubly-Affine Extractors, and Their Applications

    In this work we challenge the common misconception that information-theoretic (IT) privacy is too impractical to be used in the real world: we propose to build simple and reusable IT-encryption solutions whose only efficiency penalty (compared to computationally-secure schemes) comes from a large secret key size, which is often a rather minor inconvenience, as storage is cheap. In particular, our solutions are stateless and locally computable at the optimal rate, meaning that honest parties do not maintain state and read only (optimally) small portions of their large keys with every use. Moreover, we also propose a novel architecture for outsourcing the storage of these long keys to a network of semi-trusted servers, trading the need to store large secrets for the assumption that it is hard to simultaneously compromise too many publicly accessible ad-hoc servers. Our architecture supports everlasting privacy and post-application security of the derived one-time keys, resolving two major limitations of a related model for outsourcing key storage, called the bounded storage model. Both of these results come from nearly optimal constructions of so-called doubly-affine extractors: locally-computable, seeded extractors Ext(X,S) which are linear functions of X (for any fixed seed S), and protect against bounded affine leakage on X. This holds unconditionally, even if (a) the affine leakage may adaptively depend on the extracted key R = Ext(X,S); and (b) the seed S is only computationally secure. Neither of these properties is possible with general-leakage extractors.
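
    As a toy illustration of the two structural properties, here is a sketch (my own, under assumed parameters; it is not the paper's construction and makes no claim about affine-leakage resilience): each output bit is the XOR of a few seed-selected bits of the long key X, so Ext(X,S) is linear over GF(2) in X for every fixed seed S, and only a small portion of X is ever read.

```python
import secrets

def local_linear_ext(x: bytes, seed_indices: list[list[int]]) -> list[int]:
    """Toy locally-computable linear extractor (illustrative sketch only).

    The seed fixes, for each output bit, a small set of positions of the long
    key X; that output bit is the XOR (a GF(2)-linear function) of those
    positions, so only a tiny fraction of X is read per use."""
    def bit(i: int) -> int:
        return (x[i // 8] >> (i % 8)) & 1
    return [sum(bit(i) for i in idx) & 1 for idx in seed_indices]

# Usage: a 1 MB key, 128 output bits, each touching only 64 positions of X.
X = secrets.token_bytes(1 << 20)
n_bits = 8 * len(X)
S = [[secrets.randbelow(n_bits) for _ in range(64)] for _ in range(128)]
R = local_linear_ext(X, S)            # derived one-time key bits
assert R == local_linear_ext(X, S)    # stateless: same seed, same key
```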

    Beating Shannon requires BOTH efficient adversaries AND non-zero advantage

    In this note we formally show a folklore (but, to the best of our knowledge, undocumented) fact: in order to beat the famous Shannon lower bound on key length for one-time-secure encryption, one must *simultaneously* restrict the attacker to be efficient and allow the attacker to break the system with some non-zero (i.e., negligible) probability. Despite being folklore, this result seems to lack a clean and simple proof; we were unable to find one even after asking several experts in the field. We hope that cryptography instructors will find this note useful when justifying the transition from information-theoretic to computational cryptography. We note that our proof cleanly handles *probabilistic* encryption, as well as a small *decryption error*.
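
    For reference, the bound and the two relaxations can be laid out as follows (a standard formulation, paraphrased, with the details in the note itself):

```latex
% A standard statement of the bound in question (my paraphrase, not a quote):
% perfect secrecy against unbounded attackers forces
\[
  H(K) \;\ge\; H(M),
\]
% i.e., the key must carry at least as much entropy as the message.  The
% note's point is that the bound survives if only one restriction is dropped:
% an unbounded attacker allowed advantage \varepsilon > 0 still forces long
% keys, and so does an efficient attacker restricted to advantage exactly 0;
% only both relaxations together permit keys shorter than the message.
```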

    Multiparty Quantum Coin Flipping

    We investigate coin-flipping protocols for multiple parties in a quantum broadcast setting: (1) We propose and motivate a definition for quantum broadcast; our model of the quantum broadcast channel is new. (2) We show that quantum broadcast is essentially a combination of pairwise quantum channels and a classical broadcast channel. This is a somewhat surprising conclusion, but it helps us in both our lower and upper bounds. (3) We provide tight upper and lower bounds on the optimal bias epsilon of a coin which can be flipped by k parties of which exactly g parties are honest: for any 1 <= g <= k, epsilon = 1/2 - Theta(g/k). Thus, as long as a constant fraction of the players are honest, they can prevent the coin from being fixed with at least a constant probability. This result stands in sharp contrast with the classical setting, where no non-trivial coin-flipping is possible when g <= k/2.
    Comment: v2: bounds now tight via a new protocol; to appear at the IEEE Conference on Computational Complexity 2004.
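
    In display form, the bound from item (3), together with one worked instantiation (the numeric reading is mine, following the abstract):

```latex
\[
  \varepsilon \;=\; \frac{1}{2} \;-\; \Theta\!\left(\frac{g}{k}\right),
  \qquad 1 \le g \le k .
\]
% Worked example: with g = k/3 honest parties, the bias is 1/2 - c for some
% constant c > 0, so both outcomes of the coin keep probability at least c;
% classically, g <= k/2 already allows the coin to be fixed outright.
```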

    Small-Box Cryptography

    One of the ultimate goals of symmetric-key cryptography is to find a rigorous theoretical framework for building block ciphers from small components, such as cryptographic S-boxes, and then argue why iterating such small components for sufficiently many rounds would yield a secure construction. Unfortunately, a fundamental obstacle towards reaching this goal comes from the fact that traditional security proofs cannot get security beyond 2^{-n}, where n is the size of the corresponding component. As a result, prior provably secure approaches - which we call "big-box cryptography" - always made n larger than the security parameter, which led to several problems: (a) the design was too coarse to really explain practical constructions, as (arguably) the most interesting design choices, which happen when instantiating such "big-boxes", were completely abstracted out; (b) the theoretically predicted number of rounds for the security of this approach was always dramatically smaller than in reality, where the "big-box" building block could not be made as ideal as required by the proof. For example, Even-Mansour (and, more generally, key-alternating) ciphers completely ignored the substitution-permutation network (SPN) paradigm, which is at the heart of most real-world implementations of such ciphers. In this work, we introduce a novel paradigm for justifying the security of existing block ciphers, which we call small-box cryptography. Unlike the "big-box" paradigm, it allows one to go much deeper inside existing block cipher constructions, by idealizing only a small (and, hence, realistic!) building block of very small size n, such as an 8-to-32-bit S-box. It then introduces a clean and rigorous mixture of proofs and hardness conjectures which allows one to lift traditional, and seemingly meaningless, "at most 2^{-n}" security proofs for reduced-round idealized variants of the existing block ciphers into meaningful, full-round security justifications of the actual ciphers used in the real world. We then apply our framework to the analysis of SPN ciphers (e.g., generalizations of AES), getting quite reasonable and plausible concrete hardness estimates for the resulting ciphers. We also apply our framework to the design of stream ciphers. Here, however, we focus on the simplicity of the resulting construction, for which we managed to find a direct "big-box"-style security justification, under the well-studied and widely believed eXact Linear Parity with Noise (XLPN) assumption. Overall, we hope that our work will initiate many follow-up results in the area of small-box cryptography.
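
    To ground the SPN terminology, here is a toy round in Python (an illustrative sketch only: the random 8-bit S-box and the byte-rotation diffusion layer are placeholder assumptions, far weaker than real designs such as AES):

```python
import secrets

rng = secrets.SystemRandom()

# Hypothetical 8-bit S-box: any fixed permutation of 0..255 serves the sketch;
# real designs (e.g. AES) pick S-boxes with strong cryptographic properties.
SBOX = list(range(256))
rng.shuffle(SBOX)

def spn_round(block: list[int], round_key: list[int]) -> list[int]:
    """One toy substitution-permutation round on a 16-byte block."""
    state = [b ^ k for b, k in zip(block, round_key)]   # 1. key mixing
    state = [SBOX[b] for b in state]                    # 2. small S-box layer
    return state[1:] + state[:1]                        # 3. (weak) diffusion

def toy_spn(block: list[int], round_keys: list[list[int]]) -> list[int]:
    for rk in round_keys:       # iterate the small component for many rounds
        block = spn_round(block, rk)
    return block

round_keys = [[rng.randrange(256) for _ in range(16)] for _ in range(10)]
ciphertext = toy_spn(list(b"0123456789abcdef"), round_keys)
```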

    Space-time tradeoffs for graph properties

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 83-84). By Yevgeniy Dodis.

    Randomness Condensers for Efficiently Samplable, Seed-Dependent Sources

    We initiate a study of randomness condensers for sources that are efficiently samplable but may depend on the seed of the condenser. That is, we seek functions $\mathsf{Cond} : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$ such that if we choose a random seed $S \leftarrow \{0,1\}^d$, and a source $X = A(S)$ is generated by a randomized circuit $A$ of size $t$ such that $X$ has min-entropy at least $k$ given $S$, then $\mathsf{Cond}(X;S)$ should have min-entropy at least some $k'$ given $S$. The distinction from the standard notion of randomness condensers is that the source $X$ may be correlated with the seed $S$ (but is restricted to be efficiently samplable). Randomness extractors of this type (corresponding to the special case where $k' = m$) have been implicitly studied in the past (by Trevisan and Vadhan, FOCS '00). We show that:
    - Unlike extractors, we can have randomness condensers for samplable, seed-dependent sources whose computational complexity is smaller than the size $t$ of the adversarial sampling algorithm $A$. Indeed, we show that sufficiently strong collision-resistant hash functions are seed-dependent condensers that produce outputs with min-entropy $k' = m - O(\log t)$, i.e., logarithmic entropy deficiency (a toy sketch follows this list).
    - Randomness condensers suffice for key derivation in many cryptographic applications: when an adversary has negligible success probability (or negligible "squared advantage" [3]) for a uniformly random key, we can instead use a key generated by a condenser whose output has logarithmic entropy deficiency.
    - Randomness condensers for seed-dependent samplable sources that are robust to side information generated by the sampling algorithm imply soundness of the Fiat-Shamir heuristic when applied to any constant-round, public-coin interactive proof system.
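
    A minimal sketch of the first bullet, assuming for illustration that SHA-256 is a sufficiently strong collision-resistant hash (the paper's actual statement is quantitative in the sampler size $t$):

```python
import hashlib, os

def condense(x: bytes, s: bytes) -> bytes:
    """Seed-dependent condenser sketch: hash the source together with the seed.

    Under the (illustrative) assumption that SHA-256 is a sufficiently strong
    collision-resistant hash, the output keeps min-entropy about m - O(log t)
    given the seed S, even when the source X was sampled depending on S."""
    return hashlib.sha256(s + x).digest()

S = os.urandom(16)     # public random seed
X = os.urandom(64)     # stand-in for X = A(S) with min-entropy >= k given S
key = condense(X, S)   # key with only logarithmic entropy deficiency
```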

    Unilaterally-Authenticated Key Exchange

    Key Exchange (KE), which enables two parties (e.g., a client and a server) to securely establish a common private key while communicating over an insecure channel, is one of the most fundamental cryptographic primitives. In this work, we address the setting of unilaterally-authenticated key exchange (UAKE), where an unauthenticated (unkeyed) client establishes a key with an authenticated (keyed) server. This setting is highly motivated by many practical uses of KE on the Internet, but has received relatively little attention so far. Unlike prior work, which defined UAKE by downgrading a relatively complex definition of mutually authenticated key exchange (MAKE), our definition follows the opposite approach of upgrading existing definitions of public key encryption (PKE) and signatures towards UAKE. As a result, our new definition is short and easy to understand. Nevertheless, we show that it is equivalent to the UAKE definition of Bellare-Rogaway (when downgraded from MAKE), and thus captures a very strong and widely adopted security notion, while looking very similar to the simple ``one-oracle'' definition of traditional PKE/signature schemes. As a benefit of our intuitive framework, we show two exactly-as-you-expect (i.e., having none of the caveats so abundant in the KE literature!) UAKE protocols from (possibly interactive) signature and encryption. By plugging in various one- or two-round signature and encryption schemes, we derive provably-secure variants of various well-known UAKE protocols (such as a unilateral variant of SKEME with and without perfect forward secrecy, and Shoup's A-DHKE-1), as well as new protocols, such as the first 2-round UAKE protocol which is both (passively) forward deniable and forward-secure. To further clarify the intuitive connections between PKE/signatures and UAKE, we define and construct stronger forms of (necessarily interactive) PKE/signature schemes, called confirmed encryption and confidential authentication, which, respectively, allow the sender to obtain confirmation that the (keyed) receiver output the correct message, or to hide the content of the message being authenticated from anybody but the participating (unkeyed) receiver. Using confirmed PKE/confidential authentication, we obtain two concise UAKE protocols of the form: ``send a confirmed encryption/confidential authentication of a random key $K$.''
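
    A minimal sketch of the closing template (``send a confirmed encryption of a random key $K$''), written with the third-party Python `cryptography` package; the SKEME-like message flow and the confirmation label are illustrative assumptions of mine, not the paper's exact protocol:

```python
import hmac, hashlib, os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The server (the authenticated party) holds a long-term key pair.
server_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_pk = server_sk.public_key()

# Round 1 (client -> server): encrypt a fresh session key K under the server pk.
K = os.urandom(32)
ct = server_pk.encrypt(K, OAEP)

# Round 2 (server -> client): decrypt, then send a key-confirmation tag
# proving the keyed server recovered the same K.
K_server = server_sk.decrypt(ct, OAEP)
conf = hmac.new(K_server, b"server-confirm" + ct, hashlib.sha256).digest()

# Client verifies the confirmation; both sides now share K.
expected = hmac.new(K, b"server-confirm" + ct, hashlib.sha256).digest()
assert hmac.compare_digest(conf, expected)
```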

    Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

    We provide formal definitions and efficient secure techniques for:
    - turning noisy information into keys usable for any cryptographic application, and, in particular,
    - reliably and securely authenticating biometric data.
    Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a "fuzzy extractor" reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A "secure sketch" produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of ``closeness'' of input data, such as Hamming distance, edit distance, and set difference.
    Comment: 47 pp., 3 figures. Prelim. version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos.
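
    One standard instantiation of a secure sketch for Hamming distance is the code-offset construction; the toy Python version below uses a repetition code purely for illustration (practical parameters call for stronger codes such as BCH):

```python
import random

T = 3                       # repetition factor; corrects (T-1)//2 flips per block
def encode(bits):           # toy repetition code C (real schemes use e.g. BCH)
    return [b for b in bits for _ in range(T)]
def decode(bits):           # majority decoding back to the message
    return [int(sum(bits[i*T:(i+1)*T]) > T // 2) for i in range(len(bits) // T)]

def sketch(w):
    """Code-offset secure sketch: SS(w) = w XOR C(r) for a uniformly random r."""
    r = [random.getrandbits(1) for _ in range(len(w) // T)]
    return [wi ^ ci for wi, ci in zip(w, encode(r))]

def recover(w_noisy, ss):
    """Rec(w', ss): shift by ss, decode to the nearest codeword, shift back."""
    c = encode(decode([wi ^ si for wi, si in zip(w_noisy, ss)]))
    return [ci ^ si for ci, si in zip(c, ss)]

w = [random.getrandbits(1) for _ in range(60)]   # enrollment reading
ss = sketch(w)                                   # public helper data
w2 = list(w); w2[7] ^= 1                         # later reading, one bit flipped
assert recover(w2, ss) == w                      # exact recovery of original w
```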

    Online Linear Extractors for Independent Sources

    In this work, we characterize online linear extractors. In other words, given a matrix $A \in \mathbb{F}_2^{n \times n}$, we study the convergence of the iterated process $\mathbf{S} \leftarrow A\mathbf{S} \oplus \mathbf{X}$, where $\mathbf{X} \sim D$ is repeatedly sampled independently from some fixed (but unknown) distribution $D$ with min-entropy at least $k$. Here, we think of $\mathbf{S} \in \{0,1\}^n$ as the state of an online extractor, and $\mathbf{X} \in \{0,1\}^n$ as its input. As our main result, we show that the state $\mathbf{S}$ converges to the uniform distribution for all input distributions $D$ with entropy $k > 0$ if and only if the matrix $A$ has no non-trivial invariant subspace (i.e., a non-zero subspace $V \subsetneq \mathbb{F}_2^n$ such that $AV \subseteq V$). In other words, a matrix $A$ yields an online linear extractor if and only if $A$ has no non-trivial invariant subspace. For example, the linear transformation corresponding to multiplication by a generator of the field $\mathbb{F}_{2^n}$ yields a good online linear extractor. Furthermore, for any such matrix, convergence takes at most $\widetilde{O}(n^2(k+1)/k^2)$ steps. We also study the more general notion of condensing---that is, we ask when this process converges to a distribution with entropy at least $\ell$ when the input distribution has entropy greater than $k$. (Extractors correspond to the special case when $\ell = n$.) We show that a matrix gives a good condenser if there are relatively few vectors $\mathbf{w} \in \mathbb{F}_2^n$ such that $\mathbf{w}, A^T\mathbf{w}, \ldots, (A^T)^{n-k-1}\mathbf{w}$ are linearly dependent. As an application, we show that the very simple cyclic rotation transformation $A(x_1,\ldots,x_n) = (x_n,x_1,\ldots,x_{n-1})$ condenses to $\ell = n-1$ bits for any $k > 1$ if $n$ is a prime satisfying a certain simple number-theoretic condition. Our proofs are Fourier-analytic and rely on a novel lemma, which gives a tight bound on the product of certain Fourier coefficients of any entropic distribution.
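
    An empirical companion to the cyclic-rotation example (an illustrative experiment of my own; $n = 11$ is assumed to satisfy the abstract's number-theoretic condition, which the code does not check):

```python
import random

N = 11                       # state size n (a prime, per the abstract's example)
MASK = (1 << N) - 1

def rotate(s: int) -> int:
    """Cyclic rotation A(x_1,...,x_n) = (x_n, x_1, ..., x_{n-1})."""
    return ((s << 1) & MASK) | (s >> (N - 1))

def update(s: int, x: int) -> int:
    return rotate(s) ^ x     # one step of S <- A S xor X

# A source whose samples all have even parity: rotation fixes the all-ones
# vector (a non-trivial invariant subspace), so parity(S) never changes and
# rotation cannot be a full extractor.
def sample() -> int:
    return random.choice([0b000, 0b011])

s, seen = 0, set()
for _ in range(500_000):
    s = update(s, sample())
    seen.add(s)

# The state still spreads over the whole even-parity subspace, illustrating
# condensing to n - 1 bits for the rotation transformation.
print(len(seen), "states reached;", 2 ** (N - 1), "even-parity states exist")
```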

    Multicast Key Agreement, Revisited

    Multicast Key Agreement (MKA) is a long-overlooked natural primitive of large practical interest. In traditional MKA, an omniscient group manager privately distributes secrets over an untrusted network to a dynamically-changing set of group members. The group members are thus able to derive shared group secrets across time, with the main security requirement being that only current group members can derive the current group secret. There indeed exist very efficient MKA schemes in the literature that utilize symmetric-key cryptography. However, they lack formal security analyses, efficiency analyses regarding dynamically changing groups, and more modern, robust security guarantees regarding user state leakages: forward secrecy (FS) and post-compromise security (PCS). The former ensures that group secrets prior to state leakage remain secure, while the latter ensures that after such leakages, users can quickly recover security of group secrets via normal protocol operations. More modern Secure Group Messaging (SGM) protocols allow a group of users to asynchronously and securely communicate with each other, as well as add and remove each other from the group. SGM has received significant attention recently, including in an effort by the IETF Messaging Layer Security (MLS) working group to standardize an eponymous protocol. However, the group key agreement primitive at the core of SGM protocols, Continuous Group Key Agreement (CGKA), achieved by the TreeKEM protocol in MLS, suffers from bad worst-case efficiency and heavily relies on less efficient (than symmetric-key cryptography) public-key cryptography. We thus propose that in the special case of a group membership change policy that allows a single member to perform all group additions and removals, an upgraded version of classical Multicast Key Agreement (MKA) may serve as a more efficient substitute for CGKA in SGM. We therefore present rigorous, stronger MKA security definitions that provide increasing levels of security in the case of both user and group manager state leakage, and that are suitable for modern applications, such as SGM. We then construct a formally secure MKA protocol with strong efficiency guarantees for dynamic groups. Finally, we run experiments which show that the left-balanced binary tree structure used in TreeKEM can be replaced with red-black trees in MKA for better efficiency.
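
    For intuition about why symmetric-key multicast key trees are cheap, here is a toy LKH-style key tree of my own (a classical construction of the kind the efficient schemes above build on; it omits the FS/PCS mechanics that the paper's upgraded MKA adds):

```python
import os

class KeyTree:
    """Toy LKH-style multicast key tree: the manager knows every node key and
    member `leaf` holds exactly the keys on its leaf-to-root path."""

    def __init__(self, n_leaves: int):
        self.n = n_leaves                           # members at nodes n..2n-1
        self.key = {i: os.urandom(16) for i in range(1, 2 * n_leaves)}

    def path(self, leaf: int) -> list[int]:
        node, p = self.n + leaf, []
        while node >= 1:
            p.append(node)
            node //= 2
        return p                                    # leaf, parent, ..., root

    def remove(self, leaf: int) -> list[tuple[int, int]]:
        """Refresh every key the leaver knew; return the update messages as
        (wrapping_key_node, refreshed_node) pairs standing in for ciphertexts."""
        path, msgs = self.path(leaf), []
        for i, node in enumerate(path[1:], start=1):
            self.key[node] = os.urandom(16)         # fresh key for this node
            for child in (2 * node, 2 * node + 1):
                if i == 1 and child == path[0]:
                    continue                        # never wrap to the leaver
                msgs.append((child, node))
        return msgs

tree = KeyTree(1024)
print(len(tree.remove(5)), "ciphertexts to remove 1 of 1024 members (~2 log n)")
```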