40 research outputs found

    On Extractors and Exposure-Resilient Functions for Sublogarithmic Entropy

    We study deterministic extractors for oblivious bit-fixing sources (a.k.a. resilient functions) and exposure-resilient functions with small min-entropy: of the function's n input bits, k << n bits are uniformly random and unknown to the adversary. We simplify and improve an explicit construction of extractors for bit-fixing sources with sublogarithmic k due to Kamp and Zuckerman (SICOMP 2006), achieving error exponentially small in k rather than polynomially small in k. Our main result is that when k is sublogarithmic in n, the short output length of this construction (O(log k) output bits) is optimal for extractors computable by a large class of space-bounded streaming algorithms. Next, we show that a random function is an extractor for oblivious bit-fixing sources with high probability if and only if k is superlogarithmic in n, suggesting that our main result may apply more generally. In contrast, we show that a random function is a static (resp. adaptive) exposure-resilient function with high probability even if k is as small as a constant (resp. log log n). No explicit exposure-resilient functions achieving these parameters are known.
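
    As a rough illustration of the kind of extractor discussed here, the sketch below takes a walk on an odd cycle, stepping +1 or -1 per input bit, in the style of the Kamp-Zuckerman cycle-walk construction: it is a one-pass, O(log d)-space streaming computation with roughly log2(d) output bits. It is only a toy under assumed parameters (the cycle length d and the toy source below), not the improved construction from the paper.

```python
# Illustrative sketch (not the paper's construction): a cycle-walk
# extractor in the style of Kamp and Zuckerman.  The input is read in
# one pass, taking a +/-1 step on an odd cycle Z_d for each bit, so it
# is computable by an O(log d)-space streaming algorithm and outputs
# about log2(d) bits.
import random

def cycle_walk_extractor(bits, d=5):
    """bits: iterable of 0/1; d: odd cycle length (assumed parameter)."""
    assert d % 2 == 1, "cycle length must be odd"
    v = 0
    for b in bits:
        v = (v + 1) % d if b else (v - 1) % d
    return v  # one of d possible outputs, i.e. ~log2(d) output bits

def sample_bit_fixing_source(n=32, k=4, free_positions=None):
    """Toy oblivious bit-fixing source: all bits fixed to 0 except k
    positions whose values are uniform (the positions themselves are fixed)."""
    free = free_positions if free_positions is not None else range(k)
    x = [0] * n
    for i in free:
        x[i] = random.randint(0, 1)
    return x

print(cycle_walk_extractor(sample_bit_fixing_source()))
```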

    Revisiting Shared Data Protection Against Key Exposure

    This paper sheds new light on secure data storage inside distributed systems. Specifically, it revisits computational secret sharing in a situation where the encryption key is exposed to an attacker. It comes with several contributions: First, it defines a security model for encryption schemes, where we ask for additional resilience against exposure of the encryption key. Precisely, we ask for (1) indistinguishability of plaintexts under full ciphertext knowledge, and (2) indistinguishability for an adversary who learns the encryption key plus all but one share of the ciphertext. (2) relaxes the "all-or-nothing" property to a more realistic setting, where the ciphertext is transformed into a number of shares such that the adversary cannot access one of them. (1) asks that, unless the user's key is disclosed, no one other than the user can retrieve information about the plaintext. Second, it introduces a new computationally secure encryption-then-sharing scheme that protects the data in the previously defined attacker model. It consists of data encryption followed by a linear transformation of the ciphertext, then its fragmentation into shares, along with secret sharing of the randomness used for encryption. The computational overhead in addition to data encryption is reduced by half with respect to the state of the art. Third, it provides for the first time cryptographic proofs in this context of key exposure. It emphasizes that the security of our scheme relies only on a simple cryptanalysis-resilience assumption for block ciphers in public-key mode: indistinguishability from random of the sequence of differentials of a random value. Fourth, it provides an alternative scheme relying on the more theoretical random permutation model. It consists of encrypting with sponge functions in duplex mode and then, as before, secret-sharing the randomness.
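
    To make the attacker model concrete, here is a minimal toy sketch (not the paper's scheme) of an encrypt-then-share pipeline: the ciphertext and the encryption randomness are each XOR-shared n-out-of-n, so an adversary knowing the key and all but one share learns nothing, at the cost of n-fold storage expansion, which is exactly the inefficiency the paper's linear-transformation-based fragmentation avoids. The SHA-256 keystream, the share count, and all function names are assumptions made for a self-contained example.

```python
# Toy sketch of the encrypt-then-share attacker model, NOT the paper's
# space-efficient scheme: the ciphertext and the encryption randomness
# (the nonce) are each additively (XOR) shared n-out-of-n, so an
# adversary who knows the key and all but one share learns nothing
# about the plaintext, at the cost of n-fold storage expansion.
# Assumption: a SHA-256 keystream stands in for a block cipher in
# counter mode so the sketch has no external dependencies.
import hashlib, os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def keystream(key, nonce, length):
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def xor_share(secret, n):
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor(last, s)
    return shares + [last]          # XOR of all n shares gives back `secret`

def encrypt_then_share(key, plaintext, n=3):
    nonce = os.urandom(16)
    ct = xor(plaintext, keystream(key, nonce, len(plaintext)))
    return list(zip(xor_share(ct, n), xor_share(nonce, n)))

def reconstruct(key, fragments):
    ct = b"\x00" * len(fragments[0][0])
    nonce = b"\x00" * len(fragments[0][1])
    for c_share, n_share in fragments:
        ct, nonce = xor(ct, c_share), xor(nonce, n_share)
    return xor(ct, keystream(key, nonce, len(ct)))

key = os.urandom(16)
frags = encrypt_then_share(key, b"revisiting shared data protection")
assert reconstruct(key, frags) == b"revisiting shared data protection"
```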

    On Security Properties of All-or-nothing Transforms

    All-or-nothing transforms (AONTs) have been defined as bijective mappings on all s-tuples over a specified finite alphabet. These mappings are required to satisfy certain "perfect security" conditions specified using entropies of the probability distribution defined on the input s-tuples. Alternatively, purely combinatorial definitions of AONTs have been given, which involve certain kinds of "unbiased arrays". However, the combinatorial definition makes no reference to probability distributions. In this paper, we examine the security provided by AONTs that satisfy the combinatorial definition. The security of the AONT can depend on the underlying probability distribution of the s-tuples. We show that perfect security is obtained from an AONT if and only if the input s-tuples are equiprobable. However, in the case where the input s-tuples are not equiprobable, we still achieve a weaker security guarantee. We also consider the use of randomized AONTs to provide perfect security for a smaller number of inputs, even when those inputs are not equiprobable.
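
    For background, the sketch below gives a simplified rendering of Rivest's classical package transform, a randomized AONT in which a fresh random key masks the blocks and is itself hidden behind a hash of all pseudo-blocks. It is only an illustration of the all-or-nothing idea, not the combinatorial (unbiased-array) AONTs analysed in the paper, and the use of SHA-256 as both block function and hash is an assumption made to keep the example dependency-free.

```python
# Simplified sketch of Rivest's "package transform", a classical
# randomized AONT, shown only as background illustration.
# Assumption: SHA-256 serves as both the keyed block function and the
# block hash so the sketch stays dependency-free.
import hashlib, os

BLOCK = 32  # bytes per block

def prf(key, i):
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def package(blocks):
    key = os.urandom(BLOCK)                       # fresh randomness per call
    out = [xor(m, prf(key, i)) for i, m in enumerate(blocks)]
    digest = hashlib.sha256(b"".join(out)).digest()
    out.append(xor(key, digest))                  # final block hides the key
    return out

def unpackage(pseudo_blocks):
    *body, last = pseudo_blocks
    key = xor(last, hashlib.sha256(b"".join(body)).digest())
    return [xor(c, prf(key, i)) for i, c in enumerate(body)]

msg = [os.urandom(BLOCK) for _ in range(4)]
assert unpackage(package(msg)) == msg             # all blocks => full recovery
# Missing even one pseudo-block leaves the key, and hence every block, hidden.
```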

    Polynomial-Time, Semantically-Secure Encryption Achieving the Secrecy Capacity

    In the wiretap channel setting, one aims to get information-theoretic privacy of communicated data based only on the assumption that the channel from sender to receiver is noisier than the one from sender to adversary. The secrecy capacity is the optimal (highest possible) rate of a secure scheme, and the existence of schemes achieving it has been shown. For thirty years, the ultimate and unreached goal has been to achieve this optimal rate with a scheme that is polynomial-time. (This means both encryption and decryption are proven to be polynomial-time algorithms.) This paper finally delivers such a scheme. In fact it does more. Our scheme not only meets the classical notion of security from the wiretap literature, called MIS-R (mutual information security for random messages), but achieves the strictly stronger notion of semantic security, thus delivering more in terms of security without loss of rate.
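
    For a concrete feel for the rate being targeted, the snippet below evaluates the well-known secrecy-capacity formula for the special case of a degraded binary symmetric wiretap channel, Cs = h(q) - h(p), where the receiver sees BSC(p) and the adversary sees BSC(q) with q > p; the crossover probabilities are assumed values chosen only for the example.

```python
# Illustrative numbers only: for the special case where the receiver's
# channel is BSC(p) and the adversary's (degraded) channel is BSC(q)
# with q > p, the secrecy capacity is h(q) - h(p) bits per channel use.
from math import log2

def h(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

p, q = 0.05, 0.20           # receiver noise vs. adversary noise (assumed)
print(h(q) - h(p))          # ~0.435 bits per channel use achievable with secrecy
```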

    Leakage-Resilient Cryptography

    We construct a stream-cipher SC whose \emph{implementation} is secure even if arbitrary (adversarially chosen) information on the internal state of SC is leaked during computation. This captures \emph{all} possible side-channel attacks on SC where the amount of information leaked in a given period is bounded, but overall can be arbitrarily large, in particular much larger than the internal state of SC. The only other assumption we make on the \emph{implementation} of SC is that only data that is accessed during computation leaks information. The construction can be based on any pseudorandom generator, and the only computational assumption we make is that this PRG is secure against non-uniform adversaries in the classical sense (i.e. when there are no side-channels). The stream-cipher SC generates its output in chunks $K_1, K_2, \ldots$, and arbitrary but bounded information leakage is modeled by allowing the adversary to adaptively choose a function $f_\ell : \{0,1\}^* \rightarrow \{0,1\}^\lambda$ before $K_\ell$ is computed; she then gets $f_\ell(\tau_\ell)$, where $\tau_\ell$ is the internal state of SC that is accessed during the computation of $K_\ell$. One notion of security we prove for SC is that $K_\ell$ is indistinguishable from random when given $K_1, \ldots, K_{\ell-1}$, $f_1(\tau_1), \ldots, f_{\ell-1}(\tau_{\ell-1})$ and also the complete internal state of SC after $K_{\ell+1}$ has been computed (i.e. our cipher is forward-secure). The construction is based on alternating extraction (previously used in the intrusion-resilient secret-sharing scheme from FOCS'07). We move this concept to the computational setting by proving a lemma that states that the output of any PRG has high HILL pseudoentropy (i.e. is indistinguishable from some distribution with high min-entropy) even if arbitrary information about the seed is leaked. The amount of leakage $\lambda$ that we can tolerate in each step depends on the strength of the underlying PRG; it is at least logarithmic, but can be as large as a constant fraction of the internal state of SC if the PRG is exponentially hard.
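
    The toy sketch below illustrates only the leakage interface and the alternating-access idea; it is not the construction proved secure in the paper. The state is split into two halves, each round touches exactly one of them, and the adversary's adaptively chosen leakage function is applied only to the accessed half (the "only computation leaks" assumption). SHA-256 stands in for the PRG and the leakage bound LAMBDA is an assumed parameter.

```python
# Toy illustration of the leakage model and the alternating-access
# structure, NOT the construction proved secure in the paper.
# SHA-256 stands in for the PRG; LAMBDA is the per-round leakage bound;
# the leakage function sees only the half of the state accessed in that
# round ("only computation leaks").
import hashlib, os

LAMBDA = 8  # bits of leakage tolerated per round (assumed parameter)

class ToyLeakyStreamCipher:
    def __init__(self):
        # the state is split in two halves; each round touches only one
        self.halves = [os.urandom(32), os.urandom(32)]
        self.round = 0

    def next_chunk(self, leak_fn):
        idx = self.round % 2                                 # alternate halves
        accessed = self.halves[idx]
        leakage = leak_fn(accessed) & ((1 << LAMBDA) - 1)    # at most LAMBDA bits
        out = hashlib.sha256(b"out" + accessed).digest()     # output chunk K_l
        self.halves[idx] = hashlib.sha256(b"upd" + accessed).digest()  # refresh
        self.round += 1
        return out, leakage

sc = ToyLeakyStreamCipher()
# adaptively (adversarially) chosen leakage functions on the accessed half
k1, leak1 = sc.next_chunk(lambda state: int.from_bytes(state, "big"))
k2, leak2 = sc.next_chunk(lambda state: state[0])
```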

    Multitask Efficiencies in the Decision Tree Model

    In Direct Sum problems [KRW], one tries to show that for a given computational model, the complexity of computing a collection of finite functions on independent inputs is approximately the sum of their individual complexities. In this paper, by contrast, we study the diversity of ways in which the joint computational complexity can behave when all the functions are evaluated on a common input. We focus on the deterministic decision tree model, with depth as the complexity measure; in this model we prove a result to the effect that the 'obvious' constraints on joint computational complexity are essentially the only ones. The proof uses an intriguing new type of cryptographic data structure called a `mystery bin' which we construct using a small polynomial separation between deterministic and unambiguous query complexity shown by Savicky. We also pose a variant of the Direct Sum Conjecture of [KRW] which, if proved for a single family of functions, could yield an analogous result for models such as the communication model. Comment: Improved exposition based on conference version.
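
    To make the complexity measure concrete, the brute-force sketch below computes the deterministic decision-tree depth D(f) of a small Boolean function by trying every possible next query; it is exponential-time and meant only for tiny n, and all function names are assumptions introduced for the example.

```python
# Brute-force illustration of the complexity measure used in the paper:
# the deterministic decision-tree depth D(f) of a Boolean function,
# computed by trying every next query (exponential time, small n only).
from itertools import product

def dt_depth(f, n):
    """f maps n-bit tuples to 0/1; returns the minimum worst-case #queries."""
    inputs = list(product((0, 1), repeat=n))

    def solve(fixed):
        # restrict f to inputs consistent with the queries made so far
        live = [x for x in inputs if all(x[i] == v for i, v in fixed.items())]
        if len({f(x) for x in live}) <= 1:
            return 0                      # f is constant here: no more queries
        best = n
        for i in range(n):                # try each possible next query
            if i in fixed:
                continue
            cost = 1 + max(solve({**fixed, i: 0}), solve({**fixed, i: 1}))
            best = min(best, cost)
        return best

    return solve({})

# Example: 3-bit OR and 3-bit XOR both need all 3 queries in the worst case.
print(dt_depth(lambda x: int(any(x)), 3))   # 3
print(dt_depth(lambda x: sum(x) % 2, 3))    # 3
```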

    Public-Key Encryption Schemes with Auxiliary Inputs

    7th Theory of Cryptography Conference, TCC 2010, Zurich, Switzerland, February 9-11, 2010. Proceedings.
    We construct public-key cryptosystems that remain secure even when the adversary is given any computationally uninvertible function of the secret key as auxiliary input (even one that may reveal the secret key information-theoretically). Our schemes are based on the decisional Diffie-Hellman (DDH) and the Learning with Errors (LWE) problems. As an independent technical contribution, we extend the Goldreich-Levin theorem to provide a hard-core (pseudorandom) value over large fields.
    Funding: National Science Foundation (U.S.) (Grants CCF-0514167, CCF-0635297, NSF-0729011); Israel Science Foundation (700/08); Chais Family Fellows Program.
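
    As background for the Goldreich-Levin extension mentioned above, the sketch below shows the classical hard-core predicate over GF(2), hc(x, r) = <x, r> mod 2; the paper's actual contribution, a hard-core value over large fields, is not reproduced here, and the names and parameters are assumptions for the example.

```python
# Background sketch: the classical Goldreich-Levin hard-core predicate
# over GF(2), hc(x, r) = <x, r> mod 2.  The paper extends this to a
# hard-core value over large fields, which this toy does not show.
import os

def gl_hardcore_bit(x_bits, r_bits):
    """Inner product of x and r modulo 2."""
    return sum(a & b for a, b in zip(x_bits, r_bits)) % 2

n = 16
x = [b % 2 for b in os.urandom(n)]   # secret bits (e.g. derived from a secret key)
r = [b % 2 for b in os.urandom(n)]   # public randomness
print(gl_hardcore_bit(x, r))
```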