
    Limits to Non-Malleability

    There have been many successes in constructing explicit non-malleable codes for various classes of tampering functions in recent years, and strong existential results are also known. In this work we ask the following question: when can we rule out the existence of a non-malleable code for a tampering class $\mathcal{F}$? First, we start with some classes where positive results are well known, and show that when these classes are extended in a natural way, non-malleable codes are no longer possible. Specifically, we show that no non-malleable codes exist for any of the following tampering classes:
    - functions that change $d/2$ symbols, where $d$ is the distance of the code;
    - functions where each input symbol affects only a single output symbol;
    - functions where each of the $n$ output bits is a function of $n - \log n$ input bits.
    Furthermore, we rule out constructions of non-malleable codes for certain classes $\mathcal{F}$ via reductions (to the assumption that a distributional problem is hard for $\mathcal{F}$) that make black-box use of the tampering functions in the proof. In particular, this yields concrete obstacles to the construction of efficient codes for NC, even assuming average-case variants of P $\neq$ NC.
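
    For context, here is a minimal sketch of the standard tampering experiment that non-malleability refers to; the notation is the conventional one from the non-malleable codes literature and is not quoted from this abstract.

```latex
% Tampering experiment for a coding scheme (Enc, Dec) and a tampering function f in F.
% Non-malleability asks that the outcome be simulatable without knowing m, up to the
% special symbol "same" indicating that the codeword was left unchanged.
\[
  \mathrm{Tamper}^{f}_{m}:\quad
  c \leftarrow \mathrm{Enc}(m),\qquad
  \tilde{c} = f(c),\qquad
  \text{output}~
  \begin{cases}
    \mathsf{same} & \text{if } \tilde{c} = c,\\[2pt]
    \mathrm{Dec}(\tilde{c}) & \text{otherwise.}
  \end{cases}
\]
```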

    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: we show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one that cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. In particular, this shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., to show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives under an assumption that is no stronger than the one used to derandomize AM. We observe that Cai's proof that $S_2^P \subseteq \mathrm{ZPP}^{\mathrm{NP}}$ and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption weaker than the assumption previously known to suffice.
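
    A compact, informal restatement of the “boosting” step described above; the circuit-class notation is added here for readability and is not taken verbatim from the abstract.

```latex
% Hardness against poly-size nondeterministic circuits boosts to hardness against
% poly-size circuits with non-adaptive NP oracle gates.
\[
  \exists L \in \mathrm{EXP}:\; L \notin \mathsf{NSIZE}(\mathrm{poly})
  \;\Longrightarrow\;
  \exists L' \in \mathrm{EXP}:\; L' \notin \mathsf{SIZE}^{\mathrm{NP}_{\parallel}}(\mathrm{poly}).
\]
```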

    Pseudorandomness and the Minimum Circuit Size Problem

    Randomness Extraction in AC0 and with Small Locality

    Randomness extractors, which extract high-quality (almost-uniform) random bits from biased random sources, are important objects both in theory and in practice. While there has been significant progress in obtaining near-optimal constructions of randomness extractors in various settings, the computational complexity of randomness extractors is still much less studied. In particular, it is not clear whether randomness extractors with good parameters can be computed in several interesting complexity classes that are much weaker than P. In this paper we study randomness extractors in the following two models of computation: (1) constant-depth circuits (AC0), and (2) the local computation model. Previous work in these models, such as [Vio05a], [GVW15] and [BG13], only achieves constructions with weak parameters. In this work we give explicit constructions of randomness extractors with much better parameters. As an application, we use our AC0 extractors to study pseudorandom generators in AC0, and show that we can construct both cryptographic pseudorandom generators (under reasonable computational assumptions) and unconditional pseudorandom generators for space-bounded computation with very good parameters. Our constructions combine several previous techniques in randomness extraction and introduce new techniques to reduce or preserve the complexity of extractors, which may be of independent interest. These include (1) a general way to reduce the error of strong seeded extractors while preserving the AC0 property and small locality, and (2) a seeded randomness condenser with small locality. (Comment: 62 pages.)
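
    As background, here is a minimal statement of the strong seeded-extractor guarantee that the parameters above refer to; this is the standard definition, not text from the abstract.

```latex
% Ext : {0,1}^n x {0,1}^d -> {0,1}^m is a strong (k, eps)-extractor if, for every
% source X on {0,1}^n with min-entropy at least k, the output is close to uniform
% even given the seed.
\[
  H_{\infty}(X) \ge k
  \;\Longrightarrow\;
  \bigl(U_d,\ \mathrm{Ext}(X, U_d)\bigr) \;\approx_{\varepsilon}\; \bigl(U_d,\ U_m\bigr).
\]
```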

    Using Simon's Algorithm to Attack Symmetric-Key Cryptographic Primitives

    We present new connections between quantum information and the field of classical cryptography. In particular, we provide examples where Simon's algorithm can be used to show the insecurity of commonly used symmetric-key cryptographic primitives. Specifically, these examples consist of a quantum distinguisher for the 3-round Feistel network and a forgery attack on CBC-MAC which forges a tag for a chosen-prefix message, querying only other messages (of the same length). We assume that an adversary has quantum-oracle access to the respective classical primitives. Similar results have been achieved recently in independent work by Kaplan et al. Our findings shed new light on the post-quantum security of cryptographic schemes and underline that classical security proofs of cryptographic constructions need to be revisited in light of quantum attackers. (Comment: 14 pages, 2 figures. v3: final polished version, more formal definitions added.)
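
    The sketch below is a purely classical toy check of the hidden-period structure that the CBC-MAC attack exploits; the 8-bit "block cipher" and the block values a0/a1 are made-up stand-ins, and the real attack uses quantum-oracle access so that Simon's algorithm can recover the period.

```python
# Classical illustration only: verify that f(b, x) = CBC-MAC(a_b || x) has the period
# s = 1 || (E(a0) XOR E(a1)), which is what Simon's algorithm would recover quantumly.
import random

BITS = 8
rng = random.Random(2024)                  # fixed seed plays the role of the secret key
perm = list(range(2 ** BITS))
rng.shuffle(perm)                          # a random permutation models the block cipher E_k

def E(x: int) -> int:                      # toy block cipher E_k
    return perm[x]

def cbc_mac(blocks: list[int]) -> int:     # textbook CBC-MAC over the toy cipher
    state = 0
    for b in blocks:
        state = E(state ^ b)
    return state

# Two fixed, distinct first blocks chosen by the attacker.
a0, a1 = 0x0F, 0xF0

def f(b: int, x: int) -> int:
    """f(b, x) = CBC-MAC(a_b || x): the two-to-one function fed to Simon's algorithm."""
    return cbc_mac([a0 if b == 0 else a1, x])

# Flipping b and shifting x by E(a0) ^ E(a1) never changes the tag; knowing this shift
# lets the attacker turn any valid tag into a forgery on a different message.
s = E(a0) ^ E(a1)
assert all(f(0, x) == f(1, x ^ s) for x in range(2 ** BITS))
print("period verified: s =", format(s, "08b"))
```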

    Cloud Data Auditing Using Proofs of Retrievability

    Cloud servers offer a data outsourcing facility to their clients. A client outsources her data without keeping any copy at her end. Therefore, she needs a guarantee that her data are not modified by the server, which may be malicious. Data auditing is performed on the outsourced data to resolve this issue. Moreover, the client may want all her data to be stored untampered. In this chapter, we describe proofs of retrievability (POR) that convince the client about the integrity of all her data. (Comment: A version has been published as a book chapter in Guide to Security Assurance for Cloud Computing, Springer International Publishing Switzerland, 2015.)
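
    The following is a minimal sketch of one simple audit idea in the POR spirit (MAC each block, then spot-check random blocks). It is only an illustration under my own naming; the schemes described in the chapter add erasure coding, homomorphic tags, and compact proofs on top of this.

```python
# Toy POR-style audit: the client MACs each block before outsourcing, then challenges
# a few randomly chosen blocks and verifies the returned tags.
import hmac, hashlib, os, random

KEY = os.urandom(32)                       # client's secret key, kept locally

def tag(index: int, block: bytes) -> bytes:
    # Bind the tag to the block position so the server cannot swap blocks around.
    return hmac.new(KEY, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

# Client side: split the file into blocks, attach tags, upload (index, block, tag).
blocks = [os.urandom(64) for _ in range(100)]
outsourced = [(i, b, tag(i, b)) for i, b in enumerate(blocks)]

def audit(storage, challenge_size: int = 10) -> bool:
    """Challenge a random subset of blocks and verify the returned tags."""
    for i in random.sample(range(len(storage)), challenge_size):
        idx, block, t = storage[i]
        if not hmac.compare_digest(t, tag(idx, block)):
            return False
    return True

print("audit on honest server:", audit(outsourced))

# Any challenged block that was modified is caught; a block that is never challenged
# may escape a single audit, which is why full POR schemes erasure-code the file first.
tampered = list(outsourced)
idx, block, t = tampered[0]
tampered[0] = (idx, b"corrupted" + block[9:], t)
print("audit on tampered copy:", audit(tampered))
```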

    Bloom Filters in Adversarial Environments

    Many efficient data structures use randomness, allowing them to improve upon deterministic ones. Usually, their efficiency and correctness are analyzed using probabilistic tools under the assumption that the inputs and queries are independent of the internal randomness of the data structure. In this work, we consider data structures in a more robust model, which we call the adversarial model. Roughly speaking, this model allows an adversary to choose inputs and queries adaptively according to previous responses. Specifically, we consider a data structure known as a "Bloom filter" and prove a tight connection between Bloom filters in this model and cryptography. A Bloom filter represents a set $S$ of elements approximately, using fewer bits than a precise representation. The price for succinctness is allowing some errors: for any $x \in S$ it should always answer `Yes', and for any $x \notin S$ it should answer `Yes' only with small probability. In the adversarial model, we consider both efficient adversaries (that run in polynomial time) and computationally unbounded adversaries that are only bounded in the number of queries they can make. For computationally bounded adversaries, we show that non-trivial (memory-wise) Bloom filters exist if and only if one-way functions exist. For unbounded adversaries, we show that there exists a Bloom filter for sets of size $n$ and error $\varepsilon$ that is secure against $t$ queries and uses only $O(n \log\frac{1}{\varepsilon} + t)$ bits of memory. In comparison, $n \log\frac{1}{\varepsilon}$ is the best possible under a non-adaptive adversary.
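
    Below is a minimal sketch of a plain (non-adversarial) Bloom filter showing the one-sided error described above; the parameters are illustrative and the construction is the textbook one, not the cryptographically robust variant studied in the paper.

```python
# Minimal Bloom filter: queries always answer 'Yes' for inserted elements, and answer
# 'Yes' for non-members only with small probability. The paper asks what changes once
# queries may depend adaptively on earlier answers.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 5):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)       # one byte per bit, for simplicity

    def _positions(self, item: str):
        # Derive k pseudo-independent positions from SHA-256 of the salted item.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
for i in range(100):
    bf.add(f"member-{i}")

assert all(f"member-{i}" in bf for i in range(100))      # never a false negative
trials = 10_000
false_positives = sum(f"non-member-{i}" in bf for i in range(trials))
print(f"observed false-positive rate ~ {false_positives / trials:.4f}")
```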

    On One-way Functions and Kolmogorov Complexity

    We prove the equivalence of two fundamental problems in the theory of computing. For every polynomial $t(n) \geq (1+\varepsilon)n$, $\varepsilon > 0$, the following are equivalent:
    - One-way functions exist (which in turn is equivalent to the existence of secure private-key encryption schemes, digital signatures, pseudorandom generators, pseudorandom functions, commitment schemes, and more);
    - $t$-time-bounded Kolmogorov complexity, $K^t$, is mildly hard-on-average (i.e., there exists a polynomial $p(n) > 0$ such that no PPT algorithm can compute $K^t$ for more than a $1 - \frac{1}{p(n)}$ fraction of $n$-bit strings).
    In doing so, we present the first natural, and well-studied, computational problem characterizing the feasibility of the central private-key primitives and protocols in cryptography.
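
    For reference, the standard definition of time-bounded Kolmogorov complexity that the equivalence is stated in terms of, fixing a universal Turing machine $U$; the notation follows the usual convention rather than being quoted from the abstract.

```latex
% K^t(x): the length of the shortest program that prints x within t(|x|) steps on U.
% "Mildly hard-on-average" then means no PPT algorithm computes K^t on more than a
% 1 - 1/p(n) fraction of n-bit strings, for some polynomial p.
\[
  K^{t}(x) \;=\; \min\bigl\{\, |\Pi| \;:\; U(\Pi) \text{ outputs } x \text{ within } t(|x|) \text{ steps} \,\bigr\}.
\]
```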