
    On the Cryptographic Hardness of Local Search

    We show new hardness results for the class of Polynomial Local Search problems (PLS):
    - Hardness of PLS based on a falsifiable assumption on bilinear groups introduced by Kalai, Paneth, and Yang (STOC 2019), together with the Exponential Time Hypothesis for randomized algorithms. Previous standard-model constructions relied on non-falsifiable and non-standard assumptions.
    - Hardness of PLS relative to random oracles. The construction is essentially different from previous constructions and, in particular, is unconditionally secure. It also demonstrates the hardness of parallelizing local search.
    The core observation behind the results is that the unique-proofs property of incrementally verifiable computation, previously used to demonstrate hardness in PLS, can be traded for a simple incremental-completeness property.
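    To make the object concrete, here is a minimal sketch of a PLS instance, assuming the standard formulation with a polynomial-time successor function S and a value function V on n-bit candidate solutions; the toy S and V are hypothetical placeholders for illustration, not the paper's construction.

```python
# A minimal sketch of a PLS instance: a successor function S and a value
# function V on N_BITS-bit candidate solutions.  The search problem is to find
# a local optimum, i.e. an x with V(S(x)) <= V(x).  Toy placeholders only.

N_BITS = 16

def V(x: int) -> int:
    """Toy value function (peaked at a fixed point)."""
    return -abs(x - 12345)

def S(x: int) -> int:
    """Toy successor: flip the first bit whose flip improves the value."""
    for i in range(N_BITS):
        y = x ^ (1 << i)
        if V(y) > V(x):
            return y
    return x  # no improving neighbor: x is a local optimum

def local_search(x: int) -> int:
    """Follow successors until a local optimum is reached.  The hardness
    results say that for suitably chosen (S, V), no efficient algorithm
    (not just this naive walk) can find a local optimum."""
    while V(S(x)) > V(x):
        x = S(x)
    return x

print(local_search(0))  # some local optimum of the toy instance
```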

    On Oblivious Amplification of Coin-Tossing Protocols

    We consider the problem of amplifying two-party coin-tossing protocols: given a protocol where it is possible to bias the common output by at most ε, we aim to obtain a new protocol where the output can be biased by at most ε* < ε. We rule out the existence of a natural type of amplifiers, called oblivious amplifiers, for every ε* < ε. Such amplifiers ignore the way the underlying ε-bias protocol works and can only invoke an oracle that provides ε-bias bits. We provide two proofs of this impossibility. The first is by a reduction to the impossibility of deterministic randomness extraction from Santha-Vazirani sources. The second is a direct proof that is more general and also rules out certain types of asymmetric amplification. In addition, it gives yet another proof of the Santha-Vazirani impossibility.
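    The first proof route can be illustrated with a tiny brute-force computation, assuming the standard model of a δ-Santha-Vazirani source (each successive bit may be biased by δ conditioned on the previous ones); the extractor f and the parameters below are illustrative only.

```python
# Demo of the Santha-Vazirani impossibility used in the first proof: for any
# deterministic one-bit "extractor" f, an adversary that may bias each
# successive bit by delta (a delta-SV source) can noticeably bias f's output.
# Brute force over a tiny n; the choice of f and parameters is illustrative.
from itertools import product

def adversarial_output_prob(f, n, delta):
    """Max over delta-SV sources of Pr[f(X) = 1], by recursion over prefixes."""
    def rec(prefix):
        if len(prefix) == n:
            return float(f(prefix))
        p0, p1 = rec(prefix + (0,)), rec(prefix + (1,))
        # The adversary puts probability 1/2 + delta on the better next bit.
        return (0.5 + delta) * max(p0, p1) + (0.5 - delta) * min(p0, p1)
    return rec(())

n, delta = 8, 0.1
f = lambda bits: sum(bits) % 2                        # parity as the candidate extractor
uniform = sum(f(x) for x in product((0, 1), repeat=n)) / 2 ** n
print("Pr[f=1] on uniform bits      :", uniform)      # exactly 1/2 for parity
print("Pr[f=1] on worst delta-SV src:", adversarial_output_prob(f, n, delta))
# For parity the adversary reaches exactly 1/2 + delta; the same greedy biasing
# gives a noticeable advantage against any f, which is the impossibility the
# reduction relies on.
```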

    Bootstrapping Homomorphic Encryption via Functional Encryption

    Homomorphic encryption is a central object in modern cryptography, with far-reaching applications. Constructions supporting homomorphic evaluation of arbitrary Boolean circuits have been known for over a decade, based on standard lattice assumptions. However, these constructions are leveled, meaning that they only support circuits up to some a priori bounded depth. These leveled constructions can be bootstrapped into fully homomorphic ones, but this requires additional circular-security assumptions, which are construction-dependent and for which reductions to standard lattice assumptions are no longer known. Alternative constructions are known based on indistinguishability obfuscation, which has recently been constructed under standard assumptions; however, this alternative requires subexponential hardness of the underlying primitives. We prove a new bootstrapping theorem based on functional encryption, which is known under standard polynomial hardness assumptions. As a result, we obtain the first fully homomorphic encryption scheme that avoids both circular-security assumptions and super-polynomial hardness assumptions. The construction is secure against uniform adversaries, and can be made non-uniformly secure assuming a generalization of the time-hierarchy theorem, which follows, for example, from non-uniform ETH. At the heart of the construction is a new proof technique based on cryptographic puzzles and decomposable obfuscation. Unlike most cryptographic reductions, our security reduction does not treat the adversary entirely as a black box, but rather makes explicit use of its running time (or circuit size).
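    For context, the classical bootstrapping route that this theorem replaces can be sketched at the interface level; the names below are illustrative, and the point is only where the circular-security assumption enters (publishing an encryption of the scheme's own secret key).

```python
# Interface-level sketch of classical (Gentry-style) bootstrapping, i.e. the
# route whose circular-security assumption the FE-based theorem avoids.
# All names and types are illustrative; this is not the paper's construction.
from typing import Any, Callable, Protocol

class LeveledHE(Protocol):
    def keygen(self, depth: int) -> tuple[Any, Any]: ...              # -> (pk, sk)
    def enc(self, pk: Any, bit: int) -> Any: ...
    def dec(self, sk: Any, ct: Any) -> int: ...
    def eval(self, pk: Any, circuit: Callable, *cts: Any) -> Any: ... # bounded depth

def bootstrap(scheme: LeveledHE, pk: Any, enc_sk: Any,
              dec_circuit: Callable, noisy_ct: Any) -> Any:
    """Refresh a ciphertext by homomorphically running decryption on the
    encrypted secret key enc_sk = Enc(pk, sk).  Publishing enc_sk is exactly
    the circular-security step; as long as dec_circuit fits within the leveled
    scheme's depth bound, the refreshed ciphertext is low-noise again."""
    return scheme.eval(pk, lambda sk_bits: dec_circuit(sk_bits, noisy_ct), enc_sk)
```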

    A Note on Perfect Correctness by Derandomization

    In this note, we show how to transform a large class of erroneous cryptographic schemes into perfectly correct ones. The transformation works for schemes that are correct on every input with probability noticeably larger than half, and are secure under parallel repetition. We assume the existence of one-way functions and of functions with deterministic (uniform) time complexity 2^{O(n)} and non-deterministic circuit complexity 2^{Ω(n)}. The transformation complements previous results showing that public-key encryption and indistinguishability obfuscation that err on a noticeable fraction of inputs can be turned into ones that are often correct for all inputs. The technique relies on the idea of "reverse randomization" [Naor, Crypto 1989] and on Nisan-Wigderson style derandomization, which was previously used in cryptography to obtain non-interactive witness-indistinguishable proofs and commitment schemes [Barak, Ong and Vadhan, Crypto 2003].
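    The repetition-and-majority step behind the transformation can be sketched as follows, with a hypothetical decryption routine that errs with probability 0.4; in the actual result the fresh randomness used here is replaced by Nisan-Wigderson style pseudorandomness, so that a single fixed seed works for all inputs.

```python
# Sketch of correctness amplification by repetition and majority vote, assuming
# a scheme that is correct on every input with probability > 1/2 + eps over its
# randomness.  The noisy "decryption" below is a hypothetical stand-in.
import random
from collections import Counter

def amplify(decrypt_once, k: int = 301):
    """Wrap a randomized, sometimes-wrong decryption in a k-fold majority vote;
    the per-input error then drops to 2^{-Omega(k)}."""
    def decrypt_majority(ct):
        votes = Counter(decrypt_once(ct, random.getrandbits(64)) for _ in range(k))
        return votes.most_common(1)[0][0]
    return decrypt_majority

def noisy_dec(ct, seed):
    """Hypothetical erroneous decryption: correct only with probability 0.6."""
    return ct if random.Random(seed).random() < 0.6 else ct ^ 1

dec = amplify(noisy_dec)
trials = 200
print(sum(dec(0) == 0 for _ in range(trials)) / trials)  # close to 1.0
```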

    Classical Binding for Quantum Commitments

    In classical commitments, statistical binding means that for almost any commitment transcript there is at most one possible opening. While quantum commitments (for classical messages) sometimes have benefits over their classical counterparts (e.g., in terms of assumptions), they provide a weaker notion of binding: essentially, that the sender cannot open a given commitment to a random value with probability noticeably greater than 1/2. We introduce a notion of classical binding for quantum commitments which provides guarantees analogous to the classical case. In our notion, the receiver performs a (partial) measurement of the quantum commitment string, and the outcome of this measurement determines a single value that the sender may open. We expect that our notion can replace classical commitments in various settings, leaving the security proof essentially unchanged. As an example, we show a soundness proof for the GMW zero-knowledge proof system. We construct a non-interactive quantum commitment scheme which is classically statistically binding and has a classical opening, based on the existence of any post-quantum one-way function. Prior candidates had inherently quantum openings and were not classically binding. In contrast, we show that it is impossible to achieve classical binding for statistically hiding commitments, regardless of assumption or round complexity. Our scheme is simply Naor's commitment scheme (which classically requires a common random string, CRS), but executed in superposition over all possible values of the CRS, and repeated several times. We hope that this technique for using quantum communication to remove a CRS may find other uses.
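    The underlying classical scheme is simple enough to sketch. Below is a minimal version of Naor's commitment, with a SHA-256-based stand-in for the length-tripling PRG; the stand-in and the toy parameters are assumptions made for illustration and are not part of the paper.

```python
# Minimal sketch of Naor's bit commitment (the classical scheme that the paper
# runs in superposition over the CRS).  The hash-based PRG is only a stand-in;
# Naor's scheme works with any length-tripling pseudorandom generator.
import hashlib, secrets

N = 16  # seed length in bytes (toy parameter); n = 8 * N bits

def prg(seed: bytes) -> bytes:
    """Stretch an N-byte seed to 3N bytes (SHA-256 in counter mode as a stand-in)."""
    return b"".join(hashlib.sha256(seed + bytes([i])).digest() for i in range(3))[: 3 * N]

def commit(bit: int, crs: bytes) -> tuple[bytes, bytes]:
    seed = secrets.token_bytes(N)
    pad = prg(seed)
    com = pad if bit == 0 else bytes(a ^ b for a, b in zip(pad, crs))
    return com, seed                     # send com now; reveal (bit, seed) to open

def verify(com: bytes, crs: bytes, bit: int, seed: bytes) -> bool:
    pad = prg(seed)
    return com == (pad if bit == 0 else bytes(a ^ b for a, b in zip(pad, crs)))

crs = secrets.token_bytes(3 * N)         # the common random string
com, seed = commit(1, crs)
assert verify(com, crs, 1, seed)
# Binding: opening to both 0 and 1 would require prg(s0) XOR prg(s1) = crs for
# some pair of seeds; with at most 2^{2n} pairs and 2^{3n} possible crs values,
# a random crs admits such a pair only with probability about 2^{-n}.
```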

    Robust Additive Randomized Encodings from IO and Pseudo-Non-linear Codes

    Additive randomized encodings (ARE), introduced by Halevi, Ishai, Kushilevitz, and Rabin (CRYPTO 2023), reduce the computation of a k-party function f(x_1, ..., x_k) to locally computing encodings x̂_i of each input x_i and then adding them together over some Abelian group into an output encoding ŷ = ∑ x̂_i, which reveals nothing but the result. In robust ARE (RARE), the sum of any subset of the x̂_i reveals only the residual function obtained by restricting the corresponding inputs. The appeal of (R)ARE comes from the simplicity of the online part of the computation, which involves only addition; this yields, for instance, non-interactive multi-party computation in the shuffle model, where messages from different parties are anonymously shuffled. Halevi, Ishai, Kushilevitz, and Rabin constructed ARE from standard assumptions and RARE in the ideal obfuscation model, leaving open the question of whether RARE can be constructed in the plain model. We construct RARE in the plain model from indistinguishability obfuscation, which is necessary, and a new primitive that we call pseudo-non-linear codes. We provide two constructions of this primitive, assuming either Learning with Errors or Decisional Diffie-Hellman. A bonus feature of our construction is that it is online succinct. Specifically, encodings x̂_i can be decomposed into offline parts ẑ_i that can be sent directly to the evaluator and short online parts ĝ_i that are added together.
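    As a toy illustration of the ARE syntax only (without robustness, and not the paper's construction): the k-party OR function has a simple ARE over Z_p, where a 0-input is encoded as 0 and a 1-input as a fresh uniform group element, so the sum reveals whether some input is 1 and essentially nothing else.

```python
# Toy additive randomized encoding of k-party OR over the group Z_P: parties
# holding 0 encode 0, parties holding 1 encode a fresh uniform element.  The
# sum is 0 if all inputs are 0 and uniform in Z_P otherwise, so (up to a ~1/P
# decoding error) it reveals only OR(x_1, ..., x_k).  Illustrative only.
import secrets

P = (1 << 127) - 1                       # a Mersenne prime modulus (toy choice)

def encode(x_i: int) -> int:
    """Local encoding of one party's bit x_i."""
    return 0 if x_i == 0 else secrets.randbelow(P)

def decode(y_hat: int) -> int:
    """Decode the summed encoding (sum of all local encodings mod P)."""
    return 0 if y_hat == 0 else 1

inputs = [0, 1, 0, 1]
y_hat = sum(encode(x) for x in inputs) % P
assert decode(y_hat) == max(inputs)      # equals OR of the inputs (w.h.p.)
```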

    On Strong Simulation and Composable Point Obfuscation

    The Virtual Black Box (VBB) property for program obfuscators provides a strong guarantee: anything computable by an efficient adversary given the obfuscated program can also be computed by an efficient simulator with only oracle access to the program. However, we know how to achieve this notion only for very restricted classes of programs. This work studies a simple relaxation of VBB: allow the simulator unbounded computation time, while still allowing only polynomially many queries to the oracle. We then demonstrate the viability of this relaxed notion, which we call Virtual Grey Box (VGB), in the context of fully composable obfuscators for point programs: it is known that, w.r.t. VBB, if such obfuscators exist then there exist multi-bit point obfuscators (aka "digital lockers") and subsequently also very strong variants of encryption that are resilient to various attacks, such as key leakage and key-dependent messages. However, no composable VBB-obfuscators for point programs have been shown. We show fully composable VGB-obfuscators for point programs under a strong variant of the Decisional Diffie-Hellman assumption. We show they suffice for the above applications and even for extensions to the public-key setting, as well as for encryption schemes with resistance to certain related-key attacks (RKA).
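    For concreteness, the kind of point obfuscator being composed can be sketched as follows: a Canetti-style DDH-based construction that hides a point x by publishing (r, r^x) for a random group element r. The toy parameters below are illustrative; a real instantiation needs a cryptographically sized group and the strong assumption named above.

```python
# Sketch of a DDH-based point obfuscator: publish (r, r^x) for random r in a
# prime-order group; evaluating on y just checks whether r^y equals r^x.
# Toy group parameters, for illustration only.
import secrets

Q = 1019                 # toy prime group order
P = 2 * Q + 1            # safe prime: Z_P^* has a subgroup of order Q (the squares)

def random_subgroup_element() -> int:
    """Uniform non-identity element of the order-Q subgroup of Z_P^*."""
    while True:
        h = pow(secrets.randbelow(P - 2) + 2, 2, P)   # squaring lands in the subgroup
        if h != 1:
            return h

def obfuscate_point(x: int) -> tuple[int, int]:
    r = random_subgroup_element()
    return r, pow(r, x, P)

def evaluate(obf: tuple[int, int], y: int) -> bool:
    r, c = obf
    return pow(r, y, P) == c

obf = obfuscate_point(42)
print(evaluate(obf, 42), evaluate(obf, 7))            # True False
```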

    Indistinguishability Obfuscation from Functional Encryption

    Indistinguishability obfuscation (IO) is a tremendous notion, powerful enough to give rise to almost any known cryptographic object. Prior candidate IO constructions were based on specific assumptions on algebraic objects called multi-linear graded encodings. We present a generic construction of indistinguishability obfuscation from public-key functional encryption with succinct encryption circuits and subexponential security. This shows the equivalence of indistinguishability obfuscation and public-key functional encryption, a primitive that had previously seemed to be much weaker, lacking the power and the staggering range of applications of indistinguishability obfuscation. Our main construction can be based on functional encryption schemes that support a single functional key and where the encryption circuit grows sub-linearly in the circuit size of the function. We further show that sublinear succinctness in circuit size for single-key schemes can be traded for sublinear succinctness in the number of keys (also known as the collusion size) for multi-key schemes. We also show that, under the Learning with Errors assumption, our techniques imply that any indistinguishability obfuscator can be converted into one where the size of obfuscated circuits is twice that of the original circuit plus an additive overhead that is polynomial in its depth, input length, and the security parameter.
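    At the interface level, the assumed starting primitive can be sketched as below; the succinctness condition in the comment is the one stated above, and the names and types are illustrative only.

```python
# Interface-level sketch of the assumed primitive: public-key functional
# encryption supporting a single functional key, with an encryption circuit
# whose size grows sublinearly in the circuit size of the supported function.
from typing import Any, Callable, Protocol

class SingleKeyFE(Protocol):
    def setup(self, security_parameter: int) -> tuple[Any, Any]:   # -> (mpk, msk)
        ...
    def keygen(self, msk: Any, f: Callable) -> Any:                # a single key sk_f
        ...
    def enc(self, mpk: Any, x: Any) -> Any:
        # Succinctness: the encryption circuit has size sublinear in |C_f|,
        # e.g. poly(security_parameter, |x|) * |C_f|^(1 - eps).
        ...
    def dec(self, sk_f: Any, ct: Any) -> Any:                      # outputs f(x)
        ...
```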

    Statistically Sender-Private OT from LPN and Derandomization

    We construct a two-message oblivious transfer protocol with statistical sender privacy (SSP OT) based on the Learning Parity with Noise (LPN) assumption and a standard Nisan-Wigderson style derandomization assumption. Beyond being of interest in their own right, SSP OT protocols have proven to be a powerful tool for minimizing round complexity in a wide array of cryptographic applications, from proof systems, through secure computation protocols, to hard problems in statistical zero knowledge (SZK). The protocol is plausibly post-quantum secure; the only other constructions with plausible post-quantum security are based on the Learning with Errors (LWE) assumption. Lacking the geometric structure of LWE, our construction and analysis rely on a different set of techniques. Technically, we first construct an SSP OT protocol in the common random string model from LPN alone, and then derandomize the common random string. Most of the technical difficulty lies in the first step. Here we prove a robustness property of the inner-product randomness extractor against a certain type of linear splitting attacks. A caveat of our construction is that it relies on the so-called low-noise regime of LPN. This aligns with our current complexity-theoretic understanding of LPN, which is known to imply hardness in SZK only in the low-noise regime.
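    The extractor at the heart of the analysis is easy to state. Below is a minimal sketch of the inner-product extractor over GF(2); the parameters and the uniform toy source are illustrative, and the paper's actual contribution, robustness against linear splitting attacks, is not captured here.

```python
# Sketch of the inner-product randomness extractor Ext(x, a) = <x, a> mod 2.
# When x has enough min-entropy and the seed a is uniform, the output bit is
# statistically close to uniform (inner product is a universal hash family).
import secrets

def inner_product_extract(x: list[int], a: list[int]) -> int:
    """<x, a> over GF(2)."""
    return sum(xi & ai for xi, ai in zip(x, a)) & 1

n = 128
x = [secrets.randbelow(2) for _ in range(n)]   # the weak source (here just uniform bits)
a = [secrets.randbelow(2) for _ in range(n)]   # the public uniform seed
print(inner_product_extract(x, a))
```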

    Amplification of Non-Interactive Zero Knowledge, Revisited

    In an (α,β)-weak non-interactive zero knowledge (NIZK), the soundness error is at most α and the zero-knowledge error is at most β. Goyal, Jain, and Sahai (CRYPTO 2019) show that if α+β < 1 for some constants α, β, then (α,β)-weak NIZK can be turned into fully secure NIZK, assuming sub-exponentially secure public-key encryption. We revisit the problem of NIZK amplification:
    – We amplify NIZK arguments assuming only polynomially secure public-key encryption, for any constants α+β < 1.
    – We amplify NIZK proofs assuming only one-way functions, for any constants α+β < 1.
    – When the soundness error α is negligible to begin with, we can also amplify NIZK arguments assuming only one-way functions.
    Our results are based on the hidden-bits paradigm and can be viewed as a reduction from NIZK amplification to the better-understood problem of pseudorandomness amplification.