
    ZK-PCPs from Leakage-Resilient Secret Sharing

    Zero-Knowledge PCPs (ZK-PCPs; Kilian, Petrank, and Tardos, STOC '97) are PCPs with the additional zero-knowledge guarantee that the view of any (possibly malicious) verifier making a bounded number of queries to the proof can be efficiently simulated up to a small statistical distance. Similarly, ZK-PCPs of Proximity (ZK-PCPPs; Ishai and Weiss, TCC '14) are PCPPs in which the view of an adversarial verifier can be efficiently simulated with few queries to the input. Previous ZK-PCP constructions obtained an exponential gap between the query complexity q of the honest verifier and the bound q^* on the queries of a malicious verifier (i.e., q = polylog(q^*)), but required either exponential-time simulation or adaptive honest verification. This should be contrasted with standard PCPs, which can be verified non-adaptively (i.e., with a single round of queries to the proof). The problem of constructing such ZK-PCPs, even when q^* = q, has remained open since they were first introduced more than two decades ago. This question is also open for ZK-PCPPs, for which no construction with non-adaptive honest verification is known (not even with exponential-time simulation). We resolve this question by constructing the first ZK-PCPs and ZK-PCPPs which simultaneously achieve efficient zero-knowledge simulation and non-adaptive honest verification. Our schemes have a square-root query gap, namely $q^*/q = O(\sqrt{n})$, where n is the input length. Our constructions combine the "MPC-in-the-head" technique (Ishai et al., STOC '07) with leakage-resilient secret sharing. Specifically, we use the MPC-in-the-head technique to construct a ZK-PCP variant over a large alphabet, then employ leakage-resilient secret sharing to design a new alphabet reduction for ZK-PCPs which preserves zero-knowledge.
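    To see why ordinary secret sharing does not suffice here, the following minimal sketch (not from the paper; all names are illustrative) shows that 2-out-of-2 XOR sharing leaks under very local leakage: one leaked bit per share already reveals a bit of the secret. Leakage-resilient secret sharing is designed to rule out exactly this kind of attack.

```python
# Why plain secret sharing is not enough: with 2-out-of-2 XOR sharing,
# leaking just bit j of *each* share reveals bit j of the secret.
# Leakage-resilient secret sharing is built so that such local leakage on
# every share still hides the secret; this sketch only shows the weakness
# that motivates it.
import secrets

def xor_share(secret: int, bits: int = 32) -> tuple[int, int]:
    r = secrets.randbits(bits)
    return r, secret ^ r

def leak_bit(value: int, j: int) -> int:
    return (value >> j) & 1

secret = 0b1011_0110
s1, s2 = xor_share(secret)
j = 5
# One leaked bit per share suffices to learn bit j of the secret:
assert leak_bit(s1, j) ^ leak_bit(s2, j) == leak_bit(secret, j)
```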

    Is there an Oblivious RAM Lower Bound for Online Reads?

    Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an $O(\log n)$ overhead per access, where $n$ is the data size. The work of Goldreich and Ostrovsky gave a lower bound showing that this is optimal for ORAM schemes that operate in a "balls and bins" model, where memory blocks can only be shuffled between different locations but not manipulated otherwise. The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond "balls and bins"? The work of Boyle and Naor (ITCS '16) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with $o(\log n)$ overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO '18) shows that there indeed is an $\Omega(\log n)$ lower bound for general online ORAM. This still leaves the question open for online read-only ORAM, or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM where reads (but not writes) have an $o(\log n)$ overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
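    To make "overhead per access" concrete, here is a toy sketch (illustrative only, not any scheme from the paper) of the most naive ORAM: a linear scan that touches every block on each read, so the physical access pattern is independent of the queried index at an $O(n)$ cost, versus the $O(\log n)$ or conjectured $o(\log n)$ overheads discussed above.

```python
# Toy "linear scan" ORAM: every logical read touches all n physical blocks,
# so the physical access pattern reveals nothing about the queried index.
# Overhead per access is O(n); the schemes discussed above target O(log n),
# or o(log n) for reads under the stated assumptions.

class LinearScanORAM:
    def __init__(self, data: list[bytes]):
        self.blocks = list(data)

    def read(self, index: int) -> bytes:
        result = None
        for i in range(len(self.blocks)):
            block = self.blocks[i]      # touch every block
            if i == index:              # the branch is local to the client,
                result = block          #   not visible to a memory adversary
        return result

oram = LinearScanORAM([b"a", b"b", b"c", b"d"])
assert oram.read(2) == b"c"
```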

    Probabilistically Checkable Proofs of Proximity with Zero-Knowledge

    A Probabilistically Checkable Proof (PCP) allows a randomized verifier, with oracle access to a purported proof, to probabilistically verify an input statement of the form $x \in L$ by querying only a few bits of the proof. A PCP of proximity (PCPP) has the additional feature of allowing the verifier to query only a few bits of the input $x$, where if the input is accepted then the verifier is guaranteed that (with high probability) the input is close to some $x' \in L$. Motivated by their usefulness for sublinear-communication cryptography, we initiate the study of a natural zero-knowledge variant of PCPP (ZKPCPP), where the view of any verifier making a bounded number of queries can be efficiently simulated by making the same number of queries to the input oracle alone. This new notion provides a useful extension of the standard notion of zero-knowledge PCPs. We obtain two types of results. 1. Constructions. We obtain the first constructions of query-efficient ZKPCPPs via a general transformation which combines standard query-efficient PCPPs with protocols for secure multiparty computation. As a byproduct, our construction provides a conceptually simpler alternative to a previous construction of honest-verifier zero-knowledge PCPs due to Dwork et al. (Crypto '92). 2. Applications. We motivate the notion of ZKPCPPs by applying it towards sublinear-communication implementations of commit-and-prove functionalities. Concretely, we present the first sublinear-communication commit-and-prove protocols which make black-box use of a collision-resistant hash function, and the first such multiparty protocols which offer information-theoretic security in the presence of an honest majority.
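    The "proximity" part of a PCPP can be illustrated in miniature with a property-testing example (a hypothetical toy, not a construction from the paper): for the trivial language L = {0^n}, reading a few random input bits already distinguishes members of L from inputs that are far from L.

```python
# Proximity testing in miniature: for the trivial language L = {0^n}, a
# verifier reading only q random input bits accepts all-zero inputs and
# rejects any input that is delta-far from L with probability at least
# 1 - (1 - delta)^q. This illustrates only the "query few bits of the
# input" aspect of a PCPP, not the proof oracle or zero knowledge.
import random

def proximity_verifier(input_oracle, n: int, q: int) -> bool:
    """Accept iff q uniformly sampled positions of the input are all 0."""
    return all(input_oracle(random.randrange(n)) == 0 for _ in range(q))

x = [0] * 90 + [1] * 10          # 0.1-far from the all-zeros string
reject_rate = sum(
    not proximity_verifier(lambda i: x[i], len(x), q=20) for _ in range(1000)
) / 1000
print(f"rejected {reject_rate:.0%} of runs")  # roughly 1 - 0.9**20, about 88%
```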

    How to Construct a Leakage-Resilient (Stateless) Trusted Party

    Trusted parties and devices are commonly used in the real world to securely perform computations on secret inputs. However, their security can often be compromised by side-channel attacks in which the adversary obtains partial leakage on intermediate computation values. This gives rise to the following natural question: To what extent can one protect the trusted party against leakage? Our goal is to design a hardware device $T$ that allows $m \ge 1$ parties to securely evaluate a function $f(x_1,\ldots,x_m)$ of their inputs by feeding $T$ with encoded inputs that are obtained using local secret randomness. Security should hold even in the presence of an active adversary that can corrupt a subset of parties and obtain restricted leakage on the internal computations in $T$. We design hardware devices $T$ in this setting both for zero-knowledge proofs and for general multi-party computations. Our constructions can unconditionally resist either $AC^0$ leakage or a strong form of "only computation leaks" (OCL) leakage that captures realistic side-channel attacks, providing different tradeoffs between efficiency and security.
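    The interface of "encoded inputs obtained using local secret randomness" can be pictured with the following toy sketch (purely illustrative; the class and function names are hypothetical and the encoding shown is a plain one-time pad, which does not by itself provide the $AC^0$/OCL leakage resilience of the actual constructions).

```python
# Toy interface: each party one-time-pads its input with local randomness
# before handing it to the device T, which recombines the pieces internally
# and evaluates f. The paper's encodings are chosen so that AC^0 or OCL
# leakage on T's internals reveals nothing; this sketch shows only the
# input-encoding interface, not leakage resilience itself.
import secrets

def encode(x: bytes) -> tuple[bytes, bytes]:
    r = secrets.token_bytes(len(x))                 # local secret randomness
    masked = bytes(a ^ b for a, b in zip(x, r))
    return r, masked                                # each piece alone is uniform

class TrustedDeviceT:
    """Stateless device evaluating f on encoded inputs."""
    def __init__(self, f):
        self.f = f

    def evaluate(self, encoded_inputs):
        decoded = [bytes(a ^ b for a, b in zip(r, m)) for r, m in encoded_inputs]
        return self.f(*decoded)

T = TrustedDeviceT(lambda x1, x2: x1 + x2)          # f = concatenation, for demo
print(T.evaluate([encode(b"alice-input"), encode(b"bob-input")]))
```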

    Protecting obfuscation against arithmetic attacks

    Obfuscation, the task of compiling circuits or programs to make the internal computation unintelligible while preserving input/output functionality, has become an object of central focus in the cryptographic community. A work of Garg et al. [FOCS 2013] gave the first candidate obfuscator for general polynomial-size circuits, and led to several other works constructing candidate obfuscators. Each of these constructions is built upon another cryptographic primitive called a multilinear map, or alternatively a graded encoding scheme. Several of these candidates have been shown to achieve the strongest notion of security (virtual black-box, or VBB) against purely algebraic attacks in a model that we call the fully-restricted graded encoding model. In this model, each operation performed by an adversary is required to obey the algebraic restrictions of the graded encoding scheme. These restrictions essentially impose strong forms of homogeneity and multilinearity on the allowed polynomials. While important, the scope of the security proofs is limited by the stringency of these restrictions. We propose and analyze another variant of the Garg et al. obfuscator in a setting that imposes fewer restrictions on the adversary, which we call the arithmetic setting. This setting captures a broader class of attacks than considered in previous works. We also explore connections between notions of obfuscation security and longstanding questions in arithmetic circuit complexity. Our results include the following. (1) In the arithmetic setting where the adversary is limited to creating multilinear, but not necessarily homogeneous, polynomials, we obtain an unconditional proof of VBB security. This requires a substantially different analysis than previous security proofs. (2) In the arithmetic setting where the adversary can create polynomials of arbitrary degree, we show that a proof of VBB security for any currently available candidate obfuscator would imply VP != VNP. To complement this, we show that a weaker notion of security (indistinguishability obfuscation) can be achieved unconditionally in this setting, and that VBB security can be achieved under a complexity-theoretic assumption related to the Exponential Time Hypothesis.
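    To make the multilinearity restriction on adversarial polynomials concrete, here is a small standalone check (not part of any obfuscator, just an illustration using sympy): a polynomial is multilinear if every variable appears with degree at most one in each monomial.

```python
# Illustration of the "multilinear" restriction: no variable may appear
# with degree greater than 1 in any monomial of the adversary's polynomial.
# This check is only meant to make the restriction concrete; it is not part
# of any candidate obfuscator or graded encoding scheme.
from sympy import symbols, Poly

def is_multilinear(expr, variables) -> bool:
    exponents = Poly(expr, *variables).monoms()
    return all(max(mono) <= 1 for mono in exponents)

x, y, z = symbols("x y z")
print(is_multilinear(x*y + y*z + 3, (x, y, z)))   # True
print(is_multilinear(x**2 * y + z, (x, y, z)))    # False: x has degree 2
```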

    Your Reputation's Safe with Me: Framing-Free Distributed Zero-Knowledge Proofs

    Distributed Zero-Knowledge (dZK) proofs, recently introduced by Boneh et al. (CRYPTO '19), allow a prover $P$ to prove NP statements on an input $x$ which is distributed between $k$ verifiers $V_1,\ldots,V_k$, where each $V_i$ holds only a piece of $x$. As in standard ZK proofs, dZK proofs guarantee completeness when all parties are honest; soundness against a malicious prover colluding with $t$ verifiers; and zero knowledge against a subset of $t$ malicious verifiers, in the sense that they learn nothing about the NP witness and the input pieces of the honest verifiers. Unfortunately, dZK proofs provide no correctness guarantee for an honest prover against a subset of maliciously corrupted verifiers. In particular, such verifiers might be able to "frame" the prover, causing honest verifiers to reject a true claim. This is a significant limitation, since such scenarios arise naturally in dZK applications, e.g., for proving honest behavior, and such attacks are indeed possible in existing dZKs. We put forth and study the notion of strong completeness for dZKs, guaranteeing that true claims are accepted even when $t$ verifiers are maliciously corrupted. We then design strongly-complete dZK proofs using the "MPC-in-the-head" paradigm of Ishai et al. (STOC '07), providing a novel analysis that exploits the unique properties of the distributed setting. To demonstrate the usefulness of strong completeness, we present several applications in which it is instrumental in obtaining security. First, we construct a certifiable version of Verifiable Secret Sharing (VSS), which is a VSS in which the dealer additionally proves that the shared secret satisfies a given NP relation. Our construction withstands a constant fraction of corruptions, whereas a previous construction of Ishai et al. (TCC '14) could only handle $k^{\varepsilon}$ corruptions for a small $\varepsilon < 1$. We also design a reusable version of certifiable VSS, which we introduce, in which the dealer can prove an unlimited number of predicates on the same shared secret. Finally, we extend a compiler of Boneh et al. (CRYPTO '19), who used dZKs to transform a class of "natural" semi-honest protocols in the honest-majority setting into maliciously secure ones with abort. Our compiler uses strongly-complete dZKs to obtain identifiable abort.
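    The distributed-input setting can be fixed in a few lines (a hypothetical toy that only pins down the model; it implements neither the proof system nor strong completeness): the statement input $x$ is split into $k$ pieces, one per verifier, and the prover must convince all of them about the whole of $x$.

```python
# Toy picture of the dZK input model: x is distributed among k verifiers,
# each holding only its own piece. This sketch only fixes the setting; the
# proof itself and the strong-completeness guarantee are not shown here.

def distribute_input(x: bytes, k: int) -> list[bytes]:
    """Split x into k contiguous pieces, one per verifier V_1..V_k."""
    step = -(-len(x) // k)            # ceiling division
    return [x[i * step:(i + 1) * step] for i in range(k)]

x = b"the full statement input"
pieces = distribute_input(x, k=4)
assert b"".join(pieces) == x          # together the verifiers hold all of x
print(pieces)                         # any single V_i sees only its piece
```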

    Beyond MPC-in-the-Head: Black-Box Constructions of Short Zero-Knowledge Proofs

    In their seminal work, Ishai, Kushilevitz, Ostrovsky, and Sahai (STOC '07) presented the MPC-in-the-Head paradigm, which shows how to design Zero-Knowledge Proofs (ZKPs) from secure Multi-Party Computation (MPC) protocols. This paradigm has since revolutionized and modularized the design of efficient ZKP systems, with far-reaching applications beyond ZKPs. However, to the best of our knowledge, all previous instantiations relied on fully-secure MPC protocols, and have not been able to leverage the fact that the paradigm only imposes relatively weak privacy and correctness requirements on the underlying MPC. In this work, we extend the MPC-in-the-Head paradigm to game-based cryptographic primitives supporting homomorphic computations (e.g., fully-homomorphic encryption, functional encryption, randomized encodings, homomorphic secret sharing, and more). Specifically, we present a simple yet generic compiler from these primitives to ZKPs which use the underlying primitive as a black box. We also generalize our paradigm to capture commit-and-prove protocols, and use it to devise tight black-box compilers from Interactive (Oracle) Proofs to ZKPs, assuming One-Way Functions (OWFs). We use our paradigm to obtain several new ZKP constructions: 1. The first ZKPs for NP relations $\mathcal{R}$ computable in (polynomial-time uniform) $NC^1$, whose round complexity is bounded by a fixed constant (independent of the depth of $\mathcal{R}$'s verification circuit), with communication approaching witness length (specifically, $n \cdot poly(\kappa)$, where $n$ is the witness length and $\kappa$ is a security parameter), assuming DCR. Alternatively, if we allow the round complexity to scale with the depth of the verification circuit, our ZKPs can make black-box use of OWFs. 2. Constant-round ZKPs for NP relations computable in bounded polynomial space, with $O(n) + o(m) \cdot poly(\kappa)$ communication assuming OWFs, where $m$ is the instance length. This gives a black-box alternative to a recent non-black-box construction of Nassar and Rothblum (CRYPTO '22). 3. ZKPs for NP relations computable by a logspace-uniform family of depth-$d(m)$ circuits, with $n \cdot poly(\kappa, d(m))$ communication assuming OWFs. This gives a black-box alternative to a result of Goldwasser, Kalai, and Rothblum (JACM).
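    One of the black-box ingredients in commit-and-prove style protocols is a commitment scheme used as an opaque primitive. The toy below (illustrative only; it is not the paper's compiler, and hiding/binding hold only heuristically for a hash modeled as a random oracle) shows the commit/open interface such constructions rely on.

```python
# Toy hash-based commitment: the kind of black-box symmetric primitive a
# commit-and-prove protocol can be built around. Hiding relies on the
# random nonce; binding relies on collision resistance of SHA-256.
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, opening nonce)."""
    nonce = secrets.token_bytes(32)
    com = hashlib.sha256(nonce + message).digest()
    return com, nonce

def verify_opening(com: bytes, nonce: bytes, message: bytes) -> bool:
    return hashlib.sha256(nonce + message).digest() == com

com, nonce = commit(b"witness share")
assert verify_opening(com, nonce, b"witness share")
assert not verify_opening(com, nonce, b"other value")
```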

    Making the Best of a Leaky Situation: Zero-Knowledge PCPs from Leakage-Resilient Circuits

    A Probabilistically Checkable Proof (PCP) allows a randomized verifier, with oracle access to a purported proof, to probabilistically verify an input statement of the form "$x \in L$" by querying only a few bits of the proof. A zero-knowledge PCP (ZKPCP) is a PCP with the additional guarantee that the view of any verifier querying a bounded number of proof bits can be efficiently simulated given the input $x$ alone, where the simulated and actual views are statistically close. Originating from the first ZKPCP construction of Kilian et al. (STOC '97), all previous constructions relied on locking schemes, an unconditionally secure oracle-based commitment primitive. The use of locking schemes makes the verifier inherently adaptive, namely, it needs to make at least two rounds of queries to the proof. Motivated by the goal of constructing non-adaptively verifiable ZKPCPs, we suggest a new technique for compiling standard PCPs into ZKPCPs. Our approach is based on leakage-resilient circuits, which are circuits that withstand certain "side-channel" attacks, in the sense that these attacks reveal nothing about the (properly encoded) input, other than the output. We observe that the verifier's oracle queries constitute a side-channel attack on the wire values of the circuit verifying membership in $L$, so a PCP constructed from a circuit resilient against such attacks would be ZK. However, a leakage-resilient circuit evaluates the desired function only if its input is properly encoded, i.e., has a specific structure, whereas by generating a "proof" from the wire values of the circuit on an ill-formed "encoded" input, one can cause the verification to accept inputs $x \notin L$ with probability 1. We overcome this obstacle by constructing leakage-resilient circuits with the additional guarantee that ill-formed encoded inputs are detected. Using this approach, we obtain the following results. (1) We construct the first witness-indistinguishable PCPs (WIPCPs) for NP with non-adaptive verification. WIPCPs relax ZKPCPs by only requiring that different witnesses be indistinguishable. Our construction combines strong leakage-resilient circuits as above with the PCP of Arora and Safra (FOCS '92), in which queries correspond to side-channel attacks by shallow circuits, and with correlation bounds for shallow circuits due to Lovett and Srinivasan (RANDOM '11). (2) Building on these WIPCPs, we construct non-adaptively verifiable computational ZKPCPs for NP in the common random string model, assuming that one-way functions exist. (3) As an application of the above results, we construct 3-round WI and ZK proofs for NP in a distributed setting in which the prover and the verifier interact with multiple servers, of which $t$ can be corrupted, and the total communication involving the verifier consists of polylog(t) bits.
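    The observation that "verifier queries are a side channel on wire values" can be pictured with a toy sketch (hypothetical, not the paper's construction): the proof string is simply the list of wire values of a circuit evaluation, and each query leaks one wire. The real ZKPCPs evaluate a leakage-resilient circuit on an encoded input; this plain circuit only fixes the idea.

```python
# Toy view of "queries as a side-channel on wire values": the proof string
# is the list of wire values of a circuit evaluation, and each verifier
# query leaks one wire. No leakage-resilient encoding is applied here.

def eval_circuit_wires(x1: int, x2: int, x3: int) -> list[int]:
    """Evaluate (x1 AND x2) XOR x3 and record every wire value."""
    w_and = x1 & x2
    w_out = w_and ^ x3
    return [x1, x2, x3, w_and, w_out]   # the "proof" = all wire values

proof = eval_circuit_wires(1, 0, 1)

def query(proof: list[int], i: int) -> int:
    """One verifier query = one leaked wire value."""
    return proof[i]

print(query(proof, 3), query(proof, 4))   # leaks the AND wire and the output
```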

    The Price of Active Security in Cryptographic Protocols

    We construct the first actively-secure Multi-Party Computation (MPC) protocols with an arbitrary number of parties in the dishonest-majority setting, for an arbitrary field F, with constant communication overhead over the "passive-GMW" protocol (Goldreich, Micali, and Wigderson, STOC '87). Our protocols rely on passive implementations of Oblivious Transfer (OT) in the boolean setting and Oblivious Linear function Evaluation (OLE) in the arithmetic setting. Previously, such protocols were only known over sufficiently large fields (Genkin et al., STOC '14) or for a constant number of parties (Ishai et al., CRYPTO '08). Conceptually, our protocols are obtained via a new compiler from a passively-secure protocol for a distributed multiplication functionality $F_{mult}$ to an actively-secure protocol for general functionalities. Roughly, $F_{mult}$ is parameterized by a linear secret-sharing scheme S, where it takes S-shares of two secrets and returns S-shares of their product. We show that our compilation is concretely efficient for sufficiently large fields, resulting in an overhead of 2 when securely computing natural circuits. Our compiler has two additional benefits: (1) it can rely on any passive implementation of $F_{mult}$, which, besides the standard implementation based on OT (for boolean) and OLE (for arithmetic), allows us to rely on implementations based on threshold cryptosystems (Cramer et al., Eurocrypt '01); and (2) it can rely on weaker-than-passive (i.e., imperfect/leaky) implementations, which in some parameter regimes yield actively-secure protocols with overhead less than 2. Instantiating this compiler with an "honest-majority" implementation of $F_{mult}$, we obtain the first honest-majority protocol with optimal corruption threshold for boolean circuits with constant communication overhead over the best passive protocol (Damgård and Nielsen, CRYPTO '07).
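    The functionality $F_{mult}$ the compiler is parameterized by can be sketched as an idealized box over additive sharing (a minimal sketch with illustrative names; it models the ideal functionality only, not any concrete passively-secure implementation of it).

```python
# Toy idealized F_mult over additive sharing modulo a prime P: given shares
# of two secrets, it hands back fresh shares of their product. This models
# the ideal functionality the compiler is parameterized by, not a protocol.
import secrets

P = 2**61 - 1  # an illustrative prime modulus

def share(x: int, n: int) -> list[int]:
    s = [secrets.randbelow(P) for _ in range(n - 1)]
    return s + [(x - sum(s)) % P]

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

def f_mult(shares_a: list[int], shares_b: list[int]) -> list[int]:
    """Idealized F_mult: reconstruct, multiply, and reshare."""
    product = (reconstruct(shares_a) * reconstruct(shares_b)) % P
    return share(product, len(shares_a))

a, b = 6, 7
shares_c = f_mult(share(a, 3), share(b, 3))
assert reconstruct(shares_c) == (a * b) % P
```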