
    Notes On GGH13 Without The Presence Of Ideals

    We investigate the merits of altering the Garg, Gentry and Halevi (GGH13) graded encoding scheme to remove the presence of the ideal $\langle g \rangle$. In particular, we show that we can alter the form of encodings so that effectively a new $g_i$ is used for each source group $\mathbb{G}_i$, while retaining correctness. This would appear to prevent all known attacks on indistinguishability obfuscation (IO) candidates instantiated using GGH13. However, when analysing security in simplified branching-program and obfuscation security models, we present branching-program (and thus IO) distinguishing attacks that do not use knowledge of $\langle g \rangle$. This result stands in counterpoint to the work of Halevi (EPRINT 2015), which stated that the core computational hardness problem underpinning GGH13 is computing a basis of this ideal. Our attacks suggest that there is a structural vulnerability in the way GGH13 encodings are constructed that lies deeper than the presence of $\langle g \rangle$.
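    To fix ideas, here is a schematic view of GGH13 encodings and the per-group variant the abstract describes. This is a sketch only: zero-testing, re-randomisation, and the precise level structure are elided, and the variant's exact formulation is an assumption, not the paper's notation.

```latex
% Schematic GGH13 encodings; many details (zero-testing, re-randomisation)
% are elided, and the per-group variant below is a hedged sketch rather
% than the paper's exact construction.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Work in $R = \mathbb{Z}[x]/(x^n + 1)$ and $R_q = R/qR$, with a small
secret $g \in R$. In standard GGH13 the plaintext space is
$R/\langle g \rangle$, and an encoding of $m$ relative to source group
$\mathbb{G}_i$ has the form
\[
  u_i \;=\; \Bigl[\tfrac{m + r\,g}{z_i}\Bigr]_q ,
\]
where $r$ is a small randomiser and $z_i$ is a secret denominator. The
variant discussed above would instead use an independent $g_i$ for each
source group:
\[
  u_i \;=\; \Bigl[\tfrac{m_i + r_i\,g_i}{z_i}\Bigr]_q .
\]
\end{document}
```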

    Blazing Fast OT for Three-Round UC OT Extension

    Oblivious Transfer (OT) is an important building block for multi-party computation (MPC). Since OT requires expensive public-key operations, efficiency-conscious MPC protocols use an OT extension (OTE) mechanism [Beaver 96, Ishai et al. 03] to provide the functionality of many independent OT instances with the same sender and receiver, using only symmetric-key operations plus a few instances of some base OT protocol. Consequently, there is significant interest in constructing OTE-friendly protocols, namely protocols that, when used as the base OT for OTE, result in extended OTs that are both round-efficient and cost-efficient. We present the most efficient OTE-friendly protocol to date. Specifically:
    - Our base protocol incurs only 3 exponentiations per instance.
    - Our base protocol results in a 3-round extended OT protocol.
    - The extended protocol is UC secure in the Observable Random Oracle Model (ROM) under the CDH assumption.
    For comparison, the state of the art for base OTs that result in 3-round OTE is proven secure only in the programmable ROM, and requires 4 exponentiations under Interactive DDH or 6 exponentiations under DDH [Masny-Rindal 19]. We also implement our protocol and benchmark it against the Simplest OT protocol [Chou and Orlandi, Latincrypt 2015], which is the most efficient and widely used OT protocol but is not known to suffice for OTE. The computation cost is roughly the same in both cases. Interestingly, our base OT is also 3 rounds. However, we slightly modify the extension mechanism (which normally adds a round) so as to preserve the number of rounds in our case.
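    Since the abstract benchmarks against the Simplest OT of Chou and Orlandi, a toy sketch of that baseline may help fix ideas. Everything concrete here (the toy modulus, SHA-256 as the random-oracle stand-in, the skipped group-membership checks) is an illustrative assumption, not this paper's protocol.

```python
# Toy sketch of the "Simplest OT" protocol [Chou-Orlandi, Latincrypt 2015],
# the baseline the abstract benchmarks against. Illustrative only: the
# group, parameters, and key derivation are NOT secure choices, and real
# deployments need membership checks and careful encoding that are elided.
import hashlib
import secrets

P = 2**127 - 1           # toy Mersenne-prime modulus (not a safe DH group!)
G = 3                    # toy generator

def H(x: int) -> bytes:
    """Derive a symmetric key from a group element (random-oracle stand-in)."""
    return hashlib.sha256(x.to_bytes(16, "big")).digest()

# --- Sender, round 1: publish A = g^a ---
a = secrets.randbelow(P - 1)
A = pow(G, a, P)

# --- Receiver, round 2: B depends on the secret choice bit c ---
c = 1
b = secrets.randbelow(P - 1)
B = pow(G, b, P) if c == 0 else (A * pow(G, b, P)) % P

# --- Key derivation: sender gets both keys, receiver only the chosen one ---
k0_sender = H(pow(B, a, P))
k1_sender = H(pow(B * pow(A, -1, P) % P, a, P))
k_receiver = H(pow(A, b, P))

assert k_receiver == (k1_sender if c else k0_sender)
print("receiver learned only key", c)
```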

    Black-Box Computational Zero-Knowledge Proofs, Revisited: The Simulation-Extraction Paradigm

    The concept of zero-knowledge proofs has been around for about 25 years. It has been redefined over and over to suit the special security requirements of protocols and systems. Common to all definitions is the requirement that there exist some efficient ``device'' simulating the view of the verifier (or the transcript of the protocol), such that the simulation is indistinguishable from reality. The definitions differ in many respects, including the type and power of the devices, the order of quantifiers, the type of indistinguishability, and so on. In this paper, we scrutinize the definition of ``black-box computational'' zero-knowledge, in which there exists one simulator \emph{for all} verifiers, the simulator has black-box access to the verifier, and the quality of the simulation is such that the real and simulated views cannot be distinguished by polynomial tests (\emph{computational} indistinguishability). Working in a theoretical model (the Random-Oracle Model), we show that the indistinguishability requirement is stated in a \emph{conceptually} inappropriate way: present definitions allow the knowledge of the \emph{verifier} and the \emph{distinguisher} to be independent, while the two entities are essentially coupled. Therefore, our main take on the problem is \emph{conceptual} and \emph{semantic}, rather than \emph{literal}. We formalize the concept by introducing a ``knowledge extractor'' into the model, which tries to extract the extra knowledge hard-coded into the distinguisher (if any), and then helps the simulator to construct the view of the verifier. The new paradigm is termed the \emph{Simulation-Extraction Paradigm}, as opposed to the previous \emph{Simulation Paradigm}. We also provide an important application of the new formalization: using the simulation-extraction paradigm, we construct one-round (i.e. two-move) zero-knowledge protocols for proving ``the computational ability to invert some trapdoor permutation'' in the Random-Oracle Model. It is shown that the protocol cannot be proven zero-knowledge in the classical Simulation Paradigm. The proof of the zero-knowledge property in the new paradigm is interesting in that it does not require knowing the internal structure of the trapdoor permutation, or a polynomial-time reduction from it to another (e.g. an $\mathcal{NP}$-complete) problem.
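    As a reference point for the definitional discussion, here is the standard textbook formulation of black-box computational zero-knowledge that the abstract scrutinizes; it is a rendering of the classical definition, not the paper's modified one.

```latex
% Standard black-box computational ZK (textbook form); the paper argues
% that quantifying the verifier and distinguisher independently, as
% below, is conceptually inappropriate.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
An interactive proof $(P,V)$ for a language $L$ is \emph{black-box
computational zero-knowledge} if there exists a PPT simulator $S$ such
that for every PPT verifier $V^*$, every PPT distinguisher $D$, and all
$x \in L$,
\[
  \bigl|\Pr[D(\mathrm{View}_{V^*}\langle P, V^*\rangle(x)) = 1]
      - \Pr[D(S^{V^*}(x)) = 1]\bigr| \le \mathrm{negl}(|x|).
\]
Note that $V^*$ and $D$ are quantified independently; the
simulation-extraction paradigm instead treats them as coupled.
\end{document}
```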

    PCD

    Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from the PDF version of the thesis; includes bibliographical references (p. 87-95). The security of systems can often be expressed as ensuring that some property is maintained at every step of a distributed computation conducted by untrusted parties. Special cases include integrity of programs running on untrusted platforms, various forms of confidentiality and side-channel resilience, and domain-specific invariants. We propose a new approach, proof-carrying data (PCD), which sidesteps the threat of faults and leakage by reasoning about properties of a computation's output data, regardless of the process that produced it. In PCD, the system designer prescribes the desired properties of a computation's outputs. Corresponding proofs are attached to every message flowing through the system, and are mutually verified by the system's components. Each such proof attests that the message's data and all of its history comply with the prescribed properties. We construct a general protocol compiler that generates, propagates, and verifies such proofs of compliance, while preserving the dynamics and efficiency of the original computation. Our main technical tool is the cryptographic construction of short non-interactive arguments (computationally sound proofs) for statements whose truth depends on "hearsay evidence": previous arguments about other statements. To this end, we attain a particularly strong proof-of-knowledge property. We realize the above, under standard cryptographic assumptions, in a model where the prover has black-box access to some simple functionality - essentially, a signature card. By Alessandro Chiesa, M.Eng.
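    The following toy run-through illustrates the PCD message flow in the "signature card" model the abstract mentions: a trusted signing functionality certifies a message only after checking the compliance predicate and the certificates on all incoming messages. The predicate, the HMAC-based "proofs", and the message format are illustrative assumptions; the real construction uses succinct non-interactive arguments and publicly verifiable proofs.

```python
# Toy proof-carrying data (PCD) flow in a signature-card model. The card
# certifies new data only after verifying (1) proofs on all inputs, the
# "hearsay evidence", and (2) the compliance predicate on the local step.
# Unlike the real scheme, verification here also goes through the card's
# MAC key rather than being publicly verifiable.
import hmac
import hashlib

CARD_KEY = b"secret-held-inside-the-card"   # never leaves the card

def compliant(output: int, inputs: list[int]) -> bool:
    """Example compliance predicate: each node adds 1 to the max input."""
    return output == (max(inputs) if inputs else 0) + 1

def _mac(data: int) -> bytes:
    return hmac.new(CARD_KEY, str(data).encode(), hashlib.sha256).digest()

def card_sign(output: int, in_msgs: list[tuple[int, bytes]]) -> bytes:
    """The trusted card: verifies history, then certifies the new data."""
    for data, proof in in_msgs:
        assert hmac.compare_digest(proof, _mac(data)), "bad input proof"
    assert compliant(output, [d for d, _ in in_msgs]), "non-compliant step"
    return _mac(output)

def verify(data: int, proof: bytes) -> bool:
    """A component checks a message without re-running its history."""
    return hmac.compare_digest(proof, _mac(data))

# A three-node distributed computation: each node sees only its inputs'
# (data, proof) pairs, never the full history that produced them.
m1 = (1, card_sign(1, []))
m2 = (2, card_sign(2, [m1]))
m3 = (3, card_sign(3, [m2]))
assert verify(*m3)
print("output 3 verified; its full history was never inspected")
```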

    Efficient and Round-Optimal Oblivious Transfer and Commitment with Adaptive Security

    We construct the most efficient two-round adaptively secure bit-OT in the Common Random String (CRS) model. The scheme is UC secure under the Decisional Diffie-Hellman (DDH) assumption. It incurs O(1) exponentiations and sends O(1) group elements, whereas the state of the art requires O(k^2) exponentiations and communicates poly(k) bits, where k is the computational security parameter. Along the way, we obtain several other efficient UC-secure OT protocols under DDH:
    - The most efficient two-round adaptive string-OT protocol to date, assuming a programmable random oracle. Furthermore, the protocol can be made non-interactive in the simultaneous-message setting, assuming random inputs for the sender.
    - The first two-round string-OT with amortized constant exponentiations and communication overhead that is secure in the observable random oracle model.
    - The first two-round receiver-equivocal string-OT in the CRS model that incurs constant computation and communication overhead.
    We also obtain the first non-interactive adaptive string UC-commitment in the CRS model that incurs communication overhead sublinear in the security parameter: specifically, we commit to polylog(k) bits while communicating O(k) bits. Moreover, it is additively homomorphic. We can also extend our results to the single-CRS model, where multiple sessions share the same CRS. As a corollary, we obtain a two-round adaptively secure MPC protocol in this model.
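    To illustrate what the additive-homomorphism claim buys, here is the classic Pedersen commitment, where multiplying two commitments yields a commitment to the sum. This is not the paper's construction (which is UC-secure, non-interactive, and sublinear in communication); it is only a minimal sketch of the property, with toy parameters.

```python
# Additively homomorphic commitment, illustrated with a toy Pedersen
# commitment: Com(m; r) = g^m * h^r mod p. NOT the paper's scheme; it
# only demonstrates the additive-homomorphism property. Toy parameters.
import secrets

P = 1000003                    # toy prime modulus (insecure size)
Q = P - 1                      # exponent modulus
G = 2
H = pow(G, 123457, P)          # in a real CRS, log_G(H) must be unknown

def commit(m: int, r: int) -> int:
    return pow(G, m, P) * pow(H, r, P) % P

m1, r1 = 10, secrets.randbelow(Q)
m2, r2 = 32, secrets.randbelow(Q)

# Multiplying commitments adds both the messages and the randomness:
assert commit(m1, r1) * commit(m2, r2) % P == commit(m1 + m2, (r1 + r2) % Q)
print("Com(m1)*Com(m2) opens to m1+m2 =", m1 + m2)
```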

    Delegating Computation Reliably: Paradigms and Constructions

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from the PDF version of the thesis; includes bibliographical references (p. 285-297). In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for weak devices to delegate expensive tasks, such as storing a large file or running a complex computation, to more powerful entities (say servers) connected to the same network. While the delegation approach seems promising, it raises an immediate concern: when and how can a weak device verify that a computational task was completed correctly? This practically motivated question touches on foundational questions in cryptography and complexity theory. The focus of this thesis is verifying the correctness of delegated computations. We construct efficient protocols (interactive proofs) for delegating computational tasks. In particular, we present:
    - A protocol for delegating any computation, where the work needed to verify the correctness of the output is linear in the input length, polynomial in the computation's depth, and only poly-logarithmic in the computation's size. The space needed for verification is only logarithmic in the computation size. Thus, for any computation of polynomial size and poly-logarithmic depth (the rich complexity class NC), the work required to verify the correctness of the output is only quasi-linear in the input length. The work required to prove the output's correctness is only polynomial in the original computation's size. This protocol also has applications to constructing one-round arguments for delegating computation, and to efficient zero-knowledge proofs.
    - A general transformation, reducing the parallel running time (or computation depth) of the verifier in protocols for delegating computation (interactive proofs) to constant.
    Next, we explore the power of the delegation paradigm in settings where mutually distrustful parties interact. In particular, we consider the settings of checking the correctness of computer programs and of designing error-correcting codes. We show:
    - A new methodology for checking the correctness of programs (program checking), in which work is delegated from the program checker to the untrusted program being checked. Using this methodology we obtain program checkers for an entire complexity class (the class of NC¹-computations that are WNC-hard), and for a slew of specific functions such as matrix multiplication, inversion, determinant and rank, as well as graph functions such as connectivity, perfect matching and bounded-degree graph isomorphism.
    - A methodology for designing error-correcting codes with efficient decoding procedures, in which work is delegated from the decoder to the encoder. We use this methodology to obtain constant-depth (AC⁰) locally decodable and locally list-decodable codes. We also show that the parameters of these codes are optimal (up to polynomial factors) for constant-depth decoding.
    By Guy N. Rothblum, Ph.D.
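    Delegation protocols in this line of work (GKR-style interactive proofs) build on the sumcheck protocol as their central subroutine. The sketch below runs sumcheck for a small multilinear polynomial, with both parties simulated in one process and an honest prover assumed; the specific polynomial, field, and degree-1 message encoding are illustrative choices, not the thesis's full protocol.

```python
# Toy sumcheck protocol: the verifier checks a claimed sum of g over all
# 2^N boolean inputs using only N rounds and one evaluation of g.
# Assumptions: g is multilinear over F_p (so each round's message is a
# degree-1 polynomial, sent as its values at 0 and 1), honest prover.
import itertools
import secrets

P = 2**31 - 1                               # toy prime field

def g(x):
    """Example multilinear polynomial in 3 variables over F_p."""
    x1, x2, x3 = x
    return (5 + 2*x1 + 3*x2*x3 + 7*x1*x2*x3) % P

N = 3

def true_sum():
    return sum(g(b) for b in itertools.product((0, 1), repeat=N)) % P

claim = true_sum()                          # prover's claimed sum
fixed = []                                  # verifier's random challenges

for i in range(N):
    # Prover: s_i(v) = sum of g with the first i variables fixed, the
    # (i+1)-th set to v, and the rest summed over {0,1}.
    def s(v):
        suffixes = itertools.product((0, 1), repeat=N - 1 - i)
        return sum(g(fixed + [v] + list(t)) for t in suffixes) % P
    s0, s1 = s(0), s(1)

    # Verifier: consistency check, then a fresh random challenge.
    assert (s0 + s1) % P == claim, "prover caught cheating"
    r = secrets.randbelow(P)
    claim = (s0 + r * (s1 - s0)) % P        # s_i(r), next round's claim
    fixed.append(r)

# Final check: a single evaluation of g at the random point.
assert g(fixed) % P == claim
print("sumcheck accepted; verified sum =", true_sum())
```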