9 research outputs found

    Probabilistic Proof Systems

    Various types of probabilistic proof systems have played a central role in the development of computer science in the last decade. In this exposition, we concentrate on three such proof systems -- interactive proofs, zero-knowledge proofs, and probabilistically checkable proofs -- stressing the essential role of randomness in each of them. This exposition is an expanded version of a survey written for the proceedings of the International Congress of Mathematicians (ICM94) held in Zurich in 1994. It is hoped that this exposition may be accessible to a broad audience of computer scientists and mathematicians.
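    To make the flavor of these systems concrete, here is a minimal Python sketch of the classic interactive proof for graph non-isomorphism, a standard illustration of how verifier randomness enables proofs for statements with no known short certificates. It is an illustrative toy, not a construction from the survey, and all names in it are ours.

    ```python
    import random
    from itertools import permutations

    # Toy sketch (illustrative, not from the survey): graphs are frozensets
    # of 2-element edge frozensets over vertices 0..n-1.

    def permute(edges, pi):
        """Relabel an edge set by the vertex permutation pi."""
        return frozenset(frozenset({pi[u], pi[v]}) for u, v in map(tuple, edges))

    def isomorphic(g, h, n):
        """Brute-force isomorphism test (fine for tiny toy graphs)."""
        return any(permute(g, dict(enumerate(p))) == h for p in permutations(range(n)))

    def verifier_round(g0, g1, n, prover):
        """Verifier sends a randomly permuted copy of a randomly chosen graph;
        an unbounded prover must say which graph it came from."""
        b = random.randrange(2)
        pi = dict(enumerate(random.sample(range(n), n)))
        challenge = permute(g0 if b == 0 else g1, pi)
        return prover(g0, g1, n, challenge) == b

    def honest_prover(g0, g1, n, h):
        # When g0 and g1 are non-isomorphic, h matches exactly one of them.
        return 0 if isomorphic(g0, h, n) else 1

    # A 3-vertex path vs. a triangle: non-isomorphic, so the prover always wins;
    # were the graphs isomorphic, any prover would fail each round w.p. 1/2.
    path = frozenset({frozenset({0, 1}), frozenset({1, 2})})
    triangle = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
    assert all(verifier_round(path, triangle, 3, honest_prover) for _ in range(20))
    ```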

    Zero-Knowledge Proofs of Proximity

    Interactive proofs of proximity (IPPs) are interactive proofs in which the verifier runs in time sub-linear in the input length. Since the verifier cannot even read the entire input, following the property testing literature, we only require that the verifier reject inputs that are far from the language (and, as usual, accept inputs that are in the language). In this work, we initiate the study of zero-knowledge proofs of proximity (ZKPP). A ZKPP convinces a sub-linear time verifier that the input is close to the language (similarly to an IPP) while simultaneously guaranteeing a natural zero-knowledge property. Specifically, the verifier learns nothing beyond (1) the fact that the input is in the language, and (2) what it could additionally infer by reading a few bits of the input. Our main focus is the setting of statistical zero-knowledge, where we show that the following hold unconditionally (where N denotes the input length):
    - Statistical ZKPPs can be sub-exponentially more efficient than property testers (or even non-interactive IPPs): we show a natural property which has a statistical ZKPP with a polylog(N) time verifier, but requires Omega(sqrt(N)) queries (and hence also runtime) for every property tester.
    - Statistical ZKPPs can be sub-exponentially less efficient than IPPs: we show a property which has an IPP with a polylog(N) time verifier, but cannot have a statistical ZKPP with even an N^(o(1)) time verifier.
    - Statistical ZKPPs with polylog(N) time verifiers exist for some graph-based properties, such as promise versions of expansion and bipartiteness, in the bounded-degree graph model.
    Lastly, we also consider the computational setting, where we show that:
    - Assuming the existence of one-way functions, every language computable either in (logspace uniform) NC or in SC has a computational ZKPP with a (roughly) sqrt(N) time verifier.
    - Assuming the existence of collision-resistant hash functions, every language in NP has a statistical zero-knowledge argument of proximity with a polylog(N) time verifier.
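    For intuition about sub-linear verification, the following toy Python property tester accepts the all-ones string and, with high probability, rejects any string that is eps-far from it, while reading only O(log(1/delta)/eps) bits. It is a hedged illustration of the query model underlying IPPs and ZKPPs, not one of the paper's constructions; the oracle and parameter names are ours.

    ```python
    import math
    import random

    def all_ones_tester(oracle, n, eps, delta=0.01):
        """Accepts every all-ones string; rejects with probability >= 1 - delta
        any string with at least eps*n zeros. Queries O(log(1/delta)/eps) bits,
        sub-linear in n, like an IPP/ZKPP verifier. `oracle(i)` returns bit i
        of the input without the tester ever reading the whole string."""
        q = math.ceil(math.log(1 / delta) / eps)
        return all(oracle(random.randrange(n)) == 1 for _ in range(q))

    # Usage: a string that is 10%-far from all-ones is rejected w.h.p.
    n = 10**6
    zeros = set(random.sample(range(n), n // 10))
    print(all_ones_tester(lambda i: 0 if i in zeros else 1, n, eps=0.1))  # almost surely False
    print(all_ones_tester(lambda i: 1, n, eps=0.1))                       # always True
    ```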

    The knowledge complexity of quadratic residuosity languages

    Noninteractive perfect zero-knowledge (ZK) proofs are very elusive objects. In fact, since the introduction of the noninteractive model by Blum et al. (1988), the only perfect zero-knowledge proof known was the one for quadratic nonresiduosity of Blum et al. (1991). The situation is no better in the interactive case, where perfect zero-knowledge proofs are known only for a handful of particular languages. In this work, we show that a large class of languages related to quadratic residuosity admits noninteractive perfect zero-knowledge proofs. More precisely, we give a protocol for the language of thresholds of quadratic residuosity. Moreover, we develop a new technique for converting noninteractive zero-knowledge proofs into round-optimal zero-knowledge proofs for an even wider class of languages. The transformation preserves perfect zero knowledge in the sense that, if the noninteractive proof we started with is a perfect zero-knowledge proof, then we obtain a round-optimal perfect zero-knowledge proof. The noninteractive perfect zero-knowledge proofs presented in this work can be transformed into 4-round (which is optimal) interactive perfect zero-knowledge proofs. Until now, the only known 4-round perfect ZK proof systems were the ones for quadratic nonresiduosity (Goldwasser et al., 1989) and for graph nonisomorphism (Goldreich et al., 1986), and no 4-round perfect zero-knowledge proof system was known for the simple case of the language of quadratic residues.
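    For background, here is a minimal sketch of the classic interactive zero-knowledge proof of quadratic residuosity due to Goldwasser, Micali and Rackoff, the basic protocol underlying such languages. It is not the paper's noninteractive construction; the toy modulus and variable names are ours.

    ```python
    import math
    import random

    # Public: n and x = w^2 mod n; the prover knows the secret witness w.

    def prover_commit(n):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return r, pow(r, 2, n)          # keep r secret, send y = r^2 mod n

    def prover_respond(r, w, b, n):
        return (r * pow(w, b, n)) % n   # z = r * w^b mod n

    def verifier_check(x, y, z, b, n):
        # z^2 = r^2 * w^(2b) = y * x^b (mod n) for an honest prover
        return pow(z, 2, n) == (y * pow(x, b, n)) % n

    n = 3233                             # 61 * 53; toy modulus, far too small in practice
    w = 42
    x = pow(w, 2, n)                     # public quadratic residue
    for _ in range(20):                  # each round halves a cheating prover's odds
        r, y = prover_commit(n)
        b = random.randrange(2)          # verifier's random challenge bit
        z = prover_respond(r, w, b, n)
        assert verifier_check(x, y, z, b, n)
    ```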

    Quantification of information flow in cyber physical systems

    In Cyber Physical Systems (CPSs), traditional security mechanisms such as cryptography and access control are not enough to ensure the security of the system, since complex interactions between the cyber portion and the physical portion happen frequently. In particular, the physical infrastructure is inherently observable; aggregated physical observations can lead to unintended cyber information leakage. Information flow analysis, which aims to control the way information flows among different entities, is better suited for CPSs than access control mechanisms. However, quantifying information leakage in CPSs can be challenging due to the flow of implicit information between the cyber portion, the physical portion, and the outside world. Within algorithmic theory, the online problem considers inputs that arrive one by one and deals with extracting the algorithmic solution through an advice tape without knowing some parts of the input. This dissertation focuses on statistical methods to quantify information leakage in CPSs due to algorithmic leakages, especially in CPSs that allocate constrained resources. The proposed framework is based on the advice tape concept of algorithmically quantifying information leakage, combined with statistical analysis. With aggregated physical observations, the amount of information leakage of the constrained resource due to the cyber algorithm can be quantified through the proposed algorithms. An electric smart grid has been used as an example to develop confidence intervals of information leakage within a real CPS. The characteristic of the physical system, which is represented as an invariant, is also considered and influences the information quantification results. The impact of this work is that it allows the user to express an observer's uncertainty about a secret as a function of the revealed part. Thus, it can be used as an algorithmic design in a CPS to allocate resources while maximizing the uncertainty of the information flow to an observer --Abstract, page iii
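    As a hedged illustration of the general idea (not the dissertation's framework), the following Python sketch quantifies leakage about a secret from aggregated observations as estimated mutual information; the sample data and all names are hypothetical.

    ```python
    import math
    from collections import Counter, defaultdict

    def entropy(counts):
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def leakage_bits(samples):
        """Estimate I(S;O) = H(S) - H(S|O) from (secret, observation) pairs:
        the bits an observer learns about the secret from the observation."""
        h_s = entropy(Counter(s for s, _ in samples))
        by_obs = defaultdict(Counter)
        for s, o in samples:
            by_obs[o][s] += 1
        n = len(samples)
        h_s_given_o = sum(sum(c.values()) / n * entropy(c) for c in by_obs.values())
        return h_s - h_s_given_o

    # Toy CPS: the observed load parity reveals one bit of a 2-bit secret demand.
    samples = [(s, s % 2) for s in [0, 1, 2, 3] * 250]
    print(leakage_bits(samples))        # ~1.0 bit leaked
    ```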

    The Mother of All Leakages: How to Simulate Noisy Leakages via Bounded Leakage (Almost) for Free

    We show that noisy leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to a small statistical simulation error and a slight loss in the leakage parameter. The latter holds true in particular for one of the most used noisy-leakage models, where the noisiness is measured using the conditional average min-entropy (Naor and Segev, CRYPTO'09 and SICOMP'12). Our reductions between noisy and bounded leakage are achieved in two steps. First, we put forward a new leakage model (dubbed the dense leakage model) and prove that dense leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to small statistical distance. Second, we show that the most common noisy-leakage models fall within the class of dense leakage, with good parameters. We also provide a complete picture of the relationships between different noisy-leakage models, and prove lower bounds showing that our reductions are nearly optimal. Our result finds applications to leakage-resilient cryptography, where we are often able to lift security in the presence of bounded leakage to security in the presence of noisy leakage, both in the information-theoretic and in the computational setting. Additionally, we show how to use lower bounds in communication complexity to prove that bounded-collusion protocols (Kumar, Meka, and Sahai, FOCS'19) for certain functions not only require long transcripts, but also necessarily reveal enough information about the inputs.
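    For reference, the noisiness measure cited above can be computed directly from a joint distribution. The sketch below implements the conditional average min-entropy of Naor and Segev on a toy distribution; the distribution and names are ours, and nothing here reproduces the paper's reductions.

    ```python
    import math
    from collections import defaultdict

    # Conditional average min-entropy:
    #   H∞(X|Z) = -log2 E_z[ max_x Pr[X=x | Z=z] ]
    #           = -log2 sum_z max_x Pr[X=x, Z=z].
    # A leakage Z is "noisy" when observing it lowers this entropy only slightly.

    def avg_min_entropy(joint):
        """joint: dict mapping (x, z) -> Pr[X=x, Z=z]."""
        best_given_z = defaultdict(float)
        for (x, z), p in joint.items():
            best_given_z[z] = max(best_given_z[z], p)
        return -math.log2(sum(best_given_z.values()))

    # Toy example: a uniform 2-bit secret X; Z leaks X's low bit, flipped w.p. 1/4.
    joint = {}
    for x in range(4):
        for z in range(2):
            joint[(x, z)] = 0.25 * (0.75 if z == x % 2 else 0.25)
    print(avg_min_entropy(joint))   # ~1.4 bits: some, but not all, of X is leaked
    ```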

    What Information is Leaked under Concurrent Composition?

    Achieving security under concurrent composition is notoriously hard. Indeed, in the plain model, far-reaching impossibility results for concurrently secure computation are known. On the other hand, some positive results have also been obtained under various weaker notions of security (such as by using a super-polynomial time simulator). This suggests that, somehow, "not all is lost" in the concurrent setting. In this work, we ask what, and exactly how much, private information the adversary can learn by launching a concurrent attack. Can he learn all the private inputs in all the sessions? Or can we preserve the security of some (or even most) of the sessions fully while compromising (a small fraction of) other sessions? Or is it the case that the security of all (or most) sessions is (at least partially) compromised? If so, can we restrict the adversary to learn an arbitrarily small fraction of the input in each session?
    We believe the above questions to be fundamental to the understanding of concurrent composition. Indeed, despite a large body of work on concurrent composition, in our opinion, the understanding of what exactly goes wrong in the concurrent setting, and to what extent, is currently quite unsatisfactory. Towards that end, we adopt the knowledge-complexity based approach of Goldreich and Petrank [STOC'91] to quantify information leakage in concurrently secure computation. We consider a model where the ideal-world adversary (a.k.a. the simulator) is allowed to query the trusted party for some "leakage" on the honest party inputs. We obtain both positive and negative results, depending upon the nature of the leakage queries available to the simulator. Informally speaking, our results imply the following: in the concurrent setting, significant loss of security (translating to high leakage in the ideal world) in some of the sessions is unavoidable if one wishes to obtain a general result. However, on the brighter side, one can make the fraction of such sessions an arbitrarily small polynomial (while fully preserving the security of all other sessions).
    Our results also have an implication for secure computation in the bounded concurrent setting [Barak, FOCS'01]: we show there exist protocols which are secure as per the standard ideal/real world notion in the bounded concurrent setting; however, if the actual number of sessions happens to exceed the bound, there is a graceful degradation of security as the number of sessions increases. (In contrast, prior results do not provide any security once the bound is exceeded.) In order to obtain our positive result, we model concurrent extraction as the classical set-covering problem and develop, as our main technical contribution, a new sparse rewinding strategy. Specifically, unlike previous rewinding strategies, which are very "dense", we rewind "small intervals" of the execution transcript and still guarantee extraction. This yields other applications as well, including improved constructions of precise concurrent zero-knowledge [Pandey et al., Eurocrypt'08] and concurrently secure computation in the multiple ideal query model [Goyal et al., Crypto'10]. In order to obtain our negative results, interestingly, we employ techniques from the regime of leakage-resilient cryptography [Dziembowski and Pietrzak, FOCS'08].
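    Since the abstract models concurrent extraction as the classical set-covering problem, a minimal greedy set-cover sketch (the textbook ln(n)-approximation) may help fix that abstraction. It is illustrative only and does not reproduce the paper's sparse rewinding strategy; all names are ours.

    ```python
    def greedy_set_cover(universe, subsets):
        """Pick subsets (by key) until the universe is covered."""
        uncovered = set(universe)
        chosen = []
        while uncovered:
            # Greedily take the subset covering the most still-uncovered points;
            # think of each subset as one "small interval" of the transcript
            # from which a rewinding could extract.
            best = max(subsets, key=lambda k: len(subsets[k] & uncovered))
            if not subsets[best] & uncovered:
                raise ValueError("universe not coverable")
            uncovered -= subsets[best]
            chosen.append(best)
        return chosen

    sessions = range(8)
    intervals = {"A": {0, 1, 2}, "B": {2, 3, 4, 5}, "C": {5, 6}, "D": {6, 7}, "E": {0, 4}}
    print(greedy_set_cover(sessions, intervals))  # e.g. ['B', 'A', 'D']
    ```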

    A study of statistical zero-knowledge proofs

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1999. Includes bibliographical references (p. 181-190). By Salil Pravin Vadhan, Ph.D.

    Quantifying Knowledge Complexity

    One of the many contributions of the seminal paper of Goldwasser, Micali and Rackoff is the introduction of the notion of knowledge complexity. Knowledge complexity zero (also known as zero-knowledge) has received most of the attention of the authors and all the attention of their followers. In this paper, we present several alternative definitions of knowledge complexity and investigate the relations between them. An extended abstract of this paper appeared in the 32nd Annual IEEE Symposium on the Foundations of Computer Science (FOCS'91), held in San Juan, Puerto Rico, October 1991.