
    Multi-Party Protocols, Information Complexity and Privacy

    We introduce a new measure, Public Information Complexity (PIC), as a tool for the study of multi-party computation protocols and of quantities such as their communication complexity or the amount of randomness they require in the context of information-theoretically private computation. We are able to use this measure directly in the natural asynchronous message-passing peer-to-peer model, and we show a number of interesting properties and applications of the new notion: the Public Information Complexity is a lower bound on the Communication Complexity and an upper bound on the Information Complexity; the difference between the Public Information Complexity and the Information Complexity provides a lower bound on the amount of randomness used in a protocol; any communication protocol can be compressed down to its Public Information Cost; and we give an explicit calculation of the zero-error Public Information Complexity of the k-party, n-bit Parity function, where one player outputs the bit-wise parity of the inputs. The latter result establishes that the amount of randomness needed for a private protocol computing this function is Omega(n).
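    In symbols, the relations claimed above can be summarized as follows (a schematic restatement in generic notation, where CC, PIC and IC denote the communication, public information and information complexity of a function f, and R(π) the amount of randomness used by a protocol π; the notation here is ours, not necessarily the paper's):

        \mathrm{IC}(f) \;\le\; \mathrm{PIC}(f) \;\le\; \mathrm{CC}(f),
        \qquad
        R(\pi) \;\ge\; \mathrm{PIC}(\pi) - \mathrm{IC}(\pi).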

    On the Communication Complexity of Secure Computation

    Information-theoretically secure multi-party computation (MPC) is a central primitive of modern cryptography. However, relatively little is known about the communication complexity of this primitive. In this work, we develop powerful information-theoretic tools to prove lower bounds on the communication complexity of MPC. We restrict ourselves to a 3-party setting in order to bring out the power of these tools without introducing too many complications. Our techniques include the use of a data processing inequality for residual information (i.e., the gap between mutual information and Gács-Körner common information), a new information inequality for 3-party protocols, and the idea of distribution switching, by which lower bounds computed under certain worst-case scenarios can be shown to apply to the general case. Using these techniques, we obtain tight bounds on the communication complexity of MPC protocols for various interesting functions. In particular, we show concrete functions that have "communication-ideal" protocols, which achieve the minimum communication simultaneously on all links in the network. We also obtain the first explicit example of a function that incurs a higher communication cost than the input length in the secure computation model of Feige, Kilian and Naor (1994), who had shown that such functions exist. Finally, we show that our communication bounds imply tight lower bounds on the amount of randomness required by MPC protocols for many interesting functions.
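    For reference, the residual information used above is the gap between mutual information and the Gács-Körner common information; in generic notation (ours, not necessarily the paper's), for a pair of random variables X, Y it can be written as

        \mathrm{RI}(X;Y) \;=\; I(X;Y) \;-\; K_{\mathrm{GK}}(X;Y),

    where K_GK(X;Y) is the largest entropy H(Q) achievable by a random variable Q that can be computed from X alone and also from Y alone.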

    How to Securely Compute the Modulo-Two Sum of Binary Sources

    In secure multiparty computation, mutually distrusting users in a network want to collaborate to compute functions of data which is distributed among the users. The users should not learn any information about the others' data beyond what they may infer from their own data and the functions they are computing. Previous works have mostly considered the worst-case context (i.e., without assuming any distribution on the data); Lee and Abbe (2014) is a notable exception. Here, we study the average case (i.e., we work with a distribution on the data), where correctness and privacy are only desired asymptotically. For concreteness and simplicity, we consider a secure version of the function computation problem of Körner and Marton (1979), where two users observe a doubly symmetric binary source with parameter p and the third user wants to compute the XOR. We show that the amount of communication and randomness resources required depends on the level of correctness desired. When zero error and perfect privacy are required, the results of Data et al. (2014) show that this can be achieved if and only if a total rate of 1 bit is communicated between every pair of users and private randomness is used at a rate of 1. In contrast, we show here that if we only want the probability of error to vanish asymptotically in the block length, this can be achieved at a lower rate, namely the binary entropy of p, both for all the links and for the private randomness; this also guarantees perfect privacy. We also show that no smaller rates are possible even if privacy is only required asymptotically.
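    To make the rate comparison concrete, the snippet below (an illustrative calculation, not code from the paper) evaluates the binary entropy h(p) of the source parameter, which is the per-link and private-randomness rate shown to suffice in the vanishing-error regime, against the rate of 1 bit needed for zero error and perfect privacy:

        import math

        def binary_entropy(p: float) -> float:
            """h(p) = -p*log2(p) - (1-p)*log2(1-p), with h(0) = h(1) = 0."""
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        # Rate per link (and for private randomness) with vanishing error,
        # versus the rate of 1 bit required for zero error and perfect privacy.
        for p in (0.05, 0.11, 0.25, 0.5):
            print(f"p = {p:4.2f}:  h(p) = {binary_entropy(p):.3f} bits   (zero-error rate: 1 bit)")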

    The Quantum Frontier

    The success of the abstract model of computation, in terms of bits, logical operations, programming language constructs, and the like, makes it easy to forget that computation is a physical process. Our cherished notions of computation and information are grounded in classical mechanics, but the physics underlying our world is quantum. In the early 1980s, researchers began to ask how computation would change if we adopted a quantum mechanical, instead of a classical mechanical, view of computation. Slowly, a new picture of computation arose, one that gave rise to a variety of faster algorithms, novel cryptographic mechanisms, and alternative methods of communication. Small quantum information processing devices have been built, and efforts are underway to build larger ones. Even apart from the existence of these devices, the quantum view of information processing has provided significant insight into the nature of computation and information, and a deeper understanding of the physics of our universe and its connections with computation. We start by describing aspects of quantum mechanics that are at the heart of a quantum view of information processing. We give our own idiosyncratic view of a number of these topics in the hope of correcting common misconceptions and highlighting aspects that are often overlooked. A number of the phenomena described were initially viewed as oddities of quantum mechanics. It was quantum information processing, first quantum cryptography and then, more dramatically, quantum computing, that turned the tables and showed that these oddities could be put to practical effect. It is these applications that we describe next. We conclude with a section describing some of the many questions left for future work, especially the mysteries surrounding where the power of quantum information ultimately comes from.

    Finite state verifiers with constant randomness

    We give a new characterization of NL as the class of languages whose members have certificates that can be verified with small error in polynomial time by finite state machines that use a constant number of random bits, as opposed to its conventional description in terms of deterministic logarithmic-space verifiers. It turns out that allowing two-way interaction with the prover does not change the class of verifiable languages, and that no polynomially bounded amount of randomness is useful for constant-memory computers when used as language recognizers or public-coin verifiers. A corollary of our main result is that the class of outcome problems corresponding to O(log n)-space bounded games of incomplete information, where the universal player is allowed a constant number of moves, equals NL.

    A Shannon Approach to Secure Multi-party Computations

    In secure multi-party computation (SMC), parties wish to compute a function on their private data without revealing more information about their data than what the function itself reveals. In this paper, we investigate two Shannon-type questions about this problem. We first consider the traditional one-shot model for SMC, which does not assume a probabilistic prior on the data. In this model, private communication and randomness are the key enablers of secure computing, and we investigate a notion of randomness cost and capacity. We then move to a probabilistic model for the data and propose a Shannon model for discrete memoryless SMC. In this model, correlations among the data are the key enablers of secure computing, and we investigate a notion of dependency that permits the secure computation of a function. While the models and questions are general, this paper focuses on summation functions and relies on polar code constructions.
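    To illustrate why private randomness is the key resource for securely computing a summation in the one-shot model, here is a minimal sketch of the standard additive-secret-sharing protocol for secure summation (a textbook construction shown for illustration; it is not the polar-code-based scheme developed in the paper, and the modulus and party count are arbitrary):

        import secrets

        Q = 2**16  # public modulus; inputs and the output live in Z_Q

        def share(x: int, n_parties: int) -> list[int]:
            """Split x into n additive shares modulo Q using private randomness."""
            shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
            shares.append((x - sum(shares)) % Q)
            return shares

        def secure_sum(inputs: list[int]) -> int:
            """Each party sends one share to every party; only the total is revealed."""
            n = len(inputs)
            all_shares = [share(x, n) for x in inputs]
            # Party j locally adds the j-th share it received from every party ...
            partial = [sum(all_shares[i][j] for i in range(n)) % Q for j in range(n)]
            # ... and the parties publish and add their partial sums.
            return sum(partial) % Q

        print(secure_sum([3, 5, 10]))  # 18

    Each individual share a party receives is uniformly distributed, so it reveals nothing about the sender's input on its own, while the published partial sums reveal exactly the total.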

    Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy

    We consider a scenario involving computations over a massive dataset stored in a distributed fashion across multiple workers, which is at the core of distributed learning algorithms. We propose Lagrange Coded Computing (LCC), a new framework that simultaneously provides (1) resiliency against stragglers that may prolong computations; (2) security against Byzantine (or malicious) workers that deliberately modify the computation for their benefit; and (3) (information-theoretic) privacy of the dataset amidst possible collusion of workers. LCC, which leverages the well-known Lagrange polynomial to create computation redundancy in a novel coded form across workers, can be applied to any computation scenario in which the function of interest is an arbitrary multivariate polynomial of the input dataset, hence covering many computations of interest in machine learning. LCC significantly generalizes prior works to go beyond linear computations. It also enables secure and private computing in distributed settings, improving the computation and communication efficiency of the state of the art. Furthermore, we prove the optimality of LCC by showing that it achieves the optimal tradeoff between resiliency, security, and privacy, i.e., it tolerates the maximum number of stragglers and adversaries while providing data privacy against the maximum number of colluding workers. Finally, we show via experiments on Amazon EC2 that LCC speeds up the conventional uncoded implementation of distributed least-squares linear regression by up to 13.43×, and achieves a 2.36×-12.65× speedup over state-of-the-art straggler mitigation strategies.
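    The encoding idea behind LCC can be illustrated in a few lines (a simplified sketch with illustrative parameters over a small prime field, ignoring the security and privacy components of the full scheme): the master interpolates a Lagrange polynomial u through the data blocks, gives each worker one evaluation of u, each worker applies the target polynomial f to its encoded block, and the master recovers f on every data block by interpolating the workers' results.

        # Simplified Lagrange-coded computation of f(x) = x**2 over GF(P).
        P = 2_147_483_647  # a prime field size (illustrative)

        def lagrange_eval(points, z):
            """Evaluate the polynomial interpolating `points` at z, modulo P."""
            total = 0
            for i, (xi, yi) in enumerate(points):
                num, den = 1, 1
                for j, (xj, _) in enumerate(points):
                    if i != j:
                        num = num * (z - xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total

        f = lambda x: x * x % P       # the polynomial to compute (degree 2)
        data = [7, 11, 13]            # k = 3 input blocks (scalars for simplicity)
        alphas = [1, 2, 3]            # interpolation points assigned to the data
        betas = list(range(10, 17))   # n = 7 worker evaluation points

        # Encode: worker m receives u(beta_m), where u interpolates the data at the alphas.
        encoded = [lagrange_eval(list(zip(alphas, data)), b) for b in betas]
        # Workers apply f to their encoded blocks; f(u(z)) has degree 2*(k-1) = 4,
        # so any 5 worker results suffice and the other workers provide redundancy.
        results = list(zip(betas, [f(c) for c in encoded]))[:5]
        # Decode: interpolate f(u(z)) from the results and read it off at the alphas.
        decoded = [lagrange_eval(results, a) for a in alphas]
        print(decoded, [f(x) for x in data])  # both give the squares of the data blocks

    Because f(u(z)) is itself a low-degree polynomial, the decoder only needs enough worker results to interpolate it; the remaining evaluations are the redundancy that absorbs stragglers or corrupted responses.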