
    Efficient Constructions for Almost-everywhere Secure Computation

    The importance of efficient MPC in today's world needs no retelling. An obvious barebones requirement to execute protocols for MPC is the ability of parties to communicate with each other. Traditionally, we solve this problem by assuming that every pair of parties in the network shares a dedicated secure link that enables reliable message transmission. This assumption is clearly impractical as the number of nodes in the network grows, as it has today. In their seminal work, Dwork, Peleg, Pippenger and Upfal introduced the notion of almost-everywhere secure primitives in an effort to model the reality of large-scale global networks and study the impact of limited connectivity on the properties of fundamental fault-tolerant distributed tasks. In this model, the underlying communication network is sparse, and hence some nodes may not even be in a position to participate in the protocol (all their neighbors may be corrupt, for instance). A protocol for almost-everywhere reliable message transmission, which guarantees that a large subset of the network can transmit messages to each other reliably, implies a protocol for almost-everywhere agreement, where nodes are required to agree on a value despite malicious or Byzantine behavior of some subset of nodes, and an almost-everywhere agreement protocol in turn implies a protocol for almost-everywhere secure MPC that is unconditionally or information-theoretically secure. The parameters of interest are the degree $d$ of the network, the number $t$ of corrupted nodes that can be tolerated, and the number $x$ of nodes that the protocol may give up. Prior work achieves $d = O(1)$ for $t = O(n/\log n)$ and $d = O(\log^{q} n)$ for $t = O(n)$, for some fixed constant $q > 1$. In this work, we first derive message transmission protocols that are efficient with respect to the total number of computations done across the network. We use this result to show an abundance of networks with $d = O(1)$ that are resilient to $t = O(n)$ random corruptions. This randomized result helps us build networks that are resistant to worst-case adversaries. In particular, we improve the state of the art in the almost-everywhere reliable message transmission problem in the worst-case adversary model by showing the existence of an abundance of networks that satisfy $d = O(\log n)$ for $t = O(n)$, thus making progress on this question after nearly a decade. Finally, we define a new adversarial model of corruptions that is suitable for networks shared amongst a large group of corporations that (1) do not trust each other and (2) may collude, and we construct optimal networks achieving $d = O(1)$ for $t = O(n)$ in this model.
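    To make the parameters $d$, $t$, and $x$ concrete, the toy simulation below (not one of the paper's constructions; it assumes the networkx library and uses a random $d$-regular graph as a stand-in network) corrupts a random $t$-subset of nodes and measures how many honest nodes remain in a single connected honest component, a crude proxy for the $n - t - x$ nodes an almost-everywhere protocol could still serve under random corruptions.

        import random
        import networkx as nx

        def surviving_fraction(n=10_000, d=8, corrupt_frac=0.10, seed=0):
            """Corrupt a random subset of a random d-regular network and report the
            fraction of honest nodes lying in the largest connected honest component
            (a rough stand-in for n - t - x in the almost-everywhere model)."""
            rng = random.Random(seed)
            g = nx.random_regular_graph(d, n, seed=seed)
            corrupt = set(rng.sample(range(n), int(corrupt_frac * n)))
            honest = g.subgraph(set(g.nodes) - corrupt)
            giant = max(nx.connected_components(honest), key=len)
            return len(giant) / (n - len(corrupt))

        if __name__ == "__main__":
            for rho in (0.05, 0.10, 0.20):
                print(f"corrupt {rho:.0%}: {surviving_fraction(corrupt_frac=rho):.4f} of honest nodes connected")

    Constant degree with a constant fraction of random corruptions typically leaves most honest nodes connected in such experiments, which is the intuition the randomized result above builds on; the worst-case adversary model is much harsher and is where the $d = O(\log n)$ bound applies.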

    Adversarial Wiretap Channel with Public Discussion

    Wyner's elegant model of the wiretap channel exploits noise in the communication channel to provide perfect secrecy against a computationally unlimited eavesdropper without requiring a shared key. We consider an adversarial model of the wiretap channel proposed in [18,19] where the adversary is active: it selects a fraction $\rho_r$ of the transmitted codeword to eavesdrop on and a fraction $\rho_w$ of the codeword to corrupt by "adding" adversarial error. It was shown that this model also captures network adversaries in the setting of 1-round Secure Message Transmission [8]. It was proved that secure (1-round) communication is possible if and only if $\rho_r + \rho_w < 1$. In this paper we show that by allowing the communicants access to a public discussion channel (authentic communication without secrecy), secure communication becomes possible even if $\rho_r + \rho_w > 1$. We formalize the model of adversarial wiretap channel with public discussion (AWTP-PD) protocols and derive tight bounds for two efficiency measures, information rate and message round complexity. We also construct a rate-optimal protocol family with the minimum number of message rounds. We show applications of these results to Secure Message Transmission with Public Discussion (SMT-PD), and in particular give a new lower bound on the transmission rate of these protocols together with a new construction of an optimal SMT-PD protocol.
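    As a quick aid to the thresholds stated above (this only encodes the quoted feasibility condition, not any protocol), the sketch below checks whether 1-round secure communication is possible for given read and write fractions, and flags the regime where, per the abstract, only the public-discussion setting can help.

        def awtp_one_round_feasible(rho_r: float, rho_w: float) -> bool:
            """1-round secure communication over the adversarial wiretap channel,
            without public discussion, is possible iff rho_r + rho_w < 1."""
            assert 0.0 <= rho_r <= 1.0 and 0.0 <= rho_w <= 1.0
            return rho_r + rho_w < 1.0

        for rho_r, rho_w in [(0.4, 0.4), (0.7, 0.5)]:
            if awtp_one_round_feasible(rho_r, rho_w):
                print(f"rho_r={rho_r}, rho_w={rho_w}: 1-round secure communication is possible")
            else:
                print(f"rho_r={rho_r}, rho_w={rho_w}: not possible without a public discussion channel")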

    Breaking the $O(\sqrt n)$-Bit Barrier: Byzantine Agreement with Polylog Bits Per Party

    Byzantine agreement (BA), the task of $n$ parties to agree on one of their input bits in the face of malicious agents, is a powerful primitive that lies at the core of a vast range of distributed protocols. Interestingly, in protocols with the best overall communication, the demands on the parties are highly unbalanced: the amortized cost is $\tilde O(1)$ bits per party, but some parties must send $\Omega(n)$ bits. In the best known balanced protocols, the overall communication is sub-optimal, with each party communicating $\tilde O(\sqrt{n})$ bits. In this work, we ask whether this asymmetry is inherent for optimizing total communication. Our contributions in this line are as follows: 1) We define a cryptographic primitive, succinctly reconstructed distributed signatures (SRDS), that suffices for constructing $\tilde O(1)$ balanced BA. We provide two constructions of SRDS from different cryptographic and Public-Key Infrastructure (PKI) assumptions. 2) The SRDS-based BA follows a paradigm of boosting from "almost-everywhere" agreement to full agreement, and does so in a single round. We prove that PKI setup and cryptographic assumptions are necessary for such protocols in which every party sends $o(n)$ messages. 3) We further explore connections between a natural approach toward attaining SRDS and average-case succinct non-interactive argument systems (SNARGs) for a particular class of NP-complete problems (generalizing Subset-Sum and Subset-Product). Our results provide new approaches forward, as well as limitations and barriers, toward minimizing the per-party communication of BA. In particular, we construct the first two BA protocols with $\tilde O(1)$ balanced communication, offering a tradeoff between setup and cryptographic assumptions, and answering an open question presented by King and Saia (DISC'09).
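    The back-of-the-envelope comparison below (polylog factors dropped; purely illustrative and not taken from the paper) contrasts the three communication profiles named above: optimal total but unbalanced, balanced at roughly $\sqrt{n}$ bits per party, and the balanced polylog-per-party regime targeted here.

        def communication_profiles(n: int) -> dict:
            """Rough bit counts for the three regimes, with polylog factors dropped."""
            return {
                "unbalanced, optimal total": {"total": n,                 "max_per_party": n},
                "balanced, ~sqrt(n)/party":  {"total": int(n * n ** 0.5), "max_per_party": int(n ** 0.5)},
                "balanced, ~polylog/party":  {"total": n,                 "max_per_party": 1},
            }

        for name, cost in communication_profiles(1 << 20).items():
            print(f"{name:28s} total~{cost['total']:>16,d}   max per party~{cost['max_per_party']:,d}")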

    Enabling Privacy-preserving Auctions in Big Data

    We study how to enable auctions in the big data context to solve the many data-based decision problems expected in the near future. We consider the characteristics of big data including, but not limited to, velocity, volume, variety, and veracity, and we believe any future auction mechanism design should take the following factors into consideration: 1) generality (variety); 2) efficiency and scalability (velocity and volume); 3) truthfulness and verifiability (veracity). In this paper, we propose a privacy-preserving construction for auction mechanism design in the big data setting, which prevents adversaries from learning any information beyond what is implied by the valid output of the auction. More specifically, we consider one of the most general forms of auction (to deal with variety), greatly improve efficiency and scalability by approximating the NP-hard problems and avoiding designs based on garbled circuits (to deal with velocity and volume), and prevent stakeholders from lying to each other for their own benefit (to deal with veracity). We achieve this by introducing a novel privacy-preserving winner determination algorithm and a novel payment mechanism. Additionally, we employ a blind signature scheme as a building block to let bidders verify the authenticity of their payments as reported by the auctioneer. The comparison with peer work shows that we improve the asymptotic overhead of peer works from exponential to linear growth and from linear to logarithmic growth, which greatly improves scalability.
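    The abstract does not specify which blind signature scheme is used; the snippet below is a generic textbook RSA blind-signature round (toy parameters, not secure, and not the paper's construction), included only to illustrate the blind/sign/unblind/verify flow by which a party obtains a publicly verifiable signature without revealing the signed value to the signer.

        from math import gcd
        import random

        # Toy textbook RSA key (insecure parameters, for illustration only)
        p, q = 61, 53
        N, e = p * q, 17
        d = pow(e, -1, (p - 1) * (q - 1))   # signer's private exponent

        def blind_signature_round(message: int) -> bool:
            r = random.randrange(2, N)
            while gcd(r, N) != 1:
                r = random.randrange(2, N)
            blinded = (message * pow(r, e, N)) % N              # requester blinds the message
            signed_blinded = pow(blinded, d, N)                 # signer signs without seeing it
            signature = (signed_blinded * pow(r, -1, N)) % N    # requester unblinds
            return pow(signature, e, N) == message % N          # anyone can verify

        print(blind_signature_round(1234))   # True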

    Quantum entropic security and approximate quantum encryption

    We present full generalisations of entropic security and entropic indistinguishability to the quantum world, where no assumption is made beyond a limit on the knowledge of the adversary. This limit is quantified using the quantum conditional min-entropy as introduced by Renato Renner. A proof of the equivalence between the two security definitions is presented. We also provide proofs of security for two different cyphers in this model and a proof of a lower bound on the key length required by any such cypher. These cyphers generalise existing schemes for approximate quantum encryption to the entropic security model.
    Comment: Corrected mistakes in the proofs of Theorems 3 and 6; results unchanged. To appear in IEEE Transactions on Information Theory.
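    For reference (this is the standard definition from the smooth-entropy literature, not quoted from the paper), the quantum conditional min-entropy of a bipartite state $\rho_{AB}$, with $B$ held by the adversary, can be written as

        H_{\min}(A \mid B)_{\rho} \;=\; \max_{\sigma_B} \, \sup \left\{ \lambda \in \mathbb{R} \;:\; \rho_{AB} \,\le\, 2^{-\lambda}\, \mathbb{1}_A \otimes \sigma_B \right\},

    where the maximum ranges over density operators $\sigma_B$ on system $B$; a larger value means the adversary knows less about $A$, which is the kind of limit on adversarial knowledge the entropic security definitions above impose.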

    Chaotic Compilation for Encrypted Computing: Obfuscation but Not in Name

    An 'obfuscation' for encrypted computing is quantified exactly here, leading to an argument that security against polynomial-time attacks has been achieved for user data via the deliberately 'chaotic' compilation required for security properties in that environment. Encrypted computing is the emerging science and technology of processors that take encrypted inputs to encrypted outputs via encrypted intermediate values (at nearly conventional speeds). The aim is to make user data in general-purpose computing secure against the operator and operating system as potential adversaries. A stumbling block has always been that memory addresses are data, and good encryption means the encrypted value varies randomly, which makes hitting any target in memory problematic without address decryption; yet decryption anywhere on the memory path would open up many easily exploitable vulnerabilities. This paper 'solves (chaotic) compilation' for processors without address decryption, covering all of ANSI C while satisfying the required security properties and opening up the field for the standard software tool-chain and infrastructure. That produces the argument referred to above, which may also hold without encryption.
    Comment: 31 pages. Version update adds "Chaotic" in title and throughout the paper, and recasts the abstract, Intro and other sections of the text for better access by cryptologists. To the same end it introduces the polynomial-time defense argument explicitly in the final section, having now set that denouement out in the abstract and intro.
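    To make the stated stumbling block concrete (a toy stand-in, not the paper's processor design or cipher), the snippet below shows why randomised encryption of addresses defeats naive memory lookup: two encryptions of the same address differ, so a memory unit that only sees ciphertexts cannot use them directly as indices without decrypting somewhere on the memory path.

        import hashlib
        import secrets

        KEY = secrets.token_bytes(16)

        def encrypt_address(addr: int) -> bytes:
            """Toy randomised encryption: fresh nonce per call, pad derived from (key, nonce)."""
            nonce = secrets.token_bytes(8)
            pad = hashlib.sha256(KEY + nonce).digest()[:8]
            return nonce + bytes(a ^ b for a, b in zip(addr.to_bytes(8, "big"), pad))

        c1, c2 = encrypt_address(0x40), encrypt_address(0x40)
        print(c1 == c2)   # False: same address, different ciphertexts, so no direct indexing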