
    Resource-Efficient and Robust Distributed Computing

    There has been tremendous growth in the size of distributed systems over the past three decades. Today, distributed systems such as the Internet have become so large that they require highly scalable algorithms: algorithms whose communication, computation, and latency costs are asymptotically small with respect to the network size. Moreover, a system with thousands or even millions of parties distributed throughout the world is likely to face faults from untrusted parties. In this dissertation, we study scalable and secure distributed algorithms that tolerate such faults. Throughout this work, we balance two important and often conflicting characteristics of distributed protocols: security and efficiency. Our first result is a protocol that solves the MPC problem with polylogarithmic communication and computation cost and is secure against an adversary that can corrupt a third of the parties. We adapt our synchronous MPC protocol to the asynchronous setting when the fraction of corrupted parties is less than 1/8. Next, we present a scalable protocol that solves the secret sharing problem between rational parties with polylogarithmic communication and computation cost. Finally, we present a protocol that solves the interactive communication problem over a noisy channel when the noise rate is unknown. For this problem, we focus on the cost of the protocol in the resource-competitive analysis model: unlike classic models, resource-competitive models account for the cost that the adversary must pay to succeed in corrupting the protocol.
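    A minimal sketch of the secret-sharing primitive that these protocols build on: plain Shamir (t, n) secret sharing over a prime field, written in Python. This is not the dissertation's scalable rational-secret-sharing or MPC protocol; the field size, threshold, and function names are assumptions made for illustration only.

    # Plain Shamir (t, n) secret sharing -- illustrative primitive only,
    # not the dissertation's scalable protocol for rational parties.
    import random

    PRIME = 2**127 - 1  # assumed field size, large enough for small secrets

    def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
        """Split `secret` into n shares; any t of them reconstruct it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        def f(x: int) -> int:
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares: list[tuple[int, int]]) -> int:
        """Lagrange interpolation at x = 0 recovers the secret from t shares."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    if __name__ == "__main__":
        pieces = share(secret=42, t=3, n=7)
        assert reconstruct(random.sample(pieces, 3)) == 42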

    Breaking the O(n^2) Bit Barrier: Scalable Byzantine agreement with an Adaptive Adversary

    We describe an algorithm for Byzantine agreement that is scalable in the sense that each processor sends only Õ(√n) bits, where n is the total number of processors. Our algorithm succeeds with high probability against an adaptive adversary, which can take over processors at any time during the protocol, up to the point of taking over arbitrarily close to a 1/3 fraction. We assume synchronous communication but a rushing adversary. Moreover, our algorithm works in the presence of flooding: processors controlled by the adversary can send out any number of messages. We assume the existence of private channels between all pairs of processors but make no other cryptographic assumptions. Finally, our algorithm has latency that is polylogarithmic in n. To the best of our knowledge, ours is the first algorithm to solve Byzantine agreement against an adaptive adversary while requiring o(n^2) total bits of communication.
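    As a toy illustration of where an Õ(√n) per-processor message budget goes, the Python sketch below has every processor send its current value to a random sample of roughly √n·log n peers and adopt the majority of what it receives. This is not the paper's Byzantine agreement protocol (there is no adversary here and no agreement or validity guarantee); the sample size and round count are assumptions chosen only to make the communication pattern concrete.

    # Toy sampling round: each processor contacts ~sqrt(n)*log(n) random peers.
    # Illustrative only -- not the paper's Byzantine agreement algorithm.
    import math
    import random
    from collections import Counter

    def sampled_round(values: list[int]) -> list[int]:
        n = len(values)
        k = min(n, int(math.sqrt(n) * math.log2(n)) + 1)  # messages per processor
        inbox: list[list[int]] = [[] for _ in range(n)]
        for sender in range(n):
            for receiver in random.sample(range(n), k):
                inbox[receiver].append(values[sender])
        # Each processor adopts the majority value among the messages it received.
        return [Counter(msgs).most_common(1)[0][0] if msgs else values[i]
                for i, msgs in enumerate(inbox)]

    if __name__ == "__main__":
        n = 1024
        vals = [1] * (2 * n // 3) + [0] * (n - 2 * n // 3)  # two thirds start at 1
        for _ in range(5):
            vals = sampled_round(vals)
        print(sum(vals), "of", n, "processors now hold the value 1")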

    Consensus in Equilibrium: Can One Against All Decide Fairly?

    Is there an equilibrium for distributed consensus when all agents except one collude to steer the decision value towards their preference? If an equilibrium exists, then a coalition of size n-1 cannot do better by deviating from the algorithm, even if it prefers a different decision value. We show that an equilibrium exists under this condition only if the number of agents in the network is odd and the decision is binary (between two possible input values). That is, within this framework we provide a separation between binary and multi-valued consensus. Moreover, the input and output distributions must be uniform, regardless of the communication model (synchronous or asynchronous). Furthermore, we define a new problem, Resilient Input Sharing (RIS), and use it to derive a necessary and sufficient condition for an (n-1)-resilient equilibrium for deterministic binary consensus, essentially showing that an equilibrium for deterministic consensus is equivalent to each agent learning all the other inputs in some strong sense. Finally, we note that an (n-2)-resilient equilibrium for binary consensus is possible for any n. The case of an (n-2)-resilient equilibrium for multi-valued consensus is left open.
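    One way to see the role of the odd-n requirement: if every agent ends up learning all n inputs (the flavor of the Resilient Input Sharing reduction), a deterministic binary decision rule such as majority is tie-free exactly when n is odd. The Python sketch below only illustrates that observation; it is not the paper's equilibrium construction, and the function name is chosen for the example.

    # Majority over binary inputs is well-defined (no ties) only for odd n.
    # Illustrative only -- not the paper's (n-1)-resilient equilibrium.
    def majority_decision(inputs: list[int]) -> int:
        """Deterministic binary decision over all n inputs; requires odd n."""
        if len(inputs) % 2 == 0:
            raise ValueError("binary majority can tie when n is even")
        ones = sum(inputs)
        return 1 if ones > len(inputs) - ones else 0

    if __name__ == "__main__":
        print(majority_decision([1, 0, 1, 1, 0]))  # n = 5 (odd) -> decides 1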

    Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments

    Decentralized systems are a subset of distributed systems in which multiple authorities control different components and no single authority is fully trusted by all. This implies that any component in a decentralized system is potentially adversarial. We review fifteen years of research on decentralization and privacy, and provide an overview of key systems as well as key insights for designers of future systems. We show that decentralized designs can enhance privacy, integrity, and availability, but they also require careful trade-offs in terms of system complexity, properties provided, and degree of decentralization. These trade-offs need to be understood and navigated by designers. We argue that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, is necessary to build scalable and successful privacy-preserving decentralized systems.