    On Collision-fast Atomic Broadcast

    Atomic Broadcast, an important abstraction in dependable distributed computing, is usually implemented by many instances of the well-known consensus problem. Some asynchronous consensus algorithms achieve the optimal latency of two (message) steps but cannot guarantee this latency even in good runs, with quick message delivery and no crashes. This is due to collisions, a result of concurrent proposals. Collision-fast consensus algorithms, which decide within two steps in good runs, exist under certain conditions. Their direct application to solving atomic broadcast, though, does not guarantee delivery in two steps for all messages unless a single failure is tolerated. We show a simple way to build a fault-tolerant collision-fast Atomic Broadcast algorithm based on a variation of the consensus problem we call M-Consensus. Our solution to M-Consensus extends the Paxos protocol to allow multiple processes, instead of the single leader, to have their proposals learned in two steps.
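
    As a rough, failure-free illustration of the two-step pattern this abstract alludes to (proposers send to acceptors, acceptors relay to learners, and a quorum fixes each proposer's slot of a mapping-valued decision), here is a minimal Python sketch. The acceptor set, quorum size, and dictionary-valued decision are illustrative assumptions, not the paper's M-Consensus protocol, which must also handle crashes, ballots, and recovery.

```python
from collections import defaultdict

# Toy setup (assumed): three acceptors with a majority quorum.
ACCEPTORS = ["a1", "a2", "a3"]
QUORUM = 2

def run_round(proposals):
    """proposals: {proposer: value}. Returns the learned mapping."""
    # Step 1: every proposer (not just one leader) sends its value to all
    # acceptors. Step 2: acceptors relay their votes to the learners.
    votes = defaultdict(lambda: defaultdict(set))  # proposer -> value -> voters
    for proposer, value in proposals.items():
        for acceptor in ACCEPTORS:
            votes[proposer][value].add(acceptor)
    # A learner decides a proposer's slot once a quorum voted the same value.
    learned = {}
    for proposer, by_value in votes.items():
        for value, voters in by_value.items():
            if len(voters) >= QUORUM:
                learned[proposer] = value
    return learned

print(run_round({"p1": "msg-1", "p2": "msg-2"}))  # both slots learned in two steps
```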

    Proof of Luck: an Efficient Blockchain Consensus Protocol

    In the paper, we present designs for multiple blockchain consensus primitives and a novel blockchain system, all based on the use of trusted execution environments (TEEs), such as Intel SGX-enabled CPUs. First, we show how using TEEs for existing proof of work schemes can make mining equitably distributed by preventing the use of ASICs. Next, we extend the design with proof of time and proof of ownership consensus primitives to make mining energy- and time-efficient. Further improving on these designs, we present a blockchain using a proof of luck consensus protocol. Our proof of luck blockchain uses a TEE platform's random number generation to choose a consensus leader, which offers low-latency transaction validation, deterministic confirmation time, negligible energy consumption, and equitably distributed mining. Lastly, we discuss a potential protection against up to a constant number of compromised TEEs. Comment: SysTEX '16, December 12-16, 2016, Trento, Italy.
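
    A minimal sketch of the leader-election idea, assuming a plain random.random() call as a stand-in for the TEE's protected random number generator; the real protocol additionally relies on enclave attestation, waiting periods, and chain validation, none of which is modeled here.

```python
import random

def tee_random_luck():
    # Stand-in for the enclave RNG: in the actual design the draw is
    # produced (and its provenance attested) inside a TEE such as Intel SGX.
    return random.random()

def elect_round_leader(participants):
    # Every participant draws a luck value in [0, 1); the luckiest leads
    # the round and proposes the next block.
    draws = {p: tee_random_luck() for p in participants}
    return max(draws, key=draws.get), draws

def chain_luck(block_lucks):
    # Competing chains are compared by cumulative luck: the luckier chain wins.
    return sum(block_lucks)

leader, draws = elect_round_leader(["node-a", "node-b", "node-c"])
print("round leader:", leader, "with luck", round(draws[leader], 3))
```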

    Random consensus protocol in large-scale networks

    One of the main performance issues for consensus protocols is the convergence speed. In this paper, we focus on the convergence behavior of discrete-time consensus protocols over large-scale sensor networks with uniformly random deployment, which are modelled as Poisson random graphs. Instead of using the random rewiring procedure, we introduce a deterministic principle to locate certain “chosen nodes” in the network and add “virtual” shortcuts among them so that the number of iterations to achieve average consensus drops dramatically. Simulation results are presented to verify the efficiency of this approach. Moreover, a random consensus protocol is proposed, in which virtual shortcuts are implemented by random routes.
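
    A small numpy sketch of the effect being described: average consensus x(k+1) = W x(k) on a plain ring converges slowly, while adding a few deterministic long-range shortcuts among “chosen nodes” sharply cuts the iteration count. The Metropolis weights, network size, tolerance, and shortcut placement below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def metropolis_weights(adj):
    # Metropolis-Hastings weights yield a symmetric doubly stochastic W,
    # so x(k+1) = W x(k) converges to the average on any connected graph.
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def iterations_to_consensus(W, x0, tol=1e-4):
    x, avg = x0.copy(), x0.mean()
    for k in range(100_000):
        if np.abs(x - avg).max() < tol:
            return k
        x = W @ x

n = 50
ring = np.zeros((n, n), dtype=int)
for i in range(n):                      # plain ring lattice
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1

x0 = np.random.default_rng(0).normal(size=n)
print("ring:", iterations_to_consensus(metropolis_weights(ring), x0))

shortcut = ring.copy()
for i in range(0, n, 10):               # "chosen nodes" every 10 hops,
    j = (i + n // 2) % n                # each wired to the opposite side
    shortcut[i, j] = shortcut[j, i] = 1
print("with shortcuts:", iterations_to_consensus(metropolis_weights(shortcut), x0))
```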

    Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging

    In this paper, we study distributed big-data nonconvex optimization in multi-agent networks. We consider the (constrained) minimization of the sum of a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a convex (possibly) nonsmooth regularizer. Our interest is in big-data problems wherein there is a large number of variables to optimize. If treated by means of standard distributed optimization algorithms, these large-scale problems may be intractable, due to the prohibitive local computation and communication burden at each node. We propose a novel distributed solution method whereby at each iteration agents optimize and then communicate (in an uncoordinated fashion) only a subset of their decision variables. To deal with non-convexity of the cost function, the novel scheme hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental to locally estimate gradient averages; and ii) a novel block-wise consensus-based protocol to perform local block-averaging operations and gradient tracking. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
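
    As a simplified illustration of ingredient i), the gradient-tracking mechanism, here is a full-variable sketch on convex local least-squares costs; the paper's scheme additionally uses SCA surrogates, updates only one block of variables per iteration, and handles nonconvex costs. The mixing matrix, step size, and problem data below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 4, 3                                   # 4 agents, 3 shared variables
A = rng.normal(size=(m, 5, d))                # local cost: f_i(x) = 0.5||A_i x - b_i||^2
b = rng.normal(size=(m, 5))

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a 4-node ring (assumed topology).
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])

alpha = 0.005
x = np.zeros((m, d))
y = np.array([grad(i, x[i]) for i in range(m)])   # y_i tracks the average gradient
for _ in range(5000):
    x_new = W @ x - alpha * y                     # consensus step + tracked descent
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(m)])
    x = x_new

# Every agent should agree on the minimizer of the *sum* of local costs.
x_star = np.linalg.lstsq(A.reshape(-1, d), b.reshape(-1), rcond=None)[0]
print(np.allclose(x[0], x_star, atol=1e-3))       # True
```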

    Consensus of self-driven agents with avoidance of collisions

    In recent years, much effort has been devoted to collision avoidance among collectively moving agents. In this paper, we propose a modified version of the Vicsek model with adaptive speed, which can guarantee the absence of collisions. However, this strategy leads to an aggregated state with slowly moving agents. We therefore further introduce a certain repulsion, which results in both faster consensus and a longer safe distance among agents, and thus provides a powerful mechanism for collective motions in biological and technological multi-agent systems. Comment: 7 pages, 8 figures.
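
    A toy numpy sketch combining the two ingredients the abstract names: a Vicsek-style heading update whose speed is scaled by the local order parameter (agents slow down when their neighborhood disagrees) plus a short-range repulsion. All parameter values and the exact update rules are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L, R, dt = 30, 10.0, 1.0, 1.0        # agents, box size, neighbor radius, step
v_max, r_safe = 0.05, 0.2               # top speed, repulsion radius (assumed)
eta, k_rep = 0.1, 2.0                   # noise level, repulsion gain (assumed)

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

for _ in range(200):
    dx = pos[:, None, :] - pos[None, :, :]
    dx -= L * np.round(dx / L)                           # periodic boundaries
    dist = np.linalg.norm(dx, axis=-1)
    nbr = dist < R                                       # neighbor mask (incl. self)
    sx, sy = (np.cos(theta) * nbr).sum(1), (np.sin(theta) * nbr).sum(1)
    order = np.hypot(sx, sy) / nbr.sum(1)                # local order in [0, 1]
    close = (0 < dist) & (dist < r_safe)                 # too-close pairs
    rep = (dx / np.maximum(dist, 1e-9)[..., None] * close[..., None]).sum(1)
    head = np.stack([sx, sy], 1)
    head /= np.maximum(np.linalg.norm(head, axis=1, keepdims=True), 1e-9)
    new_dir = head + k_rep * rep                         # alignment + repulsion
    theta = (np.arctan2(new_dir[:, 1], new_dir[:, 0])
             + eta * rng.uniform(-np.pi, np.pi, size=N)) # Vicsek noise
    vel = (v_max * order)[:, None] * np.stack([np.cos(theta), np.sin(theta)], 1)
    pos = (pos + vel * dt) % L                           # adaptive-speed move

print("global order:", np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
```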

    Multi-Path Alpha-Fair Resource Allocation at Scale in Distributed Software Defined Networks

    The performance of computer networks relies on how bandwidth is shared among different flows. Fair resource allocation is a challenging problem, particularly when the flows evolve over time. To address this issue, bandwidth sharing techniques that quickly react to traffic fluctuations are of interest, especially in large-scale settings with hundreds of nodes and thousands of flows. In this context, we propose a distributed algorithm based on the Alternating Direction Method of Multipliers (ADMM) that tackles the multi-path fair resource allocation problem in a distributed SDN control architecture. Our ADMM-based algorithm continuously generates a sequence of resource allocation solutions converging to the fair allocation while always remaining feasible, a property that standard primal-dual decomposition methods often lack. Thanks to the distribution of all compute-intensive operations, we demonstrate that we can handle large instances at scale.
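
    To make the splitting concrete, below is a heavily simplified ADMM sketch for proportional fairness (the alpha = 1 case) among several flows sharing a single link, rather than the paper's multi-path, multi-link SDN setting. The x-update is the closed-form prox of -log, and the z-update is a Euclidean projection onto the capacity set, so the z iterate is feasible at every iteration, mirroring the feasibility property claimed above. Problem sizes and the penalty parameter are assumptions.

```python
import numpy as np

def project_capacity(v, c):
    # Euclidean projection onto {z >= 0, sum(z) <= c} (feasible rates).
    w = np.maximum(v, 0.0)
    if w.sum() <= c:
        return w
    s = np.sort(v)[::-1]                       # standard simplex projection
    css = np.cumsum(s) - c
    idx = np.arange(1, len(v) + 1)
    k = np.nonzero(s - css / idx > 0)[0][-1]
    return np.maximum(v - css[k] / (k + 1.0), 0.0)

def admm_proportional_fair(n=5, c=1.0, rho=1.0, iters=300):
    # minimize -sum(log x_i)  s.t.  x = z,  z in the capacity set.
    x, z, u = np.full(n, c / n), np.full(n, c / n), np.zeros(n)
    for _ in range(iters):
        v = z - u
        x = (v + np.sqrt(v**2 + 4.0 / rho)) / 2.0   # closed-form prox of -log
        z = project_capacity(x + u, c)              # always-feasible iterate
        u += x - z                                  # scaled dual update
    return z

print(admm_proportional_fair())   # five identical flows -> equal shares of 0.2
```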