    Atomic Appends: Selling Cars and Coordinating Armies with Multiple Distributed Ledgers

    The various applications using Distributed Ledger Technologies (DLT) or blockchains have led to the introduction of a new "marketplace" where multiple types of digital assets may be exchanged. As each blockchain is designed to support specific types of assets and transactions, and no single blockchain is likely to prevail, the need to perform interblockchain transactions is already pressing. In this work we examine the fundamental problem of interoperable and interconnected blockchains. In particular, we begin by introducing Multi-Distributed Ledger Objects (MDLO), the result of aggregating multiple Distributed Ledger Objects (DLOs; a DLO is a formalization of the blockchain), which supports concurrent append and get operations of records (e.g., transactions) from multiple clients. Next we define the AtomicAppends problem, which emerges when the exchange of digital assets between multiple clients may involve appending records to more than one DLO. Specifically, AtomicAppends requires that either all records are appended to the involved DLOs or none is. We examine the solvability of this problem assuming rational and risk-averse clients that may fail by crashing, under different client utility and append models, timing models, and client failure scenarios. We show that in some cases the existence of an intermediary is necessary for solving the problem. We propose implementing such an intermediary over a specialized blockchain, which we term Smart DLO (SDLO), and we show how it can be used to solve the AtomicAppends problem even in an asynchronous, client-competitive environment where all clients may crash.
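
    As a rough illustration of the intermediary idea (a minimal sketch, not the paper's actual construction), the Python snippet below models two DLOs and an SDLO-like coordinator that buffers each client's record and performs both appends itself once both clients have submitted. All names here (DLO, SDLO, submit) are illustrative assumptions.

        # Minimal sketch of an SDLO-style intermediary for the two-client
        # AtomicAppends problem; names and interfaces are illustrative,
        # not the paper's actual construction.

        class DLO:
            """A distributed ledger object: an append-only record sequence."""
            def __init__(self):
                self.records = []

            def append(self, record):
                self.records.append(record)

            def get(self):
                return list(self.records)

        class SDLO:
            """Intermediary that performs both appends atomically, or neither.

            Each client delegates its append to the SDLO. Nothing is written
            until both records have been submitted, so a client that crashes
            before submitting cannot leave the exchange half-completed.
            """
            def __init__(self, dlo_a, dlo_b):
                self.dlo_a, self.dlo_b = dlo_a, dlo_b
                self.pending = {}

            def submit(self, client_id, record):
                self.pending[client_id] = record
                if len(self.pending) == 2:       # both clients committed
                    (_, r1), (_, r2) = sorted(self.pending.items())
                    self.dlo_a.append(r1)        # all-or-none: both appends
                    self.dlo_b.append(r2)        # are performed by the SDLO
                    return "settled"
                return "pending"                 # safe: nothing appended yet

    For example, if a buyer submits a payment record and a seller a title-transfer record, the SDLO settles both at once; if either crashes before submitting, neither ledger is modified.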

    Distributed Partitioned Big-Data Optimization via Asynchronous Dual Decomposition

    In this paper we consider a novel partitioned framework for distributed optimization in peer-to-peer networks. In several important applications, the agents of a network have to solve an optimization problem with two key features: (i) the dimension of the decision variable depends on the network size, and (ii) the cost function and constraints have a sparsity structure related to the communication graph. For this class of problems, a straightforward application of existing consensus methods would exhibit two inefficiencies: poor scalability and redundancy of shared information. We propose an asynchronous distributed algorithm, based on dual decomposition and coordinate methods, to solve partitioned optimization problems. We show that, by exploiting the problem structure, the solution can be partitioned among the nodes, so that each node stores only a local copy of a portion of the decision variable (rather than a copy of the entire decision vector) and solves a small-scale local problem.
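
    To make the dual-decomposition idea concrete, here is a minimal Python sketch on a toy quadratic problem with simple equality couplings along a path graph (a far simpler instance than the paper's general partitioned set-up, and an assumption of this illustration): each node keeps only its own block of the decision variable, and at each iteration one randomly chosen edge updates its multiplier, mimicking an asynchronous coordinate step on the dual.

        import numpy as np

        # Toy partitioned problem: minimize sum_i 0.5*(x_i - a_i)^2 subject
        # to the sparse coupling constraints x_i = x_j on the edges of a
        # path graph. One dual multiplier per edge; no node (and no
        # multiplier update) ever needs the full decision vector.
        rng = np.random.default_rng(0)
        n = 6
        a = rng.normal(size=n)
        edges = [(i, i + 1) for i in range(n - 1)]
        lam = {e: 0.0 for e in edges}     # dual variables, one per coupling
        alpha = 0.5                       # dual step size

        for _ in range(2000):
            # Primal step: each node minimizes its local Lagrangian using
            # only the multipliers of its incident edges.
            x = np.empty(n)
            for i in range(n):
                s = sum(lam[e] for e in edges if e[0] == i) \
                    - sum(lam[e] for e in edges if e[1] == i)
                x[i] = a[i] - s           # argmin of 0.5*(x - a_i)^2 + s*x

            # Asynchronous coordinate step on the dual: wake one random
            # edge and ascend along its constraint violation only.
            e = edges[rng.integers(len(edges))]
            lam[e] += alpha * (x[e[0]] - x[e[1]])

        print(x)                          # components approach mean(a)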

    A randomized primal distributed algorithm for partitioned and big-data non-convex optimization

    In this paper we consider a distributed optimization scenario in which the aggregate objective function to be minimized is partitioned, big-data, and possibly non-convex. Specifically, we focus on a set-up in which the dimension of the decision variable depends on the network size as well as on the number of local functions, but each local function handled by a node depends only on a (small) portion of the entire optimization variable. This problem set-up has been shown to appear in many interesting network application scenarios. As the main contribution of the paper, we develop a simple primal distributed algorithm to solve the optimization problem, based on a randomized descent approach, which works under asynchronous gossip communication. We prove that the proposed asynchronous algorithm is a proper, ad-hoc version of a coordinate descent method and thus converges to a stationary point. To show the effectiveness of the proposed algorithm, we also present numerical simulations on a non-convex quadratic program, which confirm the theoretical results.
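
    The following Python sketch illustrates the flavor of such a randomized primal block-coordinate method on a toy partitioned cost (convex here for a self-contained example, unlike the paper's non-convex setting, so the specific cost is an assumption of this sketch): at each tick a randomly awakened node updates only its own block, using gradient information from its neighbors' blocks.

        import numpy as np

        # Toy partitioned cost:
        # F(x) = sum_i [0.5*(x_i - a_i)^2 + 0.25*sum_{j in N(i)} (x_i - x_j)^2],
        # so each local function touches only node i's block and the blocks
        # of its neighbors.
        rng = np.random.default_rng(1)
        n = 8
        a = rng.normal(size=n)
        neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n]
                     for i in range(n)}
        x = np.zeros(n)
        step = 0.3

        for _ in range(5000):
            i = rng.integers(n)           # gossip-style random activation
            # Partial gradient w.r.t. node i's block, computed from the
            # node's own data and its neighbors' blocks only.
            g = (x[i] - a[i]) + sum(x[i] - x[j] for j in neighbors[i])
            x[i] -= step * g              # randomized block-coordinate step

        print(x)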