
    D-ADMM: A Communication-Efficient Distributed Algorithm For Separable Optimization

    We propose a distributed algorithm, named Distributed Alternating Direction Method of Multipliers (D-ADMM), for solving separable optimization problems in networks of interconnected nodes or agents. In a separable optimization problem there is a private cost function and a private constraint set at each node. The goal is to minimize the sum of all the cost functions while constraining the solution to lie in the intersection of all the constraint sets. D-ADMM is proven to converge when the network is bipartite or when all the functions are strongly convex, although in practice convergence is observed even when these conditions are not met. We use D-ADMM to solve the following problems from signal processing and control: average consensus, compressed sensing, and support vector machines. Our simulations show that D-ADMM requires fewer communications than state-of-the-art algorithms to achieve a given accuracy level. Algorithms with low communication requirements are important, for example, in sensor networks, where sensors are typically battery-operated and communication is the most energy-consuming operation. Comment: To appear in IEEE Transactions on Signal Processing
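    For reference, the problem class described above can be written compactly; a minimal sketch in standard notation, where f_p and X_p denote node p's private cost and private constraint set (symbols chosen here for illustration, not necessarily the paper's notation):

        \min_{x \in \mathbb{R}^n} \; \sum_{p=1}^{P} f_p(x)
        \quad \text{subject to} \quad x \in \bigcap_{p=1}^{P} X_p

    Each node p knows only its own f_p and X_p, so the nodes must cooperate over the network links to reach a common minimizer.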

    Distributed Optimization With Local Domains: Applications in MPC and Network Flows

    In this paper we consider a network with P nodes, where each node has exclusive access to a local cost function. Our contribution is a communication-efficient distributed algorithm that finds a vector x⋆ minimizing the sum of all the functions. We make the additional assumption that the functions have intersecting local domains, i.e., each function depends only on some components of the variable. Consequently, each node is interested in knowing only some components of x⋆, not the entire vector. This allows for improved communication efficiency. We apply our algorithm to model predictive control (MPC) and to network flow problems and show, through experiments on large networks, that our proposed algorithm requires fewer communications to converge than prior algorithms. Comment: Submitted to IEEE Trans. Aut. Control
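    As a sketch of the structure being exploited, let S_p denote the index set of components of the variable that node p's function actually depends on (the symbol S_p is assumed here for illustration):

        \min_{x \in \mathbb{R}^n} \; \sum_{p=1}^{P} f_p(x_{S_p}),
        \qquad S_p \subseteq \{1, \dots, n\}

    Since node p only needs the components of x⋆ indexed by S_p, the algorithm can avoid circulating the full vector, which is where the communication savings come from.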

    Distributed Basis Pursuit

    We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least L1-norm solution of the underdetermined linear system Ax = b and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a sensor network, and is designed to minimize the communication between nodes. The algorithm only requires the network to be connected, has no notion of a central processing node, and no node has access to the entire matrix A at any time. We consider two scenarios in which either the columns or the rows of A are distributed among the compute nodes. Our algorithm, named D-ADMM, is a decentralized implementation of the alternating direction method of multipliers. We show through numerical simulation that our algorithm requires considerably fewer communications between the nodes than state-of-the-art algorithms. Comment: Preprint of the journal version of the paper; IEEE Transactions on Signal Processing, Vol. 60, Issue 4, April 2012
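    For context, Basis Pursuit itself is the convex program min ||x||_1 subject to Ax = b. Below is a minimal centralized reference sketch (not the distributed D-ADMM algorithm of the paper) that recasts BP as a linear program with auxiliary variables t >= |x| and solves it with SciPy; the reformulation and variable names are assumptions made here for illustration.

        import numpy as np
        from scipy.optimize import linprog

        def basis_pursuit_lp(A, b):
            """Centralized reference solver for min ||x||_1 s.t. Ax = b.

            Standard LP reformulation with t >= |x|: minimize sum(t)
            subject to x - t <= 0, -x - t <= 0, A x = b, over z = [x; t].
            """
            m, n = A.shape
            c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum(t)
            I = np.eye(n)
            A_ub = np.block([[I, -I], [-I, -I]])            # encodes |x| <= t
            b_ub = np.zeros(2 * n)
            A_eq = np.hstack([A, np.zeros((m, n))])         # A x = b
            bounds = [(None, None)] * n + [(0, None)] * n   # x free, t >= 0
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                          bounds=bounds, method="highs")
            return res.x[:n]

        # Small example: recover a sparse vector from underdetermined measurements.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((20, 50))
        x_true = np.zeros(50)
        x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]
        x_hat = basis_pursuit_lp(A, A @ x_true)
        print("recovery error:", np.linalg.norm(x_hat - x_true))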

    A unified algorithmic approach to distributed optimization

    We address general optimization problems formulated on networks. Each node in the network has a function, and the goal is to find a vector x ∈ R^n that minimizes the sum of all the functions. We assume that each function depends on a set of components of x, not necessarily on all of them. This creates additional structure in the problem, which can be captured by the classification scheme we develop. This scheme not only enables us to design an algorithm that solves very general distributed optimization problems, but also allows us to categorize prior algorithms and applications. Our general-purpose algorithm shows performance superior to prior algorithms, including algorithms that are application-specific. Index Terms — Distributed optimization, sensor networks
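    As a purely hypothetical illustration of the kind of dependence structure such a classification scheme would operate on (this is not the paper's actual scheme), one can record which components of x each node's function uses and ask whether the variable is global, i.e., used in full by every node, or only partially used:

        def variable_type(dependencies, n_components):
            """dependencies: dict mapping node id -> set of component indices
            of x that the node's function depends on."""
            full = set(range(n_components))
            if all(deps == full for deps in dependencies.values()):
                return "global"   # every function depends on the whole variable
            return "partial"      # some function uses only part of the variable

        # 3-node example over x in R^4: no node sees the whole variable.
        deps = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}}
        print(variable_type(deps, 4))   # -> "partial"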

    Processor Allocation for Optimistic Parallelization of Irregular Programs

    Optimistic parallelization is a promising approach for the parallelization of irregular algorithms: potentially interfering tasks are launched dynamically, and the runtime system detects conflicts between concurrent activities, aborting and rolling back conflicting tasks. However, parallelism in irregular algorithms is very complex. In a regular algorithm like dense matrix multiplication, the amount of parallelism can usually be expressed as a function of the problem size, so it is reasonably straightforward to determine how many processors should be allocated to execute a regular algorithm of a certain size (this is called the processor allocation problem). In contrast, parallelism in irregular algorithms can be a function of input parameters, and the amount of parallelism can vary dramatically during the execution of the irregular algorithm. Therefore, the processor allocation problem for irregular algorithms is very difficult. In this paper, we describe the first systematic strategy for addressing this problem. Our approach is based on a construct called the conflict graph, which (i) provides insight into the amount of parallelism that can be extracted from an irregular algorithm, and (ii) can be used to address the processor allocation problem for irregular algorithms. We show that this problem is related to a generalization of the unfriendly seating problem and, by extending Turán's theorem, we obtain a worst-case class of problems for optimistic parallelization, which we use to derive a lower bound on the exploitable parallelism. Finally, using some theoretically derived properties and some experimental facts, we design a quick and stable control strategy for solving the processor allocation problem heuristically. Comment: 12 pages, 3 figures, extended version of SPAA 2011 brief announcement
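    As a small generic illustration of the conflict-graph idea (tasks are vertices, an edge joins two tasks that may touch a shared datum, and a set of tasks that can commit together in one optimistic round corresponds to an independent set), and not the paper's allocation strategy, a sketch follows; the task/data encoding is an assumption made here:

        from itertools import combinations

        def conflict_graph(task_data):
            """task_data: dict mapping task id -> set of data items it may touch.
            Returns adjacency sets; an edge means the two tasks may conflict."""
            adj = {t: set() for t in task_data}
            for a, b in combinations(task_data, 2):
                if task_data[a] & task_data[b]:   # shared datum => potential conflict
                    adj[a].add(b)
                    adj[b].add(a)
            return adj

        def greedy_parallel_estimate(adj):
            """Greedy maximal independent set: a crude lower bound on how many
            tasks could run and commit together without rollback."""
            chosen, blocked = set(), set()
            for t in sorted(adj, key=lambda v: len(adj[v])):   # low-degree first
                if t not in blocked:
                    chosen.add(t)
                    blocked |= adj[t]
            return len(chosen)

        tasks = {0: {"a", "b"}, 1: {"b"}, 2: {"c"}, 3: {"a", "c"}}
        print(greedy_parallel_estimate(conflict_graph(tasks)))   # -> 2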

    Characterising the temporal evolution of fixation in human post mortem brain via linear relaxometry modelling – a marker of cross-linking?

    MRI-based biophysical models are typically validated by comparison to ex-vivo histology of fixed tissue. The fixation process itself and the accompanying autolysis processes strongly modify tissue composition and lead to MR signal changes, making the validation of biophysical models for in vivo MRI particularly challenging. To better understand the temporal evolution of the fixation process within the whole brain and its influence on MRI parameters, we monitor the fixation of a whole human post-mortem brain using the linear relaxometry model across 15 time-points, comprising one unfixed, in-situ MRI scan and 14 ex-vivo MRI scans at different stages of the fixation process (days 1-93).

    Distributed ADMM for model predictive control and congestion control
