12 research outputs found

    A Faithful Distributed Implementation of Dual Decomposition and Average Consensus Algorithms

    We consider large-scale cost allocation problems and consensus seeking problems for multiple agents, in which agents are asked to collaborate in a distributed algorithm to find a solution. If agents are strategic and seek to minimize their own individual cost rather than the global social cost, they have an incentive not to follow the intended algorithm unless the tax/subsidy mechanism is carefully designed. Inspired by the classical Vickrey-Clarke-Groves mechanism and more recent algorithmic mechanism design theory, we propose a tax mechanism that incentivises agents to faithfully implement the intended algorithm. In particular, a new notion of asymptotic incentive compatibility is introduced to characterize a desirable property of this class of mechanisms. The proposed class of tax mechanisms provides a sequence of mechanisms that gives agents a diminishing incentive to deviate from the suggested algorithm.
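
    As a minimal sketch of the kind of intended algorithm the abstract refers to, the code below runs dual decomposition for a cost allocation problem min sum_i f_i(x_i) subject to sum_i x_i = C: a price is updated from the constraint violation and each agent responds with its own cost-minimizing allocation. The quadratic local costs, the step size, and the centralized price update are illustrative assumptions, and the tax/subsidy mechanism that is the paper's actual contribution is not modeled here.

```python
# Dual decomposition sketch for min sum_i f_i(x_i) s.t. sum_i x_i = C.
# Illustrative quadratic costs f_i(x) = 0.5 * a_i * (x - b_i)^2.
import numpy as np

a = np.array([1.0, 2.0, 0.5])
b = np.array([4.0, 1.0, 3.0])
C = 6.0                          # total resource to be allocated

lam, gamma = 0.0, 0.2            # dual price and dual step size
for _ in range(500):
    # Each agent i privately minimizes f_i(x_i) + lam * x_i.
    x = b - lam / a
    # Price update: raise the price if the resource is over-used.
    lam = lam + gamma * (x.sum() - C)

print(x, x.sum(), lam)           # sum(x) converges to C at the optimal price
```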

    Distributed Weight Selection in Consensus Protocols by Schatten Norm Minimization

    In average consensus protocols, nodes in a network perform an iterative weighted average of their estimates and those of their neighbors. The protocol converges to the average of the initial estimates of all nodes in the network. The speed of convergence of average consensus protocols depends on the weights selected on the links (to neighbors). In this paper we address how to select the weights in a given network so that these protocols converge quickly. We approximate the problem of optimal weight selection by the minimization of the Schatten p-norm of a matrix with constraints related to the connectivity of the underlying network. We then provide a fully distributed gradient method to solve the Schatten norm optimization problem. By tuning the parameter p in the proposed minimization, we can trade off the quality of the solution (i.e. the speed of convergence) against communication/computation requirements (in terms of the number of messages exchanged and the volume of data processed). Simulation results show that our approach already provides very good performance for values of p that require only limited information exchange. The weight optimization procedure can also run in parallel with the consensus protocol, forming a joint consensus-optimization procedure.
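
    As a small illustration of the iteration described above, the sketch below runs x(k+1) = W x(k) on a 4-node ring until all estimates reach the average of the initial values. The Metropolis weight rule used here is a standard simple choice for illustration only; it is not the Schatten p-norm optimized weight selection proposed in the paper.

```python
# Average consensus: each node repeatedly replaces its estimate with a
# weighted average of its own value and its neighbors' values.
import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic weight matrix built from a 0/1 adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

adj = np.array([[0, 1, 0, 1],        # 4-node ring
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
W = metropolis_weights(adj)

x = np.array([1.0, 5.0, 3.0, 7.0])   # initial estimates, mean = 4.0
for _ in range(100):
    x = W @ x                        # x(k+1) = W x(k)
print(x)                             # every entry is close to 4.0
```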

    Fast-Convergent Dynamics for Distributed Resource Allocation Over Sparse Time-Varying Networks

    In this paper, distributed dynamics are deployed to solve resource allocation over time-varying multi-agent networks. The state of each agent represents the amount of resources used/produced at that agent, while the total amount of resources is fixed. The idea is to optimally allocate the resources among the group of agents by minimizing the total cost subject to a fixed total amount of resources. The information available to each agent is restricted to its own state and cost function and those of its immediate neighbors. This is motivated by distributed applications such as mobile edge computing, economic dispatch over smart grids, and multi-agent coverage control. The non-Lipschitz dynamics proposed in this work show fast convergence compared to the linear and some nonlinear solutions in the literature. Further, the connectivity requirements on the multi-agent network are more relaxed in this paper. Specifically, the proposed dynamics reach the optimal solution even over time-varying, disconnected, undirected networks, as long as the union of these networks over some bounded non-overlapping time intervals contains a spanning tree. The proposed convergence analysis can be applied to similar first-order nonlinear resource allocation dynamics. We provide simulations to verify our results.
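
    The sketch below shows the basic gradient-consensus form of such resource allocation dynamics: each agent adjusts its allocation using only the difference between its own marginal cost and those of its neighbors, which keeps the total amount of resources constant. The quadratic costs and the plain (Lipschitz) difference term are illustrative assumptions; the paper's non-Lipschitz dynamics, which it reports converge faster, would replace that term with a non-smooth function of the gradient differences.

```python
# Gradient-consensus resource allocation: move resources between neighbors
# until all marginal costs are equal, while the total allocation stays fixed.
import numpy as np

# Illustrative quadratic local costs f_i(x) = 0.5 * a_i * (x - b_i)^2.
a = np.array([1.0, 2.0, 0.5, 1.5])
b = np.array([2.0, 1.0, 4.0, 3.0])
grad = lambda x: a * (x - b)                    # local marginal costs

neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # undirected ring

x = np.array([5.0, 5.0, 5.0, 5.0])              # initial allocation, total = 20
dt = 0.01
for _ in range(20000):                          # Euler integration of the dynamics
    g = grad(x)
    dx = np.zeros_like(x)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            dx[i] += g[j] - g[i]                # antisymmetric, so sum(x) is preserved
    x = x + dt * dx

print(x, x.sum())                               # marginal costs nearly equal, sum = 20
```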

    Newton-Raphson Consensus for Distributed Convex Optimization

    We address the problem of distributed unconstrained convex optimization under separability assumptions, i.e., the framework where each agent of a network is endowed with a local, private, multidimensional convex cost, is subject to communication constraints, and wants to collaborate to compute the minimizer of the sum of the local costs. We propose a design methodology that combines average consensus algorithms and separation-of-time-scales ideas. This strategy is proved, under suitable hypotheses, to be globally convergent to the true minimizer. Intuitively, the procedure lets the agents distributedly compute and sequentially update an approximated Newton-Raphson direction by means of suitable average consensus ratios. We show with numerical simulations that the speed of convergence of this strategy is comparable with alternative optimization strategies such as the Alternating Direction Method of Multipliers. Finally, we propose some alternative strategies which trade off communication and computational requirements against convergence speed.
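
    A minimal scalar sketch of the consensus-ratio idea is given below: each agent forms g_i = f_i''(x) x - f_i'(x) and h_i = f_i''(x); averaging these over the network and taking the ratio approximates the global Newton-Raphson update for the sum of the costs. The illustrative local costs, the damped update, and the two-time-scale interleaving (many consensus steps per Newton step) are simplifying assumptions rather than the exact algorithm of the paper.

```python
# Newton-Raphson consensus sketch (scalar decision variable).
import numpy as np

# Illustrative local convex costs f_i(x) = 0.5*a_i*(x - b_i)^2 + exp(c_i*x).
a = np.array([1.0, 2.0, 1.5, 0.8])
b = np.array([-1.0, 2.0, 0.5, 1.0])
c = np.array([0.1, -0.2, 0.05, 0.15])
f1 = lambda x: a * (x - b) + c * np.exp(c * x)      # first derivatives f_i'(x)
f2 = lambda x: a + c**2 * np.exp(c * x)             # second derivatives f_i''(x)

W = np.array([[0.5, 0.25, 0.0, 0.25],               # doubly stochastic consensus
              [0.25, 0.5, 0.25, 0.0],               # weights on a 4-node ring
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

x = np.zeros(4)                                     # one estimate per agent
for _ in range(50):                                 # Newton-Raphson iterations
    g, h = f2(x) * x - f1(x), f2(x)
    for _ in range(200):                            # average consensus on g and h
        g, h = W @ g, W @ h
    x = 0.5 * x + 0.5 * (g / h)                     # damped update toward the ratio

print(x)   # all agents agree on (approximately) the minimizer of sum_i f_i
```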

    A novel decentralized economic operation in islanded AC microgrids

    Droop schemes are usually applied to the control of distributed generators (DGs) in microgrids (MGs) to realize proportional power sharing. This objective, however, may not suit MGs well for economic reasons. To address this issue, this paper proposes an alternative droop scheme that reduces the total active generation cost (TAGC). Optimal economic operation, DGs’ capacity limitations and system stability are fully considered based on DGs’ generation costs. The proposed scheme uses the frequency as a carrier to realize decentralized economic operation of MGs without communication links. Moreover, a fitting method is applied to balance DGs’ synchronous operation and economic performance. The effectiveness and performance of the proposed scheme are verified through simulations and experiments.
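
    The sketch below illustrates the underlying idea of cost-based droop: if each DG's droop curve maps its incremental generation cost, rather than its output power, to frequency, then at the common steady-state frequency all units operate at equal incremental cost, which is the economic dispatch condition. The quadratic cost model and the specific droop law are illustrative assumptions and omit the paper's treatment of capacity limits, stability, and the fitting method.

```python
# Cost-based droop sketch: frequency as a carrier for the incremental cost.
import numpy as np

# Illustrative quadratic generation costs C_i(P) = alpha_i*P^2 + beta_i*P.
alpha = np.array([0.010, 0.015, 0.020])   # $/kW^2
beta = np.array([0.20, 0.15, 0.25])       # $/kW
f_nom, k = 50.0, 0.05                     # nominal frequency (Hz) and droop gain

# Assumed droop law: f = f_nom - k * dC_i/dP_i = f_nom - k*(2*alpha_i*P_i + beta_i).
def steady_state(load):
    """Common frequency and dispatch satisfying sum_i P_i(f) = load."""
    inv2a = 1.0 / (2.0 * alpha)
    lam = (load + np.sum(beta * inv2a)) / np.sum(inv2a)   # common incremental cost
    f = f_nom - k * lam
    P = (lam - beta) * inv2a
    return f, P

f, P = steady_state(load=300.0)           # 300 kW of total demand
print(f, P, 2 * alpha * P + beta)         # incremental costs are equal across DGs
```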

    Understanding A Class of Decentralized and Federated Optimization Algorithms: A Multi-Rate Feedback Control Perspective

    Distributed algorithms have been playing an increasingly important role in many applications such as machine learning, signal processing, and control. Significant research efforts have been devoted to developing and analyzing new algorithms for various applications. In this work, we provide a fresh perspective for understanding, analyzing, and designing distributed optimization algorithms. Through the lens of multi-rate feedback control, we show that a wide class of distributed algorithms, including popular decentralized/federated schemes such as decentralized gradient descent, gradient tracking, and federated averaging, can be viewed as discretizing a certain continuous-time feedback control system, possibly with multiple sampling rates. This key observation not only allows us to develop a generic framework to analyze the convergence of the entire algorithm class, but, more importantly, also leads to an interesting way of designing new distributed algorithms. We develop the theory behind our framework and provide examples to highlight how the framework can be used in practice.
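
    As one concrete member of the algorithm class discussed above, the sketch below runs decentralized gradient descent (DGD): each agent mixes its iterate with its neighbors' and takes a local gradient step. The update x(k+1) = W x(k) - alpha*grad F(x(k)) can be read as an Euler discretization of the continuous-time system x_dot = -(I - W) x - alpha*grad F(x), in the spirit of the feedback-control viewpoint above; the paper's multi-rate model is more general than this single-rate example, and the quadratic costs are illustrative.

```python
# Decentralized gradient descent (DGD) sketch.
import numpy as np

# Illustrative local costs f_i(x) = 0.5 * (x - b_i)^2; the global minimizer
# of sum_i f_i is mean(b).
b = np.array([1.0, 3.0, -2.0, 6.0])
grad = lambda x: x - b                      # stacked local gradients

W = np.array([[0.5, 0.25, 0.0, 0.25],       # doubly stochastic mixing matrix
              [0.25, 0.5, 0.25, 0.0],       # for a 4-node ring
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

alpha = 0.05                                # fixed step size: DGD converges to a
x = np.zeros(4)                             # neighborhood of the optimum of size O(alpha)
for _ in range(2000):
    x = W @ x - alpha * grad(x)             # mix with neighbors, then local gradient step

print(x, b.mean())                          # all agents end up close to mean(b) = 2.0
```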