
    Source Coding Optimization for Distributed Average Consensus

    Consensus is a common method for computing a function of data distributed among the nodes of a network. Of particular interest is distributed average consensus, whereby the nodes iteratively compute the sample average of the data stored at all nodes of the network using only near-neighbor communications. In real-world scenarios, these communications must undergo quantization, which introduces distortion into the internode messages. In this thesis, a model for the evolution of the network state statistics at each iteration is developed under the assumptions of Gaussian data and additive quantization error. It is shown that minimizing the communication load, measured as the aggregate source coding rate, can be posed as a generalized geometric program, which admits an equivalent convex formulation whose global minimum can be computed efficiently. Optimization procedures are developed for rate-distortion-optimal vector quantization, uniform entropy-coded scalar quantization, and fixed-rate uniform quantization. Numerical results demonstrate the performance of these approaches, and for small numbers of iterations the fixed-rate optimizations are verified by exhaustive search. Comparison with the prior art suggests competitive performance under certain circumstances but strongly motivates the incorporation of more sophisticated coding strategies, such as differential, predictive, or Wyner-Ziv coding.

    Comment: Master's Thesis, Electrical Engineering, North Carolina State University
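
    The state-evolution model above rests on nodes exchanging quantized states whose rounding errors act as additive noise. The sketch below illustrates that setup on a toy network; the ring topology, the uniform quantizer, and the shrinking step-size schedule are illustrative assumptions, not details taken from the thesis.

    # Minimal sketch of distributed average consensus with quantized messages.
    # Ring topology, weights, and quantizer steps are illustrative only.
    import numpy as np

    def uniform_quantize(x, step):
        # Uniform quantizer; its rounding error plays the role of the
        # additive quantization noise in the state-evolution model.
        return step * np.round(x / step)

    def quantized_consensus(x0, W, steps):
        # x0   : initial data vector (one value per node)
        # W    : doubly stochastic consensus weight matrix
        # steps: per-iteration quantizer step sizes (a crude rate proxy)
        x = x0.copy()
        for step in steps:
            q = uniform_quantize(x, step)   # messages actually transmitted
            x = x + W @ q - q               # x_i += sum_j w_ij q_j - q_i
        return x

    rng = np.random.default_rng(0)
    n = 8
    x0 = rng.normal(size=n)                 # Gaussian node data, as in the model
    W = np.zeros((n, n))                    # ring with equal weights (illustrative)
    for i in range(n):
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = W[i, i] = 1 / 3
    steps = [0.5 * 0.7 ** t for t in range(20)]   # shrinking steps mimic a rate schedule
    x = quantized_consensus(x0, W, steps)
    print(np.abs(x - x0.mean()).max())      # deviation from the true average

    Because the update adds W q - q with a doubly stochastic W, the network average is preserved exactly even though only quantized values are exchanged.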

    Splitting Methods for Distributed Optimization and Control

    This thesis contributes towards the design and analysis of fast and distributed optimization algorithms based on splitting techniques, such as proximal gradient methods or alternating minimization algorithms, with application to solving model predictive control (MPC) problems. The first part of the thesis focuses on developing an efficient algorithm, based on the fast alternating minimization algorithm, to solve MPC problems with polytopic and second-order cone constraints. Due to the requirement of bounding the online computation time in the context of real-time MPC, complexity bounds on the number of iterations needed to achieve a certain accuracy are derived, and the computation of these bounds is discussed. To further improve the convergence speed of the proposed algorithm, an off-line preconditioning method is presented for MPC problems with polyhedral and ellipsoidal constraints. The inexact alternating minimization algorithm, as well as its accelerated variant, is proposed in the second part of the thesis. Unlike standard algorithms, inexact methods allow for errors in the update at each iteration. Complexity upper-bounds on the number of iterations in the presence of errors are derived, and from these bounds sufficient conditions on the errors that guarantee convergence of the algorithms are obtained. The proposed algorithms are applied to distributed optimization problems in the presence of local computation and communication errors, with an emphasis on distributed MPC applications, and their convergence properties for this special case are analysed. Motivated by the complexity upper-bounds of the inexact proximal gradient method, two distributed optimization algorithms with an iteratively refining quantization design are proposed for solving distributed optimization problems with a limited communication data-rate. We show that if the parameters of the quantizers satisfy certain conditions, then the quantization error decreases linearly while only a fixed number of bits is transmitted at each iteration, and convergence of the distributed algorithms is guaranteed. The proposed methods are further extended to distributed optimization problems with time-varying parameters.
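
    As a rough illustration of the iteratively refining quantization idea, the sketch below runs an inexact proximal gradient iteration on a lasso problem in which the gradient entering the update is quantized with a fixed bit budget over a range that contracts geometrically, so the quantization error also decays geometrically. The lasso objective, contraction factor, and bit budget are assumptions made for this example, not values from the thesis.

    # Minimal sketch: inexact proximal gradient with a refining quantizer.
    # A fixed number of bits is used per iteration while the quantizer
    # range shrinks by a constant factor, so the error decays geometrically.
    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1 (the lasso regularizer).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def refine_quantize(v, radius, bits):
        # Uniform quantizer with 2**bits levels on [-radius, radius].
        step = 2 * radius / (2 ** bits)
        return np.clip(step * (np.floor(v / step) + 0.5), -radius, radius)

    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 20))
    b = rng.normal(size=40)
    lam = 0.1
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(20)
    radius, rho, bits = 10.0, 0.9, 6       # refining schedule: radius *= rho

    for _ in range(200):
        grad = A.T @ (A @ x - b)
        grad_q = refine_quantize(grad, radius, bits)   # stand-in for the quantized communication
        x = soft_threshold(x - grad_q / L, lam / L)
        radius *= rho                                  # geometrically shrinking range

    print(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())

    The key point mirrored here is that accuracy improves over iterations without increasing the per-iteration bit budget, because the quantizer range is refined instead.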

    A consensus algorithm for networks with process noise and quantization error

    In this paper we address the problem of quantized consensus in which process noise or external inputs corrupt the state of each agent at each iteration. We propose a quantized consensus algorithm with progressive quantization, where the length of the quantization interval changes at each iteration by a pre-specified value. We derive conditions on the design parameters of the algorithm that guarantee ultimate boundedness of each agent's deviation from the average. Moreover, we explicitly determine bounds on the consensus error under the assumption that the process disturbances are ultimately bounded within known bounds. A numerical example of cooperative path-following for a network of single integrators illustrates the performance of the proposed algorithm. © 2015 IEEE
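
    A minimal sketch of the progressive-quantization idea follows: agents exchange uniformly quantized states, bounded process noise enters every update, and the quantization interval length is rescaled by a fixed factor each iteration. The complete-graph topology, noise bound, and scaling factor are illustrative assumptions, not values from the paper.

    # Minimal sketch: consensus with progressive quantization and process noise.
    import numpy as np

    def midrise_quantize(x, delta):
        # Uniform quantizer with interval length delta.
        return delta * (np.floor(x / delta) + 0.5)

    rng = np.random.default_rng(2)
    n = 6
    W = np.full((n, n), 1.0 / n)            # complete graph, equal weights (illustrative)
    x = rng.normal(size=n)                  # agents' initial states
    delta, gamma = 1.0, 0.8                 # interval length and its per-step scaling
    noise_bound = 0.01                      # ultimate bound on process disturbances

    for t in range(50):
        q = midrise_quantize(x, delta)      # quantized states exchanged by agents
        w = rng.uniform(-noise_bound, noise_bound, size=n)   # bounded process noise
        x = x + (W @ q - q) + w             # consensus update plus disturbance
        delta *= gamma                      # progressive change of the interval length

    print(np.abs(x - x.mean()).max())       # deviation of each agent from the average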