    Linear Convergence of Primal-Dual Gradient Methods and their Performance in Distributed Optimization

    In this work, we revisit a classical incremental implementation of the primal-descent dual-ascent gradient method for solving equality-constrained optimization problems. We provide a short proof that establishes the linear (exponential) convergence of the algorithm for smooth, strongly convex cost functions and study its relation to the non-incremental implementation. We also study the effect of the augmented Lagrangian penalty term on the performance of distributed optimization algorithms for the minimization of aggregate cost functions over multi-agent networks.
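    As a concrete illustration (not the paper's own code), the following minimal Python sketch applies an incremental primal-descent dual-ascent sweep to a hypothetical equality-constrained quadratic program. The problem data, step size `mu`, and penalty `rho` are all assumed for demonstration; the key incremental feature is that the dual ascent uses the freshly updated primal iterate.

```python
import numpy as np

# Hypothetical equality-constrained quadratic program (assumed data):
#   minimize (1/2) x' Q x - c' x   subject to   A x = b,
# with Q symmetric positive definite (smooth, strongly convex cost).
rng = np.random.default_rng(0)
n, m = 10, 3
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)          # strongly convex Hessian
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

rho = 1.0    # augmented-Lagrangian penalty (assumed value)
mu = 1e-3    # step size (assumed value)

def grad_x(x, lam):
    """Gradient in x of the augmented Lagrangian
    L(x, lam) = f(x) + lam'(Ax - b) + (rho/2)||Ax - b||^2."""
    return Q @ x - c + A.T @ lam + rho * A.T @ (A @ x - b)

x = np.zeros(n)
lam = np.zeros(m)
for k in range(20000):
    # Incremental implementation: the dual-ascent step uses the
    # *updated* primal iterate (a Gauss-Seidel-style sweep), rather
    # than the previous one as in the non-incremental variant.
    x = x - mu * grad_x(x, lam)
    lam = lam + mu * (A @ x - b)

print("constraint residual:", np.linalg.norm(A @ x - b))
```

    For smooth, strongly convex costs such as this quadratic, the iterates contract linearly toward the constrained optimum, which is the behavior the abstract's convergence result describes.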

    On Robustness Analysis of a Dynamic Average Consensus Algorithm to Communication Delay

    This paper studies the robustness of a dynamic average consensus algorithm to communication delay over strongly connected and weight-balanced (SCWB) digraphs. Under delay-free communication, the algorithm of interest achieves practical asymptotic tracking of the dynamic average of the agents' time-varying reference signals. For this algorithm, in both its continuous-time and discrete-time implementations, we characterize the admissible communication delay range and study the effect of the delay on the rate of convergence and the tracking error bound. Our study also establishes a relationship between the admissible delay bound and the maximum degree of the SCWB digraph. We further show that for delays within the admissible bound, the algorithms achieve perfect tracking of static signals. Moreover, when the interaction topology is a connected undirected graph, we show that the discrete-time implementation is guaranteed to tolerate at least one step of delay. Simulations demonstrate our results.
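    For illustration only, the sketch below simulates a standard first-order discrete-time dynamic average consensus update under a fixed communication delay on an undirected ring graph. The graph, gain `beta`, delay `tau`, and reference signals are all assumptions for the demo and need not match the paper's exact algorithm; the update diffuses delayed neighbor information while feeding forward the local reference increment, which preserves the network-wide average.

```python
import numpy as np

# Ring graph of N agents (connected, undirected, hence weight-balanced).
N, T, tau = 6, 400, 1          # tau: communication delay in steps (assumed)
beta = 0.2                     # consensus gain (assumed value)

# Laplacian of the undirected ring
L = 2 * np.eye(N)
for i in range(N):
    L[i, (i - 1) % N] = L[i, (i + 1) % N] = -1.0

def r(k):
    """Time-varying reference signals, one scalar per agent (hypothetical)."""
    t = 0.01 * k
    return np.sin(t + np.arange(N)) + np.arange(N)

x = r(0).copy()                # initialize estimates at the local references
hist = [x.copy()]
for k in range(T):
    # Agents only see states that are tau steps old.
    x_delayed = hist[max(0, len(hist) - 1 - tau)]
    # Consensus diffusion on delayed states plus the local reference
    # increment; since 1'L = 0 on an undirected graph, the sum of the
    # estimates tracks the sum of the references exactly.
    x = x - beta * (L @ x_delayed) + (r(k + 1) - r(k))
    hist.append(x.copy())

avg = r(T).mean()              # the dynamic average being tracked
print("max tracking error:", np.abs(x - avg).max())
```

    With a slowly varying reference the estimates settle into a small neighborhood of the moving average (practical tracking), and if `r` is held constant the error decays to zero, matching the static-signal claim in the abstract. Increasing `tau` or `beta` past the admissible range destabilizes the loop, which is the trade-off the delay analysis quantifies.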