
    Joint Spectral Radius and Path-Complete Graph Lyapunov Functions

    We introduce the framework of path-complete graph Lyapunov functions for approximation of the joint spectral radius. The approach is based on the analysis of the underlying switched system via inequalities imposed among multiple Lyapunov functions associated with a labeled directed graph. Inspired by concepts in automata theory and symbolic dynamics, we define a class of graphs called path-complete graphs, and show that any such graph gives rise to a method for proving stability of the switched system. This enables us to derive several asymptotically tight hierarchies of semidefinite programming relaxations that unify and generalize many existing techniques such as common quadratic, common sum of squares, and maximum/minimum-of-quadratics Lyapunov functions. We compare the quality of approximation obtained by certain classes of path-complete graphs, including a family of dual graphs and all path-complete graphs with two nodes on an alphabet of two matrices. We provide approximation guarantees for several families of path-complete graphs, such as the De Bruijn graphs, establishing as a byproduct a constructive converse Lyapunov theorem for maximum/minimum-of-quadratics Lyapunov functions. Comment: To appear in SIAM Journal on Control and Optimization. Version 2 has gone through two major rounds of revision. In particular, a section on the performance of our algorithm on application-motivated problems has been added and a more comprehensive literature review is presented.
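    The simplest path-complete graph, a single node with one self-loop per matrix, recovers the common quadratic Lyapunov function bound mentioned above: the joint spectral radius is at most $\gamma$ whenever some $P \succ 0$ satisfies $A_i^\top P A_i \preceq \gamma^2 P$ for every matrix $A_i$. The sketch below is only an illustration of that single-node case, not the paper's implementation; the example matrices, the cvxpy modeling layer, and the SCS solver are assumptions.

```python
# Minimal sketch (assumed tooling: numpy + cvxpy with the SCS solver), not the
# paper's implementation: the common quadratic Lyapunov function bound on the
# joint spectral radius, i.e. the simplest path-complete graph -- one node with
# a self-loop for each matrix.
import numpy as np
import cvxpy as cp

def cqlf_feasible(matrices, gamma):
    """Is there a common P > 0 with A_i^T P A_i <= gamma^2 P for all i?"""
    n = matrices[0].shape[0]
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> np.eye(n)]  # P >= I normalizes P and enforces P > 0
    constraints += [A.T @ P @ A << gamma**2 * P for A in matrices]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve(solver=cp.SCS)
    return problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

def cqlf_bound(matrices, tol=1e-3):
    """Bisect on gamma; the smallest feasible gamma upper-bounds the JSR."""
    lo = 0.0
    hi = max(np.linalg.norm(A, 2) for A in matrices) + tol  # always feasible
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cqlf_feasible(matrices, mid):
            hi = mid
        else:
            lo = mid
    return hi

if __name__ == "__main__":
    A1 = np.array([[0.6, 0.7], [-0.4, 0.2]])
    A2 = np.array([[0.3, -0.5], [0.6, 0.1]])
    print("CQLF upper bound on the JSR:", cqlf_bound([A1, A2]))
```

    Richer path-complete graphs, such as the De Bruijn graphs mentioned in the abstract, would instead assign one quadratic form per node and one semidefinite inequality per labeled edge.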

    Distributed Design for Decentralized Control using Chordal Decomposition and ADMM

    We propose a distributed design method for decentralized control by exploiting the underlying sparsity properties of the problem. Our method is based on chordal decomposition of sparse block matrices and the alternating direction method of multipliers (ADMM). We first apply a classical parameterization technique to restrict the optimal decentralized control problem to a convex problem that inherits the sparsity pattern of the original problem. The parameterization relies on a notion of strongly decentralized stabilization, and sufficient conditions are discussed to guarantee this notion. Then, chordal decomposition allows us to decompose the convex restriction into a problem with partially coupled constraints, and the framework of ADMM enables us to solve the decomposed problem in a distributed fashion. Consequently, the subsystems only need to share their model data with their direct neighbours, and no central computation is required. Numerical experiments demonstrate the effectiveness of the proposed method. Comment: 11 pages, 8 figures. Accepted for publication in the IEEE Transactions on Control of Network Systems.
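    For intuition on the ADMM step, the sketch below shows the generic consensus-ADMM pattern on a toy separable least-squares problem: each agent updates its local variable and its scaled dual on its own, and only the averaging step needs information exchange. This is a hedged illustration of the standard pattern, not the authors' chordal-decomposition algorithm; the local costs, the problem data, and all parameter values are invented placeholders.

```python
# Generic consensus-ADMM sketch (assumptions: local costs f_i(x) = 0.5*||A_i x - b_i||^2
# and a shared consensus variable z); illustrative only, not the paper's algorithm.
import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=200):
    n = As[0].shape[1]
    m = len(As)
    x = [np.zeros(n) for _ in range(m)]   # local copies held by each agent
    u = [np.zeros(n) for _ in range(m)]   # scaled dual variables
    z = np.zeros(n)                       # consensus variable
    for _ in range(iters):
        # x-updates: each agent solves its own regularized least-squares problem
        for i in range(m):
            H = As[i].T @ As[i] + rho * np.eye(n)
            x[i] = np.linalg.solve(H, As[i].T @ bs[i] + rho * (z - u[i]))
        # z-update: averaging, the only step that requires communication
        z = np.mean([x[i] + u[i] for i in range(m)], axis=0)
        # dual updates: each agent adjusts its own multiplier locally
        for i in range(m):
            u[i] = u[i] + x[i] - z
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    As = [rng.standard_normal((10, 3)) for _ in range(4)]
    bs = [rng.standard_normal(10) for _ in range(4)]
    z = consensus_admm(As, bs)
    # compare with the centralized least-squares solution of the stacked system
    A, b = np.vstack(As), np.concatenate(bs)
    print(z, np.linalg.lstsq(A, b, rcond=None)[0])
```

    In the paper's setting the coupling instead comes from the overlapping blocks produced by the chordal decomposition of the sparse constraints, which is what lets the corresponding consensus step stay local to neighbouring subsystems.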

    The Power of Online Learning in Stochastic Network Optimization

    In this paper, we investigate the power of online learning in stochastic network optimization with unknown system statistics a priori. We are interested in understanding how information and learning can be efficiently incorporated into system control techniques, and what the fundamental benefits of doing so are. We propose two Online Learning-Aided Control techniques, OLAC and OLAC2, that explicitly utilize past system information in current system control via a learning procedure called dual learning. We prove strong performance guarantees for the proposed algorithms: OLAC and OLAC2 achieve the near-optimal $[O(\epsilon), O([\log(1/\epsilon)]^2)]$ utility-delay tradeoff, and OLAC2 possesses an $O(\epsilon^{-2/3})$ convergence time. OLAC and OLAC2 are probably the first algorithms that simultaneously possess an explicit near-optimal delay guarantee and sub-linear convergence time. Simulation results also confirm the superior performance of the proposed algorithms in practice. To the best of our knowledge, our attempt is the first to explicitly incorporate online learning into stochastic network optimization and to demonstrate its power in both theory and practice.
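    For background, the sketch below implements the standard drift-plus-penalty controller from Lyapunov-based stochastic network optimization, the usual baseline in this literature: with parameter V it trades an O(1/V) utility gap for an O(V) average backlog. The single-queue flow-control model, the V*log(1+r) utility, and all parameter values are illustrative assumptions; the dual-learning augmentation described in the abstract is not reproduced here.

```python
# Drift-plus-penalty sketch for a single queue with admission control
# (illustrative assumptions throughout: Poisson arrivals, fixed service rate mu,
# utility V*log(1+r)); not the OLAC/OLAC2 algorithms from the abstract.
import numpy as np

def drift_plus_penalty(V=50.0, mu=1.0, lam=1.5, T=10_000, seed=0):
    rng = np.random.default_rng(seed)
    Q = 0.0                       # queue backlog; plays the role of a dual variable
    total_utility, total_backlog = 0.0, 0.0
    for _ in range(T):
        a = float(rng.poisson(lam))       # exogenous arrivals this slot
        # admit r in [0, a] maximizing V*log(1+r) - Q*r (closed-form maximizer)
        if Q > 0:
            r = min(max(V / Q - 1.0, 0.0), a)
        else:
            r = a                 # no backlog pressure: admit everything
        Q = max(Q + r - mu, 0.0)  # serve up to mu units per slot
        total_utility += np.log1p(r)
        total_backlog += Q
    return total_utility / T, total_backlog / T

if __name__ == "__main__":
    # larger V: utility gap shrinks roughly like O(1/V), backlog grows like O(V)
    for V in (10.0, 50.0, 200.0):
        u, q = drift_plus_penalty(V=V)
        print(f"V={V:>5.0f}  avg utility {u:.3f}  avg backlog {q:.1f}")
```

    With ε = 1/V this baseline attains an [O(ε), O(1/ε)] utility-delay tradeoff; the $[O(\epsilon), O([\log(1/\epsilon)]^2)]$ tradeoff quoted above improves the delay component.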
