
    D-SVM over Networked Systems with Non-Ideal Linking Conditions

    This paper considers distributed optimization algorithms, with application to binary classification via distributed support vector machines (D-SVM) over multi-agent networks subject to link nonlinearities. The agents cooperatively solve a consensus-constrained distributed optimization problem via continuous-time dynamics, while the links are subject to strongly sign-preserving odd nonlinearities; logarithmic quantization and clipping (saturation) are two examples. In contrast to the existing literature, which mostly assumes ideal links and perfect information exchange over linear channels, we show how general sector-bounded models affect convergence to the optimizer (i.e., the SVM classifier) over dynamic balanced directed networks. In general, any odd sector-bounded nonlinear mapping can be applied to our dynamics. The main challenge is to show that the proposed system dynamics always have one zero eigenvalue (associated with the consensus subspace) while all other eigenvalues have negative real parts; this is established by recalling arguments from matrix perturbation theory. The solution is then shown to converge to the agreement state under certain conditions. For example, the admissible gradient tracking (GT) step size is tighter than in the linear case by factors related to the upper and lower sector bounds. To the best of our knowledge, no existing work in the distributed optimization and learning literature considers such non-ideal link conditions.
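
    The setting can be illustrated with a toy simulation. The sketch below is not the paper's D-SVM dynamics; it runs a plain nonlinear consensus flow over a balanced directed cycle in which every transmitted state difference passes through an odd, sector-bounded nonlinearity (here, clipping). The network, step size, and clipping level are illustrative assumptions.

```python
import numpy as np

def g(z, limit=0.5):
    """Odd, sector-bounded link nonlinearity: clipping (saturation)."""
    return np.clip(z, -limit, limit)

# Balanced directed cycle on 4 agents (every node: in-degree = out-degree).
W = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 0., 0., 0.]])

x = np.array([4.0, -1.0, 2.5, 0.5])  # initial local states
dt = 0.05                            # Euler step for the continuous-time flow

for _ in range(4000):
    # dx_i/dt = sum_j W[i, j] * g(x_j - x_i): a nonlinear Laplacian flow
    dx = np.array([np.sum(W[i] * g(x - x[i])) for i in range(len(x))])
    x = x + dt * dx

print(x)  # the states reach agreement despite the clipped links
```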

    Adaptive Consensus: A network pruning approach for decentralized optimization

    We consider network-based decentralized optimization problems, where each node in the network possesses a local function and the objective is to collectively attain a consensus solution that minimizes the sum of all the local functions. A major challenge in decentralized optimization is the reliance on communication, which remains a considerable bottleneck in many applications. To address this challenge, we propose an adaptive randomized communication-efficient algorithmic framework that reduces the volume of communication by periodically tracking the disagreement error and judiciously selecting the most influential and effective edges at each node for communication. Within this framework, we present two algorithms: Adaptive Consensus (AC) to solve the consensus problem and Adaptive Consensus based Gradient Tracking (AC-GT) to solve smooth strongly convex decentralized optimization problems. We establish strong theoretical convergence guarantees for the proposed algorithms and quantify their performance in terms of various algorithmic parameters under standard assumptions. Finally, numerical experiments showcase the effectiveness of the framework in significantly reducing the information exchange required to achieve a consensus solution.
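
    The edge-selection idea can be mimicked in a few lines. The following is a hedged sketch, not the authors' AC algorithm: each round, every node mixes only over its k most-disagreeing edges and skips the rest. The complete graph, the disagreement score, the budget k, and the mixing step 0.2 are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 2                      # nodes; edges kept per node each round
x = rng.normal(size=n)           # local values the nodes try to agree on
A = np.ones((n, n)) - np.eye(n)  # complete graph, for simplicity

for _ in range(100):
    x_new = x.copy()
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        scores = np.abs(x[nbrs] - x[i])         # disagreement per edge
        active = nbrs[np.argsort(scores)[-k:]]  # keep only the top-k edges
        x_new[i] += 0.2 * np.sum(x[active] - x[i])  # partial mixing step
    x = x_new

print(np.ptp(x))  # the spread shrinks while each node uses k << n-1 edges
```

    Because the self-weight 1 - 0.2k stays positive, each update is a convex combination of neighboring values, so the states remain in the convex hull of the initial values while the disagreement contracts.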

Consensus, Prediction and Optimization in Directed Networks

    This dissertation develops theory and algorithms for distributed consensus in multi-agent networks. The models considered are opinion dynamics models based on the well-known DeGroot model. We study three related topics: consensus in networks with leaders, consensus prediction, and distributed optimization.

    First, we revisit the problem of agreement seeking in a weighted directed network in the presence of leaders. We develop new sufficient conditions, weaker than existing ones, for guaranteeing consensus under both fixed and switching network topologies, emphasizing the importance not only of persistent connectivity between the leader and the followers but also of the strength of the connections. We then study the problem of a leader aiming to maximize its influence on the opinions of the network agents through targeted connections with a limited number of agents, possibly in the presence of another leader holding a competing opinion. We reveal fundamental properties of leader influence, defined in terms of either the transient behavior or the achieved steady-state opinions of the network agents. In particular, not only is the degree of this influence a supermodular set function, but its continuous relaxation is also convex for any strongly connected directed network. These results pave the way for developing efficient approximation algorithms admitting quality certifications, which, when combined, provide effective tools and sharper analysis for optimal influence spreading in large networks.

    Second, we introduce and investigate problems of network monitoring and consensus prediction. Here, an observer, without exact knowledge of the network, seeks to determine the asymptotic agreement value in the shortest possible time by monitoring a subset of the agents. We uncover a fundamental limit on the minimum required monitoring time for the case of a single observed node and analyze the case of multiple observed nodes. We provide conditions for achieving the limit in the former case and develop algorithms toward achieving conjectured bounds in the latter through local observation and local computation.

    Third, we study a distributed optimization problem where a network of agents seeks to minimize the sum of the agents' individual objective functions while each agent may be associated with a separate local constraint, and we develop new distributed algorithms for solving it. In these algorithms, consensus prediction is employed as a means to achieve fast convergence rates, possibly in finite time. An advantage of our algorithms is that they work under milder assumptions on the network weight matrix than are commonly made in the literature: most distributed algorithms require undirected networks, and consensus-based algorithms can handle directed networks only under the assumption that the weight matrix is doubly stochastic (i.e., both row stochastic and column stochastic) or, in some recent literature, column stochastic. Our algorithms work for directed networks and require only row stochasticity, a mild assumption, which matters because doubly stochastic or column stochastic weight matrices can be hard to arrange locally, especially under broadcast-based communication. We achieve this relaxation to the row-stochastic assumption through a distributed rescaling technique. Next, we develop a unified convergence analysis of a distributed projected subgradient algorithm and a variation of it that applies to both unconstrained and constrained problems without assuming boundedness or commonality of the local constraint sets.
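
    For concreteness, here is a minimal sketch of the row-stochastic DeGroot iteration underlying these results; the weight matrix and initial opinions are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

# Row-stochastic (not doubly stochastic) weights over a directed network:
# rows sum to 1, columns need not.
W = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.5, 0.0, 0.5]])
assert np.allclose(W.sum(axis=1), 1.0)

x0 = np.array([1.0, 5.0, -2.0])  # initial opinions
x = x0.copy()
for _ in range(200):
    x = W @ x                    # each agent averages its in-neighbors

# The agents agree, but on a weighted mean given by the left Perron
# vector pi of W (pi @ W = pi), not the plain average -- the asymmetry
# that the dissertation's rescaling technique is designed to handle.
w, v = np.linalg.eig(W.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
print(x, pi @ x0)  # consensus state vs. predicted agreement value
```

    Knowing that the limit is pi @ x0 is also the essence of consensus prediction: an observer who can estimate pi from partial observations can announce the agreement value before the iteration converges.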