
    Asynchronous Decentralized Optimization in Directed Networks

    A popular asynchronous protocol for decentralized optimization is randomized gossip, where a pair of neighboring nodes concurrently update via pairwise averaging. In practice, this creates deadlocks and is vulnerable to information delays. It can also be problematic if a node is unable to respond or can only access its privately preserved local dataset. To address these issues simultaneously, this paper proposes an asynchronous decentralized algorithm, called APPG, with directed communication where each node updates asynchronously and independently of any other node. If local functions are strongly convex with Lipschitz-continuous gradients, each node of APPG converges to the same optimal solution at a rate of O(\lambda^k), where \lambda\in(0,1) and the virtual counter k increases by 1 whenever any node updates. The superior performance of APPG is validated on a logistic regression problem against state-of-the-art methods in terms of linear speedup and system implementations.
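
    As a point of reference for the gossip baseline criticized above (not APPG itself), a minimal randomized-gossip averaging sketch in Python follows; the graph, initial values and number of rounds are illustrative assumptions.

    import random

    # Illustrative undirected graph as an adjacency list (assumption, not from the paper).
    neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    x = {0: 4.0, 1: 0.0, 2: 2.0, 3: 6.0}   # local values to be averaged

    for _ in range(200):
        i = random.choice(list(x))          # a random node wakes up
        j = random.choice(neighbors[i])     # and picks one of its neighbors
        avg = (x[i] + x[j]) / 2.0           # pairwise averaging step
        x[i] = x[j] = avg                   # both nodes block until the exchange completes

    print(x)  # every value approaches the global average 3.0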

    Cooperative Source Seeking via Networked Multi-vehicle Systems

    This paper studies the cooperative source seeking problem via a networked multi-vehicle system. In contrast to the existing literature, the multi-vehicle system is driven to the source position that maximizes the aggregate of multiple unknown scalar fields, and each sensor-enabled vehicle only samples measurements of one scalar field. Thus, a single vehicle is unable to localize the source and has to cooperate with its neighboring vehicles. By jointly exploiting the ideas of the consensus algorithm and stochastic extremum seeking (ES), this paper proposes novel distributed stochastic ES controllers, which are gradient-free and do not rely on any absolute information, such that the multi-vehicle system simultaneously approaches the source position. The effectiveness of the proposed controllers is proved for quadratic scalar fields. Finally, illustrative examples are included to validate the theoretical results.
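
    A minimal single-vehicle, discrete-time extremum-seeking loop illustrating the dither-and-demodulate idea the controllers build on; the quadratic field, gains and dither frequency are illustrative assumptions, not the paper's multi-vehicle controller.

    import math

    def field(p):                 # unknown scalar field, maximal at p = 2 (assumed for illustration)
        return -(p - 2.0) ** 2

    p_hat, a, omega, gain = 0.0, 0.2, 1.0, 0.005   # estimate, dither amplitude, frequency, step size
    for k in range(5000):
        dither = a * math.sin(omega * k)
        y = field(p_hat + dither)                        # sample the field at the perturbed position
        grad_est = (2.0 / a) * math.sin(omega * k) * y   # demodulation yields a (noisy) gradient estimate
        p_hat += gain * grad_est                         # climb the estimated gradient

    print(round(p_hat, 2))  # settles near the maximizer p = 2; a high-pass filter on y would reduce the ripple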

    Distributed Algorithms for Robust Convex Optimization via the Scenario Approach

    This paper proposes distributed algorithms to solve robust convex optimization (RCO) when the constraints are affected by nonlinear uncertainty. We adopt a scenario approach by randomly sampling the uncertainty set. To facilitate the computational task, instead of using a single centralized processor to obtain a "global solution" of the scenario problem (SP), we resort to multiple interconnected processors that are distributed among different nodes of a network to simultaneously solve the SP. We then propose a primal-dual subgradient algorithm and a random projection algorithm to solve the SP in a distributed fashion over undirected and directed graphs, respectively. Both algorithms are given in an explicit recursive form with simple iterations, which are especially suited for processors with limited computational capability. We show that, if the underlying graph is strongly connected, each node asymptotically computes a common optimal solution to the SP at a convergence rate of O(1/\sum_{t=1}^k\zeta^t), where \{\zeta^t\} is a sequence of appropriately decreasing stepsizes. That is, the RCO is effectively solved in a distributed way. The relations with the existing literature on robust convex programs are thoroughly discussed, and an example of robust system identification is included to validate the effectiveness of our distributed algorithms. Comment: 15 pages, 4 figures
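
    To make the consensus-plus-subgradient structure concrete, here is a minimal distributed subgradient sketch with averaging over an undirected path graph; the quadratic local costs, Metropolis weights and stepsizes are illustrative assumptions and do not reproduce the paper's primal-dual or random-projection schemes.

    import numpy as np

    b = np.array([1.0, 3.0, 5.0])             # local costs f_i(x) = (x - b_i)^2 (assumed)
    W = np.array([[2/3, 1/3, 0.0],            # Metropolis mixing weights for the path 0-1-2
                  [1/3, 1/3, 1/3],
                  [0.0, 1/3, 2/3]])

    x = np.zeros(3)                           # each node keeps a scalar estimate
    for k in range(1, 2001):
        step = 1.0 / k                        # diminishing stepsize
        grad = 2.0 * (x - b)                  # local gradients evaluated at the local estimates
        x = W @ x - step * grad               # consensus averaging followed by a subgradient step

    print(x)  # all entries approach argmin_x sum_i (x - b_i)^2 = 3.0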

    Distributed Algorithms for Computation of Centrality Measures in Complex Networks

    This paper is concerned with the distributed computation of several commonly used centrality measures in complex networks. In particular, we propose deterministic algorithms, which converge in finite time, for the distributed computation of the degree, closeness and betweenness centrality measures in directed graphs. Regarding eigenvector centrality, we consider the PageRank problem as its typical variant, and design distributed randomized algorithms to compute PageRank for both fixed and time-varying graphs. A key feature of the proposed algorithms is that they do not require knowledge of the network size, which can be simultaneously estimated at every node, and that they are clock-free. To address the PageRank problem of time-varying graphs, we introduce the novel concept of the persistent graph, which eliminates the effect of spamming nodes. Moreover, we prove that these algorithms converge almost surely and in the L^p sense. Finally, the effectiveness of the proposed algorithms is illustrated via extensive simulations using a classical benchmark. Comment: 15 pages, 8 figures, (conditionally accepted), IEEE Transactions on Automatic Control, 201
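
    For context, the centralized power iteration that such randomized algorithms emulate in a distributed fashion is sketched below; the 4-page link matrix and damping parameter m = 0.15 are standard illustrative choices, not the paper's algorithm.

    import numpy as np

    # Column-stochastic link matrix of a 4-page web graph (illustrative assumption).
    A = np.array([[0.0, 0.5, 0.0, 0.0],
                  [1.0, 0.0, 0.5, 0.0],
                  [0.0, 0.5, 0.0, 1.0],
                  [0.0, 0.0, 0.5, 0.0]])
    m, n = 0.15, A.shape[0]          # damping parameter and number of pages

    r = np.ones(n) / n               # start from the uniform distribution
    for _ in range(100):
        r = (1 - m) * A @ r + m / n  # PageRank update r = (1 - m) A r + (m/n) 1

    print(r)                         # stationary PageRank vector (entries sum to 1)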

    Distributed Discrete-time Optimization in Multi-agent Networks Using only Sign of Relative State

    This paper proposes distributed discrete-time algorithms to cooperatively solve an additive cost optimization problem in multi-agent networks. The striking feature lies in the use of only the sign of relative state information between neighbors, which substantially differentiates our algorithms from others in the existing literature. We first interpret the proposed algorithms in terms of the penalty method in optimization theory and then perform a non-asymptotic analysis to study convergence for static network graphs. Compared with the celebrated distributed subgradient algorithms, which use the exact relative state information, the convergence speed is essentially unaffected by the loss of information. We also study how noise in the relative state information and randomly activated graphs affect the performance of our algorithms. Finally, we validate the theoretical results on a class of distributed quantile regression problems. Comment: Part of this work was presented at the American Control Conference (ACC) 2018; first version posted on arXiv in Sep. 2017; IEEE Transactions on Automatic Control, 201
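
    A minimal sketch of the sign-only idea in the scalar case; the local quadratic costs, penalty weight and stepsizes are illustrative assumptions rather than the exact algorithm or parameters from the paper.

    import numpy as np

    b = np.array([1.0, 2.0, 6.0])                  # local costs f_i(x) = (x - b_i)^2 (assumed)
    neighbors = {0: [1], 1: [0, 2], 2: [1]}        # path graph
    x = np.zeros(3)
    lam = 20.0                                     # penalty weight on disagreement

    for k in range(1, 5001):
        step = 1.0 / k
        x_new = x.copy()
        for i in range(3):
            push = sum(np.sign(x[j] - x[i]) for j in neighbors[i])   # only signs of relative states are used
            x_new[i] = x[i] + step * (lam * push - 2.0 * (x[i] - b[i]))
        x = x_new

    print(x)  # entries approach the common minimizer of sum_i (x - b_i)^2, i.e. 3.0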

    How to Stop Consensus Algorithms, locally?

    This paper studies the problem of locally stopping distributed consensus algorithms over networks, where each node updates its state by interacting with its neighbors and decides by itself whether a certain level of agreement has been achieved among the nodes. Since an individual node is unable to access the states of nodes beyond its neighbors, this problem is challenging. In this work, we first define the stopping problem for generic distributed algorithms. Then, a distributed algorithm is explicitly provided for each node to stop consensus updating by exploring the relationship between the so-called local and global consensus. Finally, we show both in theory and simulation that its effectiveness depends on both the network size and the network structure.
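
    One common way to detect agreement locally (not necessarily the construction in this paper) is to run max- and min-consensus alongside the averaging update and stop when their gap falls below a tolerance; the sketch below assumes the network diameter, or an upper bound D on it, is known to every node.

    import numpy as np

    neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path graph, diameter D = 3 (assumed known)
    x = np.array([0.0, 2.0, 4.0, 10.0])                  # consensus states
    D, eps = 3, 1e-3

    hi, lo = x.copy(), x.copy()                          # max-/min-consensus companion states
    for k in range(1, 10001):
        # standard averaging update on the path graph
        x = np.array([x[i] + 0.3 * sum(x[j] - x[i] for j in neighbors[i]) for i in range(4)])
        # companions propagate extrema one hop per step
        hi = np.array([max([hi[i]] + [hi[j] for j in neighbors[i]]) for i in range(4)])
        lo = np.array([min([lo[i]] + [lo[j] for j in neighbors[i]]) for i in range(4)])
        if k % D == 0:                                   # after D steps every node knows the global extrema
            if hi[0] - lo[0] < eps:                      # node 0 decides locally to stop
                print("node 0 stops at iteration", k, "with state", x[0])
                break
            hi, lo = x.copy(), x.copy()                  # restart the companions from the current states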

    Distributed Adaptive Newton Methods with Globally Superlinear Convergence

    This paper considers the distributed optimization problem over a network, where the global objective is to optimize a sum of local functions using only local computation and communication. Since the existing algorithms either adopt a linear consensus mechanism, which converges at best linearly, or assume that each node starts sufficiently close to an optimal solution, they cannot achieve globally superlinear convergence. To break through the linear consensus rate, we propose a finite-time set-consensus method and then incorporate it into Polyak's adaptive Newton method, leading to our distributed adaptive Newton algorithm (DAN). To avoid transmitting local Hessians, we adopt a low-rank approximation idea to compress the Hessian and design a communication-efficient variant, DAN-LA. The size of transmitted messages in DAN-LA is then reduced to O(p) per iteration, where p is the dimension of the decision vector, which matches first-order methods. We show that DAN and DAN-LA can globally achieve quadratic and superlinear convergence rates, respectively. Numerical experiments on logistic regression problems are finally conducted to show the advantages over existing methods. Comment: Submitted to IEEE Transactions on Automatic Control. 14 pages, 4 figures
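
    To give a flavor of how a Hessian can be compressed to O(p) numbers, here is a rank-1 sketch that sends the leading eigenpair plus a scalar preserving the trace; it only illustrates the low-rank idea and is not the specific compression used in DAN-LA.

    import numpy as np

    def compress(H):
        """Summarize a symmetric p x p Hessian with O(p) numbers: top eigenpair + trace remainder."""
        vals, vecs = np.linalg.eigh(H)
        lam, v = vals[-1], vecs[:, -1]               # leading eigenvalue and eigenvector
        rest = (np.trace(H) - lam) / (H.shape[0] - 1)
        return lam, v, rest                          # p + 2 numbers instead of p*(p+1)/2

    def decompress(lam, v, rest, p):
        """Rebuild a rank-1-plus-scaled-identity surrogate of the Hessian."""
        return rest * np.eye(p) + (lam - rest) * np.outer(v, v)

    p = 5
    A = np.random.randn(p, p)
    H = A @ A.T + np.eye(p)                          # a synthetic positive definite Hessian (assumption)
    H_hat = decompress(*compress(H), p)
    print(np.linalg.norm(H - H_hat) / np.linalg.norm(H))   # relative approximation error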

    Wasserstein Distributionally Robust Shortest Path Problem

    This paper proposes a data-driven distributionally robust shortest path (DRSP) model where the distribution of the travel time in the transportation network can only be partially observed through a finite number of samples. Specifically, we aim to find an optimal path that minimizes the worst-case \alpha-reliable mean-excess travel time (METT) over a Wasserstein ball, which is centered at the empirical distribution of the sample dataset and whose radius quantifies the confidence level. In sharp contrast to the existing DRSP models, our model is equivalently reformulated as a tractable mixed 0-1 convex problem, e.g., a 0-1 linear program or a 0-1 second-order cone program. Moreover, we explicitly derive the distribution achieving the worst-case METT by simply perturbing each sample. Experiments demonstrate the advantages of our DRSP model in terms of out-of-sample performance and computational complexity. Finally, our DRSP model is easily extended to solve the DR bi-criteria shortest path problem and the minimum cost flow problem.
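
    In generic notation (the symbols \hat{P}_N for the empirical distribution of the N samples, \epsilon for the ball radius and y for the path variable are introduced here only for illustration), the model takes the form

        \min_{y \in \mathcal{Y}} \; \sup_{Q \in \mathbb{B}_\epsilon(\hat{P}_N)} \mathrm{METT}_\alpha^{Q}(y), \qquad \mathbb{B}_\epsilon(\hat{P}_N) = \{\, Q : W(Q, \hat{P}_N) \le \epsilon \,\},

    where W(\cdot,\cdot) denotes the Wasserstein distance between distributions of the travel times.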

    Depth Control of Model-Free AUVs via Reinforcement Learning

    In this paper, we consider depth control problems of an autonomous underwater vehicle (AUV) for tracking desired depth trajectories. Due to the unknown dynamical model of the AUV, the problems cannot be solved by most model-based controllers. To this end, we formulate the depth control problems of the AUV as continuous-state, continuous-action Markov decision processes (MDPs) with unknown transition probabilities. Based on the deterministic policy gradient (DPG) and neural network approximation, we propose a model-free reinforcement learning (RL) algorithm that learns a state-feedback controller from sampled trajectories of the AUV. To improve the performance of the RL algorithm, we further propose a batch-learning scheme that replays previous prioritized trajectories. We illustrate with simulations that our model-free method is comparable even to model-based controllers such as LQI and NMPC. Moreover, we validate the effectiveness of the proposed RL algorithm on a seafloor dataset sampled from the South China Sea.
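
    The deterministic policy gradient underlying such controllers (in the generic form of Silver et al., 2014) is

        \nabla_\theta J(\mu_\theta) = \mathbb{E}_{s \sim \rho^{\mu}}\big[ \nabla_\theta \mu_\theta(s)\, \nabla_a Q^{\mu}(s,a)\big|_{a=\mu_\theta(s)} \big],

    where \mu_\theta is the state-feedback controller and, in the model-free setting, Q^{\mu} is replaced by a neural-network critic fitted to the sampled trajectories.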

    Second-order Conic Programming Approach for Wasserstein Distributionally Robust Two-stage Linear Programs

    This paper proposes a second-order conic programming (SOCP) approach to solve distributionally robust two-stage stochastic linear programs over 1-Wasserstein balls. We start from the case with distribution uncertainty only in the objective function and exactly reformulate it as an SOCP problem. Then, we study the case with distribution uncertainty only in the constraints and show that such a robust program is generally NP-hard, as it involves a norm maximization problem over a polyhedron. However, it reduces to an SOCP problem if the extreme points of the polyhedron are given a priori. This motivates us to design a constraint generation algorithm with provable convergence to approximately solve the NP-hard problem. In sharp contrast to the existing literature, the distribution achieving the worst-case cost is given as an "empirical" distribution obtained by simply perturbing each sample, for both cases. Finally, experiments illustrate the advantages of the proposed model in terms of out-of-sample performance and computational complexity.
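
    Written generically (the second-stage data A, q, h(\xi) and T(\xi) are chosen here only for illustration), a two-stage distributionally robust linear program over a 1-Wasserstein ball \mathbb{B}_\epsilon(\hat{P}_N) reads

        \min_{x \in X} \; c^\top x + \sup_{Q \in \mathbb{B}_\epsilon(\hat{P}_N)} \mathbb{E}_{\xi \sim Q}\Big[ \min_{y \ge 0} \{\, q^\top y : A y \ge h(\xi) - T(\xi) x \,\} \Big],

    and the two cases in the abstract correspond to the uncertainty entering only the second-stage objective or only the second-stage constraints.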