5,880 research outputs found

    A Distributed Algorithm for Least Square Solutions of Linear Equations

    A distributed discrete-time algorithm is proposed for multi-agent networks to achieve a common least squares solution of a group of linear equations, in which each agent knows only some of the equations and can receive information only from its nearby neighbors. For fixed, connected, and undirected networks, the proposed discrete-time algorithm drives each agent's solution estimate to converge exponentially fast to the same least squares solution. Moreover, the convergence does not require careful choices of time-varying small step sizes.
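    The abstract does not spell out the authors' iteration. As a rough sketch of the setting only, the following implements a standard gradient-tracking scheme (not necessarily the paper's algorithm) in which each agent holds one row of A and reaches the exact least-squares solution with a fixed step size:

```python
import numpy as np

# Each agent i holds one equation a_i^T x = b_i; the rows of A are split across agents.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
m, n = A.shape

W = np.full((m, m), 1.0 / m)    # doubly stochastic mixing matrix (complete graph)
alpha = 0.05                    # fixed step size (no diminishing schedule needed)

def grad(i, x):
    """Gradient of agent i's local cost 0.5*(a_i^T x - b_i)^2."""
    a = A[i]
    return a * (a @ x - b[i])

x = np.zeros((m, n))                              # agents' solution estimates
y = np.array([grad(i, x[i]) for i in range(m)])   # gradient trackers

for _ in range(4000):
    x_new = W @ x - alpha * y
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(m)])
    x = x_new

x_star = np.linalg.lstsq(A, b, rcond=None)[0]     # centralized reference solution
```

    All estimates agree with the centralized least-squares solution, illustrating exact convergence under a constant step size.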

    Finite-Time Distributed Linear Equation Solver for Minimum l_1 Norm Solutions

    This paper proposes distributed algorithms for multi-agent networks to achieve, in finite time, a solution to a linear equation Ax=b where A has full row rank, and the minimum l_1-norm solution in the underdetermined case (where A has more columns than rows). The underlying network is assumed to be undirected and fixed, and an analytical proof is provided that the proposed algorithm drives all agents' individual states to converge to a common value, viz. a solution of Ax=b, which is the minimum l_1-norm solution in the underdetermined case. Numerical simulations are also provided as validation of the proposed algorithms.
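    As a centralized reference for the target of such an algorithm (not the distributed scheme itself), the minimum l_1-norm solution of an underdetermined Ax=b can be computed by the standard linear-programming reformulation with auxiliary bound variables t:

```python
import numpy as np
from scipy.optimize import linprog

# Underdetermined system: A has more columns than rows.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 1.0])
m, n = A.shape

# min ||x||_1 s.t. Ax = b, via variables (x, t) with -t <= x <= t:
#   minimize 1^T t  subject to  A x = b,  x - t <= 0,  -x - t <= 0.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([A, np.zeros((m, n))])
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
b_ub = np.zeros(2 * n)
bounds = [(None, None)] * n + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
x_l1 = res.x[:n]   # minimum l_1-norm solution
```

    For this instance the sparsest weight goes on the column with the largest leverage, giving x = (0, 0, 1).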

    A Distributed Algorithm for Solving Linear Algebraic Equations Over Random Networks

    In this paper, we consider the problem of solving linear algebraic equations of the form Ax=b among multiple agents which seek a solution by using local information in the presence of random communication topologies. The equation is solved by m agents where each agent only knows a subset of rows of the partitioned matrix [A,b]. We formulate the problem so that the formulation does not require knowledge of the distribution of the random interconnection graphs. Therefore, this framework includes asynchronous updates and unreliable communication protocols without a B-connectivity assumption. We apply the random Krasnoselskii-Mann iterative algorithm, which converges almost surely and in mean square to a solution of the problem for any matrix A, vector b, and initial conditions of the agents' states. We demonstrate that the limit point to which the agents' states converge is determined by the unique solution of a convex optimization problem regardless of the distribution of the random communication graphs. Eventually, we show by two numerical examples that the rate of convergence of the algorithm cannot be guaranteed. Comment: 10 pages, 2 figures; a preliminary version of this paper appears without proofs in the Proceedings of the 57th IEEE Conference on Decision and Control, Miami Beach, FL, USA, December 17-19, 2018
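    A minimal deterministic sketch of a Krasnoselskii-Mann iteration in this spirit (not the paper's randomized scheme): take T as the average of the agents' projections onto their own equations' solution sets, a nonexpansive map whose fixed points solve Ax=b, and relax toward it:

```python
import numpy as np

# Two agents, each knowing one hyperplane a_i^T x = b_i.
a = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([1.0, 2.0])

def proj(i, x):
    """Orthogonal projection of x onto the hyperplane a_i^T x = b_i."""
    ai = a[i]
    return x - (ai @ x - b[i]) / (ai @ ai) * ai

def T(x):
    """Average of the projections: nonexpansive, with Fix(T) = solution set of Ax=b."""
    return np.mean([proj(i, x) for i in range(len(b))], axis=0)

x = np.array([5.0, -3.0])
alpha = 0.5                           # Krasnoselskii-Mann relaxation parameter
for _ in range(200):
    x = (1 - alpha) * x + alpha * T(x)   # x_{k+1} = (1-a) x_k + a T(x_k)
```

    In the randomized version analyzed in the paper, which agents' projections enter T at each step would itself be random.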

    Dual Set Membership Filter with Minimizing Nonlinear Transformation of Ellipsoid

    In this paper, we propose a dual set membership filter for nonlinear dynamic systems with unknown but bounded noises; it has three distinctive properties. Firstly, the nonlinear system is translated into a linear system by leveraging semi-infinite programming rather than linearizing the nonlinear function. In fact, the semi-infinite program finds an ellipsoid bounding the nonlinear transformation of an ellipsoid, which aims to compute a tight ellipsoid covering the state. Secondly, the duality result of the semi-infinite program is derived through a rigorous analysis, and a first-order Frank-Wolfe method is developed to solve it efficiently with low computational complexity. Thirdly, the proposed filter can take advantage of the linear set membership filter framework and can work online without solving a semidefinite programming problem. Furthermore, we apply the dual set membership filter to a typical scenario of mobile robot localization. Finally, two illustrative examples in the simulations show the advantages and effectiveness of the dual set membership filter. Comment: 26 pages, 9 figures
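    To illustrate the kind of first-order Frank-Wolfe method mentioned (on a toy problem, not the paper's dual semi-infinite program): minimize a smooth convex function over a polytope using only a linear minimization oracle and the standard 2/(k+2) step:

```python
import numpy as np

# Minimize f(x) = ||x - p||^2 over the probability simplex in R^3.
p = np.array([0.2, 0.3, 0.5])

def grad(x):
    return 2.0 * (x - p)

x = np.array([1.0, 0.0, 0.0])          # start at a vertex of the simplex
for k in range(5000):
    g = grad(x)
    s = np.zeros_like(x)
    s[np.argmin(g)] = 1.0              # linear minimization oracle: best vertex
    gamma = 2.0 / (k + 2.0)            # standard diminishing Frank-Wolfe step
    x = (1 - gamma) * x + gamma * s    # move toward the oracle's vertex
```

    Each iteration costs only a gradient and a vertex search, which is why Frank-Wolfe methods are attractive when projections or semidefinite solves are expensive.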

    On Reconstructability of Quadratic Utility Functions from the Iterations in Gradient Methods

    In this paper, we consider a scenario where an eavesdropper can read the content of messages transmitted over a network. The nodes in the network are running a gradient algorithm to optimize a quadratic utility function, where this utility optimization is part of a decision-making process by an administrator. We are interested in understanding the conditions under which the eavesdropper can reconstruct the utility function, or a scaled version of it, and as a result gain insight into the decision-making process. We establish that if the parameter of the gradient algorithm, i.e., the step size, is chosen appropriately, the task of reconstruction becomes practically impossible for a class of Bayesian filters with uniform priors. We establish what step-size rules should be employed to ensure this.
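    A small sketch of why reconstruction is possible at all when the step size is benign (all quantities here are illustrative, not from the paper): for a quadratic utility f(x) = 0.5 x^T Q x - r^T x, the observed iterate differences are linear in (Q, r) up to the step-size scaling, so an eavesdropper can recover the scaled utility by least squares:

```python
import numpy as np

# Utility f(x) = 0.5 x^T Q x - r^T x; gradient iteration x_{k+1} = x_k - gamma*(Q x_k - r).
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])
r = np.array([1.0, 1.0])
gamma = 0.1
n = 2

# Eavesdropper records the iterates.
xs = [np.array([2.0, -1.0])]
for _ in range(8):
    x = xs[-1]
    xs.append(x - gamma * (Q @ x - r))

# Differences d_k = x_{k+1} - x_k = -M x_k + c, with M = gamma*Q and c = gamma*r.
# Stack one linear equation per observed component and solve by least squares.
rows, rhs = [], []
for k in range(len(xs) - 1):
    d = xs[k + 1] - xs[k]
    for i in range(n):
        row = np.zeros(n * n + n)
        row[i * n:(i + 1) * n] = -xs[k]   # coefficients multiplying M[i, :]
        row[n * n + i] = 1.0              # coefficient multiplying c[i]
        rows.append(row)
        rhs.append(d[i])
theta = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
M_hat = theta[:n * n].reshape(n, n)       # recovered gamma*Q
c_hat = theta[n * n:]                     # recovered gamma*r
```

    Only the scaled pair (gamma*Q, gamma*r) is identifiable from the iterates, which is exactly why the paper's step-size rules target defeating this kind of inference.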

    A Fast Converging Distributed Solver for Linear Systems with Generalised Diagonal Dominance

    This paper proposes a new distributed algorithm for solving linear systems associated with a sparse graph under a generalised diagonal dominance assumption. The algorithm runs iteratively on each node of the graph, with low complexities of local information exchange between neighbouring nodes, local computation and local storage. For an acyclic graph under the condition of diagonal dominance, the algorithm is shown to converge to the correct solution in a finite number of iterations equalling the diameter of the graph. For a loopy graph, the algorithm is shown to converge to the correct solution asymptotically. Simulations verify that the proposed algorithm significantly outperforms the classical Jacobi method and a recent distributed linear system solver based on average consensus and orthogonal projection. Comment: 10 pages
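    For reference, the classical Jacobi method used as a baseline above is itself naturally distributed: each node updates its own coordinate using only the diagonal entry and its neighbours' previous values, and strict diagonal dominance guarantees convergence. A minimal sketch:

```python
import numpy as np

# Strictly diagonally dominant system, so Jacobi iteration converges.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 4.0, 1.0],
              [1.0, 1.0, 4.0]])
b = np.array([6.0, 6.0, 6.0])

D = np.diag(A)                 # diagonal entries (node-local)
R = A - np.diag(D)             # off-diagonal part (neighbour couplings)
x = np.zeros(3)
for _ in range(100):
    x = (b - R @ x) / D        # x_i^+ uses only neighbours' previous values
```

    The contraction factor here is the spectral radius of D^{-1}R (0.5 for this instance), which is the kind of rate the proposed solver is reported to beat.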

    Solving Linear Equations with Separable Problem Data over Directed Networks

    This paper deals with linear algebraic equations where the global coefficient matrix and constant vector are given, respectively, by the summation of the coefficient matrices and constant vectors of the individual agents. Our approach is based on reformulating the original problem as an unconstrained optimization problem. Based on this exact reformulation, we first provide a gradient-based, centralized algorithm which serves as a reference for the ensuing design of distributed algorithms. We propose two sets of exponentially stable continuous-time distributed algorithms that do not require the individual agent matrices to be invertible and are based on estimating non-distributed terms in the centralized algorithm using dynamic average consensus. The first algorithm works for time-varying weight-balanced directed networks, and the second works for general directed networks whose communication graphs might not be balanced. Numerical simulations illustrate our results. Comment: 6 pages, 2 figures
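    The dynamic average consensus building block mentioned above can be sketched in discrete time (a standard estimator, not the paper's continuous-time design): each agent mixes its estimate with its neighbours' and adds the increment of its own local signal, so the estimates track the network-wide average of the signals:

```python
import numpy as np

# Three agents on a cycle; W is doubly stochastic.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

def r(t):
    """Each agent's local time-varying signal; agent i's signal settles to c_i."""
    c = np.array([1.0, 2.0, 3.0])
    d = np.array([1.0, -1.0, 0.0])
    return c + 0.5 ** t * d

z = r(0).copy()                          # initialize at the local signals
for t in range(100):
    z = W @ z + (r(t + 1) - r(t))        # mix, then inject the signal increment
```

    The update preserves the network sum (1^T z(t) = 1^T r(t)), so every estimate converges to the average of the limiting signals, here (1+2+3)/3 = 2.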

    Variational perturbation and extended Plefka approaches to dynamics on random networks: the case of the kinetic Ising model

    We describe and analyze some novel approaches for studying the dynamics of Ising spin glass models. We first briefly consider the variational approach based on minimizing the Kullback-Leibler divergence between independent trajectories and the real ones, and note that this approach coincides with the mean field equations from the saddle point approximation to the generating functional only when the dynamics is defined through a logistic link function, which is the case for the kinetic Ising model with parallel update. We then spend the rest of the paper developing two ways of going beyond the saddle point approximation to the generating functional. In the first, we develop a variational perturbative approximation to the generating functional by expanding the action around a quadratic function of the local fields and conjugate local fields whose parameters are optimized. We derive analytical expressions for the optimal parameters and show that when the optimization is suitably restricted, we recover the mean field equations that are exact for fully asymmetric random couplings (Mézard and Sakellariou, 2011). Without this restriction, however, the results are different. We also describe an extended Plefka expansion in which, in addition to the magnetization, we also fix the correlation and response functions. Finally, we numerically study the performance of these approximations for Sherrington-Kirkpatrick type couplings for various coupling strengths, degrees of coupling symmetry and external fields. We show that the dynamical equations derived from the extended Plefka expansion outperform the others in all regimes, although they are computationally more demanding. The unconstrained variational approach does not perform well in the small coupling regime, while it approaches the dynamical TAP equations of Roudi and Hertz (2011) for strong couplings.
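    The parallel-update kinetic Ising dynamics with a logistic link referred to above is easy to simulate directly; a minimal sketch (couplings J and fields h are illustrative), checked against the exactly solvable non-interacting case where the magnetization is tanh(h):

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_update(s, J, h, rng):
    """One synchronous step: P(s_i(t+1) = +1) = 1/(1 + exp(-2*theta_i)),
    with effective field theta = J s(t) + h (logistic link)."""
    theta = J @ s + h
    p_up = 1.0 / (1.0 + np.exp(-2.0 * theta))
    return np.where(rng.random(s.shape) < p_up, 1.0, -1.0)

# Sanity check in the non-interacting case J = 0, where m = tanh(h) exactly.
N = 2000
J = np.zeros((N, N))
h = 0.5 * np.ones(N)
s = rng.choice([-1.0, 1.0], size=N)
ms = []
for _ in range(20):
    s = parallel_update(s, J, h, rng)
    ms.append(s.mean())
m = np.mean(ms)
```

    With nonzero (possibly asymmetric) J, the same loop produces the trajectories whose statistics the mean field, variational, and extended Plefka equations aim to approximate.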

    Network Optimization via Smooth Exact Penalty Functions Enabled by Distributed Gradient Computation

    This paper proposes a distributed algorithm for a network of agents to solve an optimization problem with a separable objective function and locally coupled constraints. Our strategy is based on reformulating the original constrained problem as the unconstrained optimization of a smooth (continuously differentiable) exact penalty function. Computing the gradient of this penalty function in a distributed way is challenging even under the separability assumptions on the original optimization problem. Our technical approach shows that the distributed computation of the gradient can be formulated as a system of linear algebraic equations defined by separable problem data. To solve it, we design an exponentially fast, input-to-state stable distributed algorithm that does not require the individual agent matrices to be invertible. We employ this strategy to compute the gradient of the penalty function at the current network state. Our distributed algorithmic solver for the original constrained optimization problem interconnects this estimation with the prescription that the agents follow the resulting direction. Numerical simulations illustrate the convergence and robustness properties of the proposed algorithm. Comment: 12 pages, 3 figures
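    To see what a smooth exact penalty looks like on a toy problem (this is Fletcher's classical construction for a linear equality constraint, used here purely as an illustration, not the paper's penalty): the multiplier is estimated from the gradient itself, and for a large enough penalty weight the unconstrained minimizer coincides exactly with the constrained one:

```python
import numpy as np

# Constrained problem: min 0.5*||x - p||^2  subject to  a^T x = b0.
p = np.array([1.0, 1.0])
a = np.array([1.0, 0.0])
b0 = 0.0
rho = 10.0

def phi(x):
    """Fletcher-style smooth exact penalty: f - lambda(x)*c(x) + (rho/2)*c(x)^2,
    with multiplier estimate lambda(x) = a^T grad f(x) / ||a||^2."""
    f = 0.5 * np.sum((x - p) ** 2)
    c = a @ x - b0
    lam = a @ (x - p) / (a @ a)
    return f - lam * c + 0.5 * rho * c ** 2

def num_grad(fun, x, h=1e-6):
    """Central finite-difference gradient (keeps the sketch self-contained)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2 * h)
    return g

x = np.array([3.0, -2.0])
for _ in range(500):
    x -= 0.1 * num_grad(phi, x)    # plain centralized gradient descent on phi
```

    The unconstrained minimizer of phi is (0, 1), the exact projection of p onto the constraint set; the paper's contribution is computing such penalty gradients distributively.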

    Network Design for Controllability Metrics

    In this paper, we consider the problem of tuning the edge weights of a networked system described by linear time-invariant dynamics. We assume that the topology of the underlying network is fixed and that the set of feasible edge weights is a given polytope. In this setting, we first consider a feasibility problem consisting of tuning the edge weights so that certain controllability properties are satisfied. The particular controllability properties under consideration are (i) a lower bound on the smallest eigenvalue of the controllability Gramian, which is related to the worst-case energy needed to control the system, and (ii) an upper bound on the trace of the Gramian inverse, which is related to the average control energy. In both cases, the edge-tuning problem can be stated as a feasibility problem involving bilinear matrix equalities, which we approach using a sequence of convex relaxations. Furthermore, we also address a design problem consisting of finding edge weights that satisfy the aforementioned controllability constraints while minimizing a convex cost function of the edge weights. In particular, we consider a sparsity-promoting cost function aiming to penalize the number of edges whose weights are modified. Finally, we verify our results with numerical simulations over many random network realizations as well as with an IEEE 14-bus power system topology.
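    The two Gramian-based metrics above are straightforward to evaluate for a given weight assignment (the system matrices below are illustrative): for a stable continuous-time system, the controllability Gramian solves the Lyapunov equation A W + W A^T + B B^T = 0, and the metrics are its smallest eigenvalue and the trace of its inverse:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable LTI system xdot = A x + B u (example matrices, not from the paper).
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
B = np.eye(2)

# Controllability Gramian: A W + W A^T + B B^T = 0.
W = solve_continuous_lyapunov(A, -B @ B.T)

lam_min = np.linalg.eigvalsh(W).min()    # (i) worst-case control energy metric
tr_Winv = np.trace(np.linalg.inv(W))     # (ii) average control energy metric
```

    The design problem in the paper then asks for feasible edge weights making lam_min large enough and tr_Winv small enough, which is what turns these evaluations into bilinear matrix constraints.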