
    Distributed Constrained Recursive Nonlinear Least-Squares Estimation: Algorithms and Asymptotics

    This paper focuses on the problem of recursive nonlinear least squares parameter estimation in multi-agent networks, in which the individual agents observe sequentially over time an independent and identically distributed (i.i.d.) time-series consisting of a nonlinear function of the true but unknown parameter corrupted by noise. A distributed recursive estimator of the \emph{consensus} + \emph{innovations} type, namely $\mathcal{CIWNLS}$, is proposed, in which the agents update their parameter estimates at each observation sampling epoch in a collaborative way by simultaneously processing the latest locally sensed information~(\emph{innovations}) and the parameter estimates from other agents~(\emph{consensus}) in the local neighborhood, conforming to a pre-specified inter-agent communication topology. Under rather weak conditions on the connectivity of the inter-agent communication and a \emph{global observability} criterion, it is shown that the proposed algorithm leads to consistent parameter estimates at every network agent. Furthermore, under standard smoothness assumptions on the local observation functions, the distributed estimator is shown to yield order-optimal convergence rates, i.e., as far as the order of pathwise convergence is concerned, the local parameter estimates at each agent are as good as those of the optimal centralized nonlinear least squares estimator, which would require access to all the observations across all the agents at all times. In order to benchmark the performance of the proposed distributed $\mathcal{CIWNLS}$ estimator against that of the centralized nonlinear least squares estimator, the asymptotic normality of the estimate sequence is established and the asymptotic covariance of the distributed estimator is evaluated. Finally, simulation results are presented which illustrate and verify the analytical findings.
    Comment: 28 pages. Initial submission: Feb. 2016; revised: July 2016; accepted: September 2016. To appear in IEEE Transactions on Signal and Information Processing over Networks, Special Issue on Inference and Learning over Networks.
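    The core consensus + innovations update is easy to sketch. Below is a minimal NumPy illustration of a recursion of the kind the abstract describes, not the paper's exact $\mathcal{CIWNLS}$ estimator: the sensing functions, noise level, ring topology, and the decaying weight sequences a_t and b_t are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ring of N agents estimating a 2-D parameter theta*; each agent alone
# sees only one scalar nonlinear measurement, so local observability
# fails and collaboration is essential (illustrative setup).
N, d, T = 10, 2, 5000
theta_true = np.array([1.0, -0.5])

def f(i, theta):
    # Agent i's scalar nonlinear observation function (assumed form).
    return np.sin(theta[0] + 0.1 * i) + theta[1]

def grad_f(i, theta):
    return np.array([np.cos(theta[0] + 0.1 * i), 1.0])

x = np.zeros((N, d))  # local parameter estimates
for t in range(T):
    a = 1.0 / (t + 1)           # innovations weight, decays as 1/t
    b = 1.0 / (t + 1) ** 0.6    # consensus weight, decays more slowly
    x_new = x.copy()
    for i in range(N):
        y = f(i, theta_true) + 0.1 * rng.standard_normal()  # noisy sample
        consensus = sum(x[i] - x[j] for j in [(i - 1) % N, (i + 1) % N])
        innovation = grad_f(i, x[i]) * (y - f(i, x[i]))
        x_new[i] = x[i] - b * consensus + a * innovation
    x = x_new

print("mean estimate:", x.mean(axis=0), "true:", theta_true)
```

    The choice of a faster decay for the innovations weight than for the consensus weight mirrors the general structure of consensus + innovations schemes, where agreement across agents is enforced more aggressively than the reaction to fresh samples.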

    Distributed Big-Data Optimization via Block Communications

    We study distributed multi-agent large-scale optimization problems wherein the cost function is composed of a smooth, possibly nonconvex, sum-utility plus a DC (Difference-of-Convex) regularizer. We consider the scenario where the dimension of the optimization variables is so large that optimizing and/or transmitting the entire set of variables could cause unaffordable computation and communication overhead. To address this issue, we propose the first distributed algorithm whereby agents optimize and communicate only a portion of their local variables. The scheme hinges on successive convex approximation (SCA) to handle the nonconvexity of the objective function, coupled with a novel block-signal tracking scheme aimed at locally estimating the average of the agents' gradients. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Numerical results on a sparse regression problem show the effectiveness of the proposed algorithm and the impact of the block size on its practical convergence speed and communication cost.
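    A minimal sketch of the block-wise communicate-and-track mechanics, under strong simplifying assumptions: a complete graph with uniform averaging, local least-squares utilities in place of the nonconvex sum-utility plus DC regularizer, and a plain gradient step instead of the SCA subproblem. Agents touch only one block of x and of the gradient tracker y per round; the staleness of the inactive blocks (which the paper's block-signal tracking scheme handles carefully) is glossed over here, so this shows the mechanics rather than the paper's guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, B = 4, 20, 5                 # agents, variable dimension, block size
n_blocks = d // B

# Local least-squares utilities f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.standard_normal((30, d)) for _ in range(N)]
b = [Ai @ rng.standard_normal(d) for Ai in A]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

W = np.full((N, N), 1.0 / N)       # uniform averaging (complete graph)
x = np.zeros((N, d))
y = np.array([grad(i, x[i]) for i in range(N)])  # gradient trackers
step = 5e-4

for t in range(4000):
    k = t % n_blocks                       # block active this round
    sl = slice(k * B, (k + 1) * B)
    g_old = np.array([grad(i, x[i])[sl] for i in range(N)])
    # Exchange and average only the active block, then descend along
    # the tracked estimate of the network-average gradient.
    x[:, sl] = W @ x[:, sl] - step * y[:, sl]
    g_new = np.array([grad(i, x[i])[sl] for i in range(N)])
    y[:, sl] = W @ y[:, sl] + g_new - g_old   # block tracking update

x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("distance to centralized solution:",
      np.linalg.norm(x.mean(axis=0) - x_star))
```

    Only B of the d coordinates of x and y cross the network per round, which is exactly the communication saving the abstract targets.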

    On the genericity properties in networked estimation: Topology design and sensor placement

    In this paper, we consider networked estimation of linear, discrete-time dynamical systems monitored by a network of agents. In order to minimize the power requirement at the (possibly, battery-operated) agents, we require that the agents exchange information with their neighbors only \emph{once per dynamical system time-step}, in contrast to consensus-based estimation, where the agents exchange information until they reach a consensus. It can be verified that, with this restriction on information exchange, measurement fusion alone results in an unbounded estimation error at every agent that does not have an observable set of measurements in its neighborhood. To overcome this challenge, state-estimate fusion has been proposed to recover the system observability. However, we show that adding state-estimate fusion may not recover observability when the system matrix is structured-rank ($S$-rank) deficient. In this context, we characterize the state-estimate fusion and measurement fusion under both full $S$-rank and $S$-rank deficient system matrices.
    Comment: submitted for IEEE journal publication.
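    A toy instance of the single-exchange-per-time-step setting: two agents each measure one coordinate of a two-state system, so neither is locally observable from its own measurements, and one round of state-estimate averaging plus a local innovation keeps both errors small. The system matrix, gains, and weights below are illustrative assumptions (and the system is chosen stable for simplicity), not a design from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two agents, each observing one coordinate of a 2-state system.
A = np.array([[0.98, 0.20],
              [0.00, 0.95]])
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
K = [np.array([0.5, 0.0]), np.array([0.0, 0.5])]  # local innovation gains
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])                         # one exchange per step

x = rng.standard_normal(2)            # true state
xh = np.zeros((2, 2))                 # row i = agent i's estimate
for t in range(300):
    y = [(H[i] @ x).item() + 0.01 * rng.standard_normal() for i in range(2)]
    fused = W @ xh                    # single state-estimate exchange
    for i in range(2):
        innov = y[i] - (H[i] @ xh[i]).item()
        xh[i] = A @ fused[i] + K[i] * innov
    x = A @ x                         # true dynamics (noise-free here)

print("estimation errors:", [np.linalg.norm(x - xh[i]) for i in range(2)])
```

    Dropping the averaging step (replacing W by the identity) leaves each agent predicting an unobservable mode open-loop, which is the failure of measurement-only fusion the abstract describes.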

    Accelerated Consensus via Min-Sum Splitting

    We apply the Min-Sum message-passing protocol to solve the consensus problem in distributed optimization. We show that while the ordinary Min-Sum algorithm does not converge, a modified version of it known as Splitting yields convergence to the problem solution. We prove that a proper choice of the tuning parameters allows Min-Sum Splitting to yield subdiffusive accelerated convergence rates, matching the rates obtained by shift-register methods. The acceleration scheme embodied by Min-Sum Splitting for the consensus problem bears similarities with lifted Markov chain techniques and with multi-step first-order methods in convex optimization.
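    For context, the shift-register baseline whose rates Min-Sum Splitting matches is easy to sketch: a two-step consensus recursion with a momentum-like parameter. This is the comparison method, not Min-Sum Splitting itself, and the momentum value is a standard choice for this recursion rather than a parameter from the paper.

```python
import numpy as np

# Two-step (shift-register) accelerated consensus on a ring, whose
# subdiffusive rates Min-Sum Splitting is shown to match.
N = 50
W = np.zeros((N, N))
for i in range(N):                         # lazy random walk on a ring
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

lam2 = np.sort(np.linalg.eigvalsh(W))[-2]  # second-largest eigenvalue
# Standard momentum choice for this recursion (assumed, not from the paper).
gamma = (1 - np.sqrt(1 - lam2 ** 2)) / (1 + np.sqrt(1 - lam2 ** 2))

rng = np.random.default_rng(3)
x0 = rng.standard_normal(N)
x_prev, x = x0.copy(), W @ x0
for _ in range(300):
    x, x_prev = (1 + gamma) * (W @ x) - gamma * x_prev, x

print("max deviation from mean:", np.abs(x - x0.mean()).max())
```

    On a ring the plain recursion x(t+1) = W x(t) mixes in roughly N^2 iterations, while the two-step variant above needs roughly N, the "subdiffusive" speed-up referred to in the abstract.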

    Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging

    In this paper, we study distributed big-data nonconvex optimization in multi-agent networks. We consider the (constrained) minimization of the sum of a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a convex (possibly) nonsmooth regularizer. Our interest is in big-data problems wherein there is a large number of variables to optimize. If treated by means of standard distributed optimization algorithms, these large-scale problems may be intractable, due to the prohibitive local computation and communication burden at each node. We propose a novel distributed solution method whereby at each iteration agents optimize and then communicate (in an uncoordinated fashion) only a subset of their decision variables. To deal with the nonconvexity of the cost function, the novel scheme hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental to locally estimate gradient averages; and ii) a novel block-wise consensus-based protocol to perform local block-averaging operations and gradient tracking. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
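    The per-block convexification step has a simple closed form when the nonsmooth regularizer is the l1 norm and the surrogate is a linearization plus a proximal term. Below is a sketch of one agent's local block update under those assumptions; the names tau, lam, and y_blk are illustrative stand-ins, with y_blk playing the role of the tracked estimate of the network-average gradient on the active block.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def block_sca_step(x, y_blk, blk, tau=1.0, lam=0.1):
    """One agent's convexified update of a single block.

    Minimizes  <y_blk, z - x[blk]> + tau/2 * ||z - x[blk]||^2 + lam * ||z||_1
    over the active block only: a strongly convex surrogate whose
    minimizer is given in closed form by soft-thresholding.
    """
    x_new = x.copy()
    x_new[blk] = soft_threshold(x[blk] - y_blk / tau, lam / tau)
    return x_new

# Example: update the second of four 5-dimensional blocks.
x = np.ones(20)
y_blk = np.linspace(-1.0, 1.0, 5)
x = block_sca_step(x, y_blk, slice(5, 10))
print(x[5:10])
```

    In the full scheme, this local solution for the active block would then be averaged with neighbors' copies of the same block via the block-wise consensus protocol before the next iteration.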