Distributed Constrained Recursive Nonlinear Least-Squares Estimation: Algorithms and Asymptotics
This paper focuses on the problem of recursive nonlinear least squares
parameter estimation in multi-agent networks, in which the individual agents
observe sequentially over time an independent and identically distributed
(i.i.d.) time-series consisting of a nonlinear function of the true but unknown
parameter corrupted by noise. A distributed recursive estimator of the
\emph{consensus} + \emph{innovations} type is
proposed, in which the agents update their parameter estimates at each
observation sampling epoch in a collaborative way by simultaneously processing
the latest locally sensed information~(\emph{innovations}) and the parameter
estimates from other agents~(\emph{consensus}) in the local neighborhood
conforming to a pre-specified inter-agent communication topology. Under rather
weak conditions on the connectivity of the inter-agent communication and a
\emph{global observability} criterion, it is shown that at every network agent,
the proposed algorithm leads to consistent parameter estimates. Furthermore,
under standard smoothness assumptions on the local observation functions, the
distributed estimator is shown to yield order-optimal convergence rates, i.e.,
as far as the order of pathwise convergence is concerned, the local parameter
estimates at each agent are as good as the optimal centralized nonlinear least
squares estimator which would require access to all the observations across all
the agents at all times. To benchmark the performance of the proposed
distributed estimator against that of the centralized nonlinear
least squares estimator, the asymptotic normality of the estimate sequence is
established and the asymptotic covariance of the distributed estimator is
evaluated. Finally, simulation results are presented which illustrate and
verify the analytical findings.

Comment: 28 pages. Initial submission: Feb. 2016; revised: July 2016; accepted: September 2016. To appear in IEEE Transactions on Signal and Information Processing over Networks: Special Issue on Inference and Learning over Networks.
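The consensus + innovations recursion described in the abstract is straightforward to prototype. Below is a minimal sketch for a toy nonlinear sensing model; the ring network, the observation maps h_n, the noise level, and the gain schedules are all illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Minimal consensus + innovations sketch (illustrative assumptions, not the
# paper's exact algorithm). Four agents on a ring observe
# y_n(t) = h_n(theta*) + noise, where h_n is a local nonlinear map; no single
# agent is observable alone, but jointly the agents are.
rng = np.random.default_rng(0)
N, d, T = 4, 2, 20000
theta_star = np.array([1.0, -0.5])            # true but unknown parameter
A = rng.standard_normal((N, d))               # per-agent sensing directions

def h(n, theta):                              # local nonlinear observation map
    return np.tanh(A[n] @ theta)

def grad_h(n, theta):                         # gradient of h_n at theta
    return (1.0 - np.tanh(A[n] @ theta) ** 2) * A[n]

neighbors = {n: [(n - 1) % N, (n + 1) % N] for n in range(N)}  # ring topology
x = np.zeros((N, d))                          # local parameter estimates

for t in range(T):
    a_t = 1.0 / (t + 1)                       # innovation gain (decays faster)
    b_t = 0.25 / (t + 1) ** 0.5               # consensus gain (decays slower)
    x_new = x.copy()
    for n in range(N):
        y_nt = h(n, theta_star) + 0.1 * rng.standard_normal()  # noisy sample
        consensus = sum(x[n] - x[l] for l in neighbors[n])
        innovation = grad_h(n, x[n]) * (y_nt - h(n, x[n]))
        x_new[n] = x[n] - b_t * consensus + a_t * innovation
    x = x_new

print(x)   # each row should end up close to theta_star
```

The key design choice is the two-time-scale gain pair: the consensus gain decays more slowly than the innovation gain, so inter-agent agreement dominates asymptotically while each agent keeps incorporating its fresh local observations.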
Adaptation and learning over networks for nonlinear system modeling
In this chapter, we analyze nonlinear filtering problems in distributed
environments, e.g., sensor networks or peer-to-peer protocols. In these
scenarios, the agents in the environment receive measurements in a streaming
fashion, and they are required to estimate a common (nonlinear) model by
alternating local computations and communications with their neighbors. We
focus on the important distinction between single-task problems, where the
underlying model is common to all agents, and multitask problems, where each
agent might converge to a different model due to, e.g., spatial dependencies or
other factors. Currently, most of the literature on distributed learning in the
nonlinear case has focused on the single-task case, which may be a strong
limitation in real-world scenarios. After introducing the problem and reviewing
the existing approaches, we describe a simple kernel-based algorithm tailored
for the multitask case. We evaluate the proposal on a simulated benchmark task,
and we conclude by detailing currently open problems and lines of research.

Comment: To be published as a chapter in 'Adaptive Learning Methods for Nonlinear System Modeling', Elsevier Publishing, Eds. D. Comminiello and J.C. Principe (2018).
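To illustrate the kernel-based, multitask flavor of algorithm the chapter discusses, here is a minimal sketch. The shared Gaussian-kernel dictionary, the chain of three agents, the target functions, and the soft proximity term coupling neighboring models (instead of forcing exact consensus) are all assumptions for the example, not the chapter's exact method.

```python
import numpy as np

# Multitask kernel sketch (assumed setup, not the chapter's exact algorithm):
# three agents on a chain estimate related but distinct nonlinear functions.
rng = np.random.default_rng(1)
centers = rng.uniform(-2, 2, size=25)         # shared kernel dictionary

def phi(u, gamma=2.0):
    """Gaussian-kernel features of a scalar input over the dictionary."""
    return np.exp(-gamma * (u - centers) ** 2)

targets = [lambda u: np.sin(2 * u),           # hypothetical per-agent tasks:
           lambda u: np.sin(2 * u) + 0.3 * u, # related, but not identical
           lambda u: np.sin(2 * u) - 0.3 * u]
neighbors = {0: [1], 1: [0, 2], 2: [1]}       # chain topology
W = np.zeros((3, centers.size))               # per-agent model coefficients
mu, eta = 0.1, 0.05                           # step size, multitask coupling

for t in range(20000):
    for n in range(3):
        u = rng.uniform(-2, 2)
        d = targets[n](u) + 0.05 * rng.standard_normal()  # streaming sample
        f = phi(u)
        W[n] += mu * f * (d - f @ W[n])       # kernel LMS innovation step
        W[n] -= mu * eta * sum(W[n] - W[l] for l in neighbors[n])  # coupling

u_test = np.linspace(-2, 2, 200)
for n in range(3):
    pred = np.array([phi(u) @ W[n] for u in u_test])
    print(f"agent {n}: test MSE {np.mean((pred - targets[n](u_test))**2):.4f}")
```

The coupling strength eta controls the single-task/multitask trade-off: a large eta pushes all agents toward one common model, while eta = 0 recovers independent, non-cooperative kernel filters.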
Distributed Nonconvex Multiagent Optimization Over Time-Varying Networks
We study nonconvex distributed optimization in multiagent networks where the
communication between nodes is modeled as a time-varying sequence of arbitrary
digraphs. We introduce a novel broadcast-based distributed algorithmic
framework for the (constrained) minimization of the sum of a smooth (possibly
nonconvex and nonseparable) function, i.e., the agents' sum-utility, plus a
convex (possibly nonsmooth and nonseparable) regularizer. The latter is usually
employed to enforce some structure in the solution, typically sparsity. The
proposed method hinges on Successive Convex Approximation (SCA) techniques
coupled with i) a tracking mechanism instrumental to locally estimate the
gradients of agents' cost functions; and ii) a novel broadcast protocol to
disseminate information and distribute the computation among the agents.
Asymptotic convergence to stationary solutions is established. A key feature of
the proposed algorithm is that it neither requires the double-stochasticity of
the consensus matrices (but only column stochasticity) nor knowledge of the
graph sequence to implement. To the best of our knowledge, the proposed
framework is the first broadcast-based distributed algorithm for convex and
nonconvex constrained optimization over arbitrary, time-varying digraphs.
Numerical results show that our algorithm outperforms current schemes on both
convex and nonconvex problems.

Comment: Copyright 2001 SS&C. Published in the Proceedings of the 50th Annual Asilomar Conference on Signals, Systems, and Computers, Nov. 6-9, 2016, CA, USA.
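To make the ingredients concrete, the following sketch combines gradient tracking with column-stochastic mixing via a push-sum correction, in the spirit of the framework above. The local losses, the rotating digraph, the step size, and the specialization of the SCA surrogate to a plain linearization (so each local step reduces to a gradient step) are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Push-sum gradient-tracking sketch over a time-varying digraph (illustrative
# assumptions throughout; the SCA surrogate is specialized to a linearization,
# so the local computation is a simple gradient step).
rng = np.random.default_rng(2)
N, d = 4, 3
A = rng.standard_normal((N, 5, d))            # hypothetical local data:
b = rng.standard_normal((N, 5))               # f_i(x) = 0.5 ||A_i x - b_i||^2

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

def mixing(t):
    """Column-stochastic matrix of a rotating directed cycle with self-loops;
    only column stochasticity is required, not double stochasticity."""
    C = 0.5 * np.eye(N)
    for j in range(N):
        C[(j + 1 + t % 2) % N, j] += 0.5      # each sender splits its mass
    return C

x = np.zeros((N, d))                          # push-sum numerators
phi = np.ones(N)                              # push-sum weights
z = x / phi[:, None]                          # de-biased local estimates
g = np.array([grad(i, z[i]) for i in range(N)])
y = g.copy()                                  # trackers of the average gradient
alpha = 0.01

for t in range(5000):
    C = mixing(t)
    x = C @ (x - alpha * y)                   # step, then mix over the digraph
    phi = C @ phi
    z = x / phi[:, None]                      # push-sum ratio removes the bias
    g_new = np.array([grad(i, z[i]) for i in range(N)])
    y = C @ y + g_new - g                     # dynamic gradient tracking
    g = g_new

x_star = np.linalg.lstsq(A.reshape(-1, d), b.reshape(-1), rcond=None)[0]
print(np.max(np.abs(z - x_star)))             # should be small at convergence
```

Note that each column of the mixing matrix sums to one, which a node can guarantee simply by splitting its own mass among its out-neighbors, without knowing the rest of the graph; this is what removes the double-stochasticity requirement.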