Learning and Prediction Theory of Distributed Least Squares
With the rapid development of sensor and network technologies, distributed
estimation has attracted increasing attention, owing to its advantages in
communication security, scalability, and safety and privacy. In this paper, we
consider a least-squares (LS)-based distributed algorithm, built over a sensor
network, to estimate an unknown parameter vector of
a dynamical system, where each sensor in the network has partial information
only but is allowed to communicate with its neighbors. Our main task is to
generalize the well-known theoretical results on the traditional LS to the
current distributed case by establishing both the upper bound of the
accumulated regrets of the adaptive predictor and the convergence of the
distributed LS estimator, with the following key features compared with the
existing literature on distributed estimation: Firstly, our theory does not
require the independence, stationarity, or Gaussianity assumptions previously
imposed on the system signals, and hence is applicable to stochastic systems with feedback
control. Secondly, the cooperative excitation condition introduced and used in
this paper for the convergence of the distributed LS estimate is the weakest
possible one, showing that even when no individual sensor can estimate
the unknown parameter by the traditional LS, the whole network can still
fulfill the estimation task by the distributed LS. Moreover, our theoretical
analysis is also different from the existing ones for distributed LS, because
it is an integration of several powerful techniques including stochastic
Lyapunov functions, martingale convergence theorems, and some inequalities on
convex combinations of nonnegative definite matrices.

Comment: 14 pages, submitted to IEEE Transactions on Automatic Control
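To make the cooperative excitation idea concrete, the following is a minimal numerical sketch, not the paper's exact algorithm: two sensors each observe a regression with the same unknown parameter vector, but each sensor's regressors excite only one coordinate, so neither node can identify the parameter by its local LS alone. The nodes exchange and average their LS information pairs with their neighbors (a simple consensus-style combination; the weight matrix and regularization below are illustrative assumptions), after which every node recovers the full parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0])   # unknown parameter to be estimated
T = 500                         # number of time steps

# Each sensor excites only one coordinate of theta, so no single
# sensor's regressors are persistently exciting on their own; only
# the network as a whole satisfies a cooperative excitation condition.
def phi(i):
    r = rng.standard_normal()
    return np.array([r, 0.0]) if i == 0 else np.array([0.0, r])

# Doubly stochastic combination weights for the 2-node network
# (illustrative choice, not taken from the paper).
A = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Each node keeps an LS information pair (S_i, b_i); at every step
# it averages its neighbors' pairs and then adds its own observation.
S = [1e-3 * np.eye(2) for _ in range(2)]   # small regularization
b = [np.zeros(2) for _ in range(2)]

for t in range(T):
    phis = [phi(i) for i in range(2)]
    ys = [p @ theta + 0.1 * rng.standard_normal() for p in phis]
    S_new, b_new = [], []
    for i in range(2):
        Si = sum(A[i, j] * S[j] for j in range(2)) + np.outer(phis[i], phis[i])
        bi = sum(A[i, j] * b[j] for j in range(2)) + phis[i] * ys[i]
        S_new.append(Si)
        b_new.append(bi)
    S, b = S_new, b_new

# Each node's LS estimate: theta_i = S_i^{-1} b_i
est = [np.linalg.solve(S[i], b[i]) for i in range(2)]
```

Running the sketch, both nodes' estimates approach the true parameter, even though each node's own measurements constrain only one of the two coordinates; the second coordinate's information reaches each node only through the combination step.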