Large Deviations Performance of Consensus+Innovations Distributed Detection with Non-Gaussian Observations
We establish the large deviations asymptotic performance (error exponent) of
consensus+innovations distributed detection over random networks with generic
(non-Gaussian) sensor observations. At each time instant, sensors 1) combine
their decision variables with those of their neighbors (consensus) and 2)
assimilate their new observations (innovations). This paper shows for general
non-Gaussian distributions that consensus+innovations distributed detection
exhibits a phase transition behavior with respect to the network degree of
connectivity. Above a threshold, distributed is as good as centralized, with
the same optimal asymptotic detection performance, but, below the threshold,
distributed detection is suboptimal with respect to centralized detection. We
determine this threshold and quantify the performance loss below threshold.
Finally, we show the dependence of the threshold and performance on the
distribution of the observations: distributed detectors over the same random
network, but with different observations' distributions, for example, Gaussian,
Laplace, or quantized, may have different asymptotic performance, even when the
corresponding centralized detectors have the same asymptotic performance.
Comment: 30 pages, journal, submitted Nov 17, 2011; revised Apr 3, 201
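The consensus+innovations update described in this abstract can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: it assumes a fully connected network with uniform weights and a Gaussian shift-in-mean test, whereas the paper's phase-transition result concerns sparser random networks and general non-Gaussian observations; the `llr` function and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n, T = 10, 200                 # sensors and time steps (assumed values)
W = np.full((n, n), 1.0 / n)   # uniform consensus weights (fully connected network)

def llr(y):
    # Hypothetical shift-in-mean test: N(1, 1) under H1 vs N(0, 1) under H0,
    # so the per-sample log-likelihood ratio is y - 0.5.
    return y - 0.5

x = np.zeros(n)                # decision variables
for _ in range(T):
    y = rng.normal(1.0, 1.0, size=n)   # fresh observations (data generated under H1)
    x = W @ x + llr(y)                 # 1) consensus step, then 2) innovations step

decisions = x / T > 0          # threshold the time-averaged statistic at zero
print(decisions.all())
```

Each sensor's time-averaged statistic concentrates near the mean log-likelihood ratio, so under H1 every sensor decides correctly; probing the connectivity threshold itself would require a sparse, randomly varying W.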
Consensus and Products of Random Stochastic Matrices: Exact Rate for Convergence in Probability
Distributed consensus and other linear systems with stochastic system
matrices emerge in various settings, such as opinion formation in social
networks, rendezvous of robots, and distributed inference in sensor networks.
The matrices are often random, due to, e.g., random packet dropouts in
wireless sensor networks. Key in analyzing the performance of such systems is
studying the convergence of the random matrix products. In this paper, we
find the exact exponential rate for the convergence in probability of these
products as time grows large, under the assumption that the matrices are
symmetric and independent identically distributed (i.i.d.) in time. Further,
for commonly used random models such as gossip and link failure, we show
that the rate is found by solving a min-cut problem and is, hence, easily
computable. Finally, we apply our results to optimally allocate the sensors'
transmission power in consensus+innovations distributed detection.
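As an illustration of the object studied here, the sketch below builds a running product of i.i.d. symmetric stochastic matrices drawn from a pairwise-gossip model and estimates its exponential convergence rate empirically. The paper's min-cut characterization gives this rate in closed form; the toy below does not compute it. The network size, step count, and `gossip_matrix` helper are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n, steps = 8, 800
J = np.full((n, n), 1.0 / n)   # limit of the product: the averaging matrix

def gossip_matrix():
    # Pairwise gossip: one randomly chosen pair averages its values.
    # The resulting matrix is symmetric and doubly stochastic, a valid
    # i.i.d. draw for this random model.
    i, j = rng.choice(n, size=2, replace=False)
    W = np.eye(n)
    W[i, i] = W[j, j] = W[i, j] = W[j, i] = 0.5
    return W

P = np.eye(n)
for _ in range(steps):
    P = gossip_matrix() @ P    # running product of the random matrices

err = np.linalg.norm(P - J, 2)
rate = -np.log(err) / steps    # empirical exponential decay rate
print(err < 1e-6, rate > 0)
```

The spectral-norm distance to the averaging matrix decays exponentially in the number of factors, which is the convergence phenomenon whose exact rate the paper characterizes.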
Distributed Constrained Recursive Nonlinear Least-Squares Estimation: Algorithms and Asymptotics
This paper focuses on the problem of recursive nonlinear least squares
parameter estimation in multi-agent networks, in which the individual agents
observe sequentially over time an independent and identically distributed
(i.i.d.) time-series consisting of a nonlinear function of the true but unknown
parameter corrupted by noise. A distributed recursive estimator of the
\emph{consensus} + \emph{innovations} type is proposed, in which the agents
update their parameter estimates at each
observation sampling epoch in a collaborative way by simultaneously processing
the latest locally sensed information~(\emph{innovations}) and the parameter
estimates from other agents~(\emph{consensus}) in the local neighborhood
conforming to a pre-specified inter-agent communication topology. Under rather
weak conditions on the connectivity of the inter-agent communication and a
\emph{global observability} criterion, it is shown that at every network agent,
the proposed algorithm leads to consistent parameter estimates. Furthermore,
under standard smoothness assumptions on the local observation functions, the
distributed estimator is shown to yield order-optimal convergence rates, i.e.,
as far as the order of pathwise convergence is concerned, the local parameter
estimates at each agent are as good as the optimal centralized nonlinear least
squares estimator which would require access to all the observations across all
the agents at all times. In order to benchmark the performance of the proposed
distributed estimator with that of the centralized nonlinear
least squares estimator, the asymptotic normality of the estimate sequence is
established and the asymptotic covariance of the distributed estimator is
evaluated. Finally, simulation results are presented which illustrate and
verify the analytical findings.
Comment: 28 pages. Initial Submission: Feb. 2016, Revised: July 2016,
Accepted: September 2016. To appear in IEEE Transactions on Signal and
Information Processing over Networks: Special Issue on Inference and Learning
over Networks.
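A minimal sketch of a consensus + innovations estimation update of the kind described above, under simplifying assumptions: a scalar parameter, linear local observation functions (the paper treats nonlinear ones), a complete communication graph, and constant gains `alpha` and `beta` (the paper uses suitably decaying weight sequences to obtain consistency). Agents with a zero observation gain are locally unobservable and recover the parameter only through cooperation, illustrating the role of the global observability condition.

```python
import numpy as np

rng = np.random.default_rng(2)

n, T = 5, 5000
theta_star = 1.0                            # true (unknown) scalar parameter
h = np.array([1.0, 0.0, 0.0, 1.0, 0.0])     # only agents 0 and 3 sense theta directly
alpha, beta = 0.01, 0.05                    # innovations and consensus gains (assumed)

theta = np.zeros(n)                         # local parameter estimates
for _ in range(T):
    y = h * theta_star + 0.1 * rng.normal(size=n)        # noisy local observations
    consensus = (n - 1) * theta - (theta.sum() - theta)  # sum_j (theta_i - theta_j)
    # combine neighbors' estimates (consensus) and the newest local
    # measurement residual (innovations) in one recursive update
    theta = theta - beta * consensus + alpha * h * (y - h * theta)

print(np.allclose(theta, theta_star, atol=0.05))
```

Despite three of the five agents never observing the parameter, every local estimate settles near the truth, which is the qualitative content of the consistency result.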
Diffusion-Based Adaptive Distributed Detection: Steady-State Performance in the Slow Adaptation Regime
This work examines the close interplay between cooperation and adaptation for
distributed detection schemes over fully decentralized networks. The combined
attributes of cooperation and adaptation are necessary to enable networks of
detectors to continually learn from streaming data and to continually track
drifts in the state of nature when deciding in favor of one hypothesis or
another. The results in the paper establish a fundamental scaling law for the
steady-state probabilities of miss-detection and false-alarm in the slow
adaptation regime, when the agents interact with each other according to
distributed strategies that employ small constant step-sizes. The latter are
critical to enable continuous adaptation and learning. The work establishes
three key results. First, it is shown that the output of the collaborative
process at each agent has a steady-state distribution. Second, it is shown that
this distribution is asymptotically Gaussian in the slow adaptation regime of
small step-sizes. And third, by carrying out a detailed large deviations
analysis, closed-form expressions are derived for the decaying rates of the
false-alarm and miss-detection probabilities. Interesting insights are gained.
In particular, it is verified that as the step-size decreases, the error
probabilities are driven to zero exponentially fast as functions of the
inverse step-size, and that the error exponents increase linearly in the
number of agents. It is also verified that the scaling laws governing errors
of detection and errors of estimation over networks behave very differently,
with the former decaying exponentially in the inverse step-size, while the
latter decays only linearly with the step-size. It is shown that the
cooperative strategy
allows each agent to reach the same detection performance, in terms of
detection error exponents, as a centralized stochastic-gradient solution.
Comment: The paper will appear in IEEE Trans. Inf. Theory.
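The adapt-then-combine structure underlying diffusion strategies can be sketched as below. This is a toy with assumed values (uniform combination weights, a hypothetical statistic with mean +0.5 under H1 and -0.5 under H0), showing only how a small constant step-size lets each agent track the active hypothesis in steady state, not the large deviations analysis itself.

```python
import numpy as np

rng = np.random.default_rng(3)

n, mu, T = 10, 0.01, 5000      # agents, step-size, time steps (assumed values)
A = np.full((n, n), 1.0 / n)   # uniform doubly stochastic combination weights

x = np.zeros(n)                # each agent's running detection statistic
for _ in range(T):
    d = 0.5 + rng.normal(size=n)   # streaming statistics, mean +0.5 under H1
    psi = x + mu * (d - x)         # adapt: small constant step-size update
    x = A @ psi                    # combine: diffuse with the neighbors

# In steady state each statistic hovers near +0.5 with O(sqrt(mu)) fluctuations,
# so thresholding at zero yields the H1 decision at every agent.
print((x > 0).all())
```

Shrinking `mu` tightens the steady-state distribution around the mean, which is why the miss-detection and false-alarm probabilities vanish exponentially in the inverse step-size.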