The value of feedback for decentralized detection in large sensor networks
We consider the decentralized binary hypothesis testing problem in networks with feedback, where some or all of the sensors have access to compressed summaries of other sensors' observations. We study certain two-message feedback architectures, in which every sensor sends two messages to a fusion center, with the second message based on full or partial knowledge of the first messages of the other sensors. Under either a Neyman-Pearson or a Bayesian formulation, we show that the asymptotically optimal (in the limit of a large number of sensors) detection performance (as quantified by error exponents) does not benefit from the feedback messages.
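For intuition on the error-exponent performance measure used above, here is a minimal sketch (hypothetical signal model and quantizer threshold, not taken from the paper) computing the Chernoff information of a single sensor's 1-bit quantized observation, the quantity that governs the asymptotic Bayesian error exponent:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def chernoff(p, q, grid=1000):
    """Chernoff information between Bernoulli(p) and Bernoulli(q):
    C = -min_s log( p^s q^(1-s) + (1-p)^s (1-q)^(1-s) )."""
    return -min(
        math.log(p ** s * q ** (1 - s) + (1 - p) ** s * (1 - q) ** (1 - s))
        for s in (i / grid for i in range(1, grid))
    )

# Hypothetical model: X ~ N(+0.5, 1) under H1, N(-0.5, 1) under H0,
# quantized to one bit by thresholding at 0.
p = 1 - norm_cdf(-0.5)  # P(bit = 1 | H1)
q = 1 - norm_cdf(+0.5)  # P(bit = 1 | H0)
print(f"per-sensor Bayes error exponent: {chernoff(p, q):.4f}")
```

With n such sensors the Bayes error decays roughly as exp(-n·C), which is the sense in which architectures with and without feedback are compared by error exponents.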
Information bounds and quickest change detection in decentralized decision systems
The quickest change detection problem is studied in decentralized decision systems, where a set of sensors receive independent observations and send summary messages to the fusion center, which makes a final decision. In the system where the sensors do not have access to their past observations, the previously conjectured asymptotic optimality of a procedure with a monotone likelihood ratio quantizer (MLRQ) is proved. In the case of additive Gaussian sensor noise, if the signal-to-noise ratios (SNR) at some sensors are sufficiently high, this procedure can perform as well as the optimal centralized procedure that has access to all the sensor observations. Even if all SNRs are low, its detection delay will be at most π/2 − 1 ≈ 57% larger than that of the optimal centralized procedure. Next, in the system where the sensors have full access to their past observations, the first asymptotically optimal procedure in the literature is developed. Surprisingly, the procedure has the same asymptotic performance as the optimal centralized procedure, although it may perform poorly in some practical situations because of slow asymptotic convergence. Finally, it is shown that neither past message information nor the feedback from the fusion center improves the asymptotic performance in the simplest model.
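To make the quantized-message setting concrete, here is a hedged simulation sketch (all parameters hypothetical; this illustrates the architecture, not the paper's optimality proof): each sensor applies an MLRQ, which for a Gaussian mean shift reduces to thresholding its observation, and the fusion center runs a CUSUM test on the log-likelihood ratios of the binary messages. Note that π/2 − 1 ≈ 0.571, the source of the 57% figure above.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def norm_sf(x):
    """Standard normal survival function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Hypothetical model: each sensor sees N(0,1) noise before the change
# and N(mu,1) after it.
mu, n_sensors = 1.0, 5
change_point, horizon = 200, 400
t = mu / 2                    # MLRQ threshold (illustrative choice)
h = 12.0                      # CUSUM alarm threshold (illustrative)

p0, p1 = norm_sf(t), norm_sf(t - mu)   # P(bit = 1) pre-/post-change
llr1 = math.log(p1 / p0)               # per-bit LLR when bit = 1
llr0 = math.log((1 - p1) / (1 - p0))   # per-bit LLR when bit = 0

cusum, alarm = 0.0, None
for n in range(horizon):
    mean = mu if n >= change_point else 0.0
    bits = rng.normal(mean, 1.0, n_sensors) > t   # 1-bit messages
    cusum = max(0.0, cusum + sum(llr1 if b else llr0 for b in bits))
    if cusum > h:
        alarm = n
        break

print("change at", change_point, "alarm at", alarm)
```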
Asymptotic Optimality Theory For Decentralized Sequential Multihypothesis Testing Problems
The Bayesian formulation of sequentially testing hypotheses is
studied in the context of a decentralized sensor network system. In such a
system, local sensors observe raw observations and send quantized sensor
messages to a fusion center which makes a final decision when stopping taking
observations. Asymptotically optimal decentralized sequential tests are
developed from a class of "two-stage" tests that allows the sensor network
system to make a preliminary decision in the first stage and then optimize each
local sensor quantizer accordingly in the second stage. It is shown that the
optimal local quantizer at each local sensor in the second stage can be defined
as a maximin quantizer, which turns out to be a randomization of at most two
unambiguous likelihood quantizers (ULQs). We first present in detail our results
for the system with a single sensor and binary sensor messages, and then extend
to more general cases involving any finite alphabet sensor messages, multiple
sensors, or composite hypotheses. Comment: 14 pages, 1 figure, submitted to IEEE Trans. Inf. Theory.
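As background for the sequential setting (a minimal sketch with hypothetical message distributions, not the two-stage procedure developed in the paper), a fusion center receiving 1-bit quantized sensor messages can run Wald's sequential probability ratio test on them:

```python
import math
import random

random.seed(3)

# Hypothetical quantizer output: P(bit = 1) is p0 under H0, p1 under H1.
p0, p1 = 0.3, 0.7
A, B = math.log(99), math.log(1 / 99)  # Wald thresholds for ~1% error rates

llr, n = 0.0, 0
while B < llr < A:
    bit = random.random() < p1          # messages generated under H1
    llr += math.log(p1 / p0) if bit else math.log((1 - p1) / (1 - p0))
    n += 1

print("decide", "H1" if llr >= A else "H0", "after", n, "messages")
```

The two-stage tests in the abstract refine this basic scheme by making a preliminary decision and then re-optimizing each sensor's quantizer before the second stage.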
Bibliographic Review on Distributed Kalman Filtering
In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. In this paper, a bibliographical review on distributed Kalman filtering (DKF) is provided.
The paper contains a classification of the different approaches and methods involved in DKF. The applications of DKF are also discussed and explained separately. A comparison of the different approaches is briefly carried out. The focus of contemporary research is also addressed, with emphasis on the practical applications of the techniques. An exhaustive list of publications, linked directly or indirectly to DKF in the open literature, is compiled to provide an overall picture of the different developing aspects of this area.
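As a toy illustration of the fusion idea surveyed here (a scalar sketch with hypothetical noise variances, not any specific DKF algorithm from the review), two sensors' measurements can be combined in information form, which for this linear Gaussian model reproduces the centralized Kalman filter:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar random-walk state observed by two sensors.
q, r1, r2 = 0.01, 0.5, 1.0   # process / measurement noise variances
x = 0.0                      # true state
xhat, P = 0.0, 1.0           # fused estimate and its variance

for _ in range(50):
    x += rng.normal(0.0, math.sqrt(q))
    z1 = x + rng.normal(0.0, math.sqrt(r1))
    z2 = x + rng.normal(0.0, math.sqrt(r2))

    P += q  # predict (random walk: the state estimate is unchanged)
    # Information-form update: each sensor contributes (1/r, z/r);
    # summing the contributions reproduces the centralized filter.
    info = 1.0 / P + 1.0 / r1 + 1.0 / r2
    xhat = (xhat / P + z1 / r1 + z2 / r2) / info
    P = 1.0 / info

print(f"true {x:.3f}, fused {xhat:.3f}, variance {P:.4f}")
```

The fused variance ends up below what either sensor could achieve alone, which is the basic payoff the DKF literature distributes across the network.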
Keep Ballots Secret: On the Futility of Social Learning in Decision Making by Voting
We show that social learning is not useful in a model of team binary decision
making by voting, where each vote carries equal weight. Specifically, we
consider Bayesian binary hypothesis testing where agents have any
conditionally-independent observation distribution and their local decisions
are fused by any L-out-of-N fusion rule. The agents make local decisions
sequentially, with each allowed to use its own private signal and all precedent
local decisions. Though social learning generally occurs in that precedent
local decisions affect an agent's belief, optimal team performance is obtained
when all precedent local decisions are ignored. Thus, social learning is
futile, and secret ballots are optimal. This contrasts with typical studies of
social learning because we include a fusion center rather than concentrating on
the performance of the latest-acting agents.
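The L-out-of-N setting can be illustrated numerically. A hedged sketch (hypothetical Gaussian signal model and a simple grid search, not the paper's general proof): N agents threshold their private signals at a common value and a majority rule fuses the secret ballots; with equal priors and a symmetric model, the best common threshold is the one a lone Bayesian agent would use.

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Hypothetical model: each agent sees x ~ N(+mu,1) under H1, N(-mu,1)
# under H0 (equal priors) and votes 1 iff x > t; L-out-of-N fusion.
N, L, mu = 5, 3, 0.8

def tail(p, k):
    """P(Binomial(N, p) >= k)."""
    return sum(math.comb(N, i) * p ** i * (1 - p) ** (N - i)
               for i in range(k, N + 1))

def team_error(t):
    p_fa = 1 - norm_cdf(t + mu)   # agent votes 1 under H0
    p_md = norm_cdf(t - mu)       # agent votes 0 under H1
    # False alarm: >= L votes for H1 under H0; miss: < L under H1.
    return 0.5 * tail(p_fa, L) + 0.5 * (1 - tail(1 - p_md, L))

best_t = min((i / 100 for i in range(-200, 201)), key=team_error)
print(f"best common threshold {best_t:.2f}, "
      f"team error {team_error(best_t):.4f}")
```

The team error of the majority rule is well below any single agent's error, even though each agent votes from its private signal alone, consistent with the optimality of secret ballots.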