Theoretical Bounds in Minimax Decentralized Hypothesis Testing
Minimax decentralized detection is studied under two scenarios: with and
without a fusion center when the source of uncertainty is the Bayesian prior.
When there is no fusion center, the constraints in the network design are
determined. Both for a single decision maker and multiple decision makers, the
maximum loss in detection performance due to minimax decision making is
obtained. In the presence of a fusion center, the maximum loss of detection
performance between with- and without fusion center networks is derived
assuming that both networks are minimax robust. The results are finally
generalized. Comment: Submitted to IEEE Trans. on Signal Processing.
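The notion of minimax robustness against prior uncertainty can be illustrated with a toy numerical sketch (not from the paper; the Gaussian model and all parameters are illustrative). For a binary test between N(0,1) and N(1,1) with equal costs, the minimax decision rule is the Bayes rule under the least favorable prior, i.e., the prior that maximizes the Bayes risk:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bayes_risk(pi0, mu=1.0):
    """Bayes risk of the optimal likelihood-ratio test for
    H0: N(0,1) vs H1: N(mu,1) with prior pi0 on H0 and equal costs.
    The Bayes test decides H1 when x > t, t = mu/2 + ln(pi0/(1-pi0))/mu."""
    t = mu / 2.0 + math.log(pi0 / (1.0 - pi0)) / mu
    p_fa = 1.0 - phi(t)        # P0(decide H1): false alarm
    p_md = phi(t - mu)         # P1(decide H0): missed detection
    return pi0 * p_fa + (1.0 - pi0) * p_md

# Least favorable prior: the prior at which the Bayes risk is largest.
grid = [i / 1000.0 for i in range(1, 1000)]
pi_star = max(grid, key=bayes_risk)
print(round(pi_star, 3))  # symmetric problem -> least favorable prior 0.5
```

The Bayes risk is concave in the prior, and by the symmetry of this example it peaks at 1/2; the minimax test is the Bayes test designed for that worst-case prior. The bounds in the paper quantify how much such worst-case design can cost relative to a test matched to the true prior.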
Detection of an anomalous cluster in a network
We consider the problem of detecting whether or not, in a given sensor
network, there is a cluster of sensors which exhibit an "unusual behavior."
Formally, suppose we are given a set of nodes and attach a random variable to
each node. We observe a realization of this process and want to decide between
the following two hypotheses: under the null, the variables are i.i.d. standard
normal; under the alternative, there is a cluster of variables that are i.i.d.
normal with positive mean and unit variance, while the rest are i.i.d. standard
normal. We also address surveillance settings where each sensor in the network
collects information over time. The resulting model is similar, now with a time
series attached to each node. We again observe the process over time and want
to decide between the null, where all the variables are i.i.d. standard normal,
and the alternative, where there is an emerging cluster of i.i.d. normal
variables with positive mean and unit variance. The growth models used to
represent the emerging cluster are quite general and, in particular, include
cellular automata used in modeling epidemics. In both settings, we consider
classes of clusters that are quite general, for which we obtain a lower bound
on their respective minimax detection rate and show that some form of scan
statistic, by far the most popular method in practice, achieves that same rate
to within a logarithmic factor. Our results are not limited to the normal
location model, but generalize to any one-parameter exponential family when the
anomalous clusters are large enough. Comment: Published at http://dx.doi.org/10.1214/10-AOS839 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
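A scan statistic of the kind referred to above can be sketched in a one-dimensional toy version (illustrative only; the paper's cluster classes and thresholds are more general): scan all discrete intervals, normalize each interval sum by the square root of its length, and take the maximum.

```python
import math, random

def scan_statistic(x):
    """Scan over all discrete intervals: max of sum(x[i:j]) / sqrt(j - i).
    Under H0 (i.i.d. N(0,1)) this maximum is of order sqrt(2 log n); an
    interval of variables with elevated mean pushes it above that level."""
    n = len(x)
    best = -float("inf")
    for i in range(n):
        s = 0.0
        for j in range(i + 1, n + 1):
            s += x[j - 1]
            best = max(best, s / math.sqrt(j - i))
    return best

random.seed(0)
n = 400
null = [random.gauss(0, 1) for _ in range(n)]
alt = list(null)
for k in range(150, 170):   # anomalous interval: mean shifted up by 1.5
    alt[k] += 1.5
print(scan_statistic(alt) >= scan_statistic(null))  # True: a positive bump never lowers any interval's statistic
```

The quadratic scan over all intervals is the brute-force version; in practice one scans a reduced (e.g., dyadic) family of candidate clusters, which is where the logarithmic factor mentioned in the abstract comes into play.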
Mean Estimation from One-Bit Measurements
We consider the problem of estimating the mean of a symmetric log-concave
distribution under the constraint that only a single bit per sample from this
distribution is available to the estimator. We study the mean squared error as
a function of the sample size (and hence the number of bits). We consider three
settings: first, a centralized setting, where an encoder observing the full
sample may release one bit per sample, and for which there is no asymptotic
penalty for quantization; second, an adaptive setting in which each bit is a function of
the current observation and previously recorded bits, where we show that the
optimal relative efficiency compared to the sample mean is precisely the
efficiency of the median; lastly, we show that in a distributed setting where
each bit is only a function of a local sample, no estimator can achieve optimal
efficiency uniformly over the parameter space. We additionally complement our
results in the adaptive setting by showing that one round of adaptivity
is sufficient to achieve the optimal mean squared error.
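The adaptive setting, where each bit depends on the current observation and previously recorded bits, admits a simple illustrative scheme (a sketch, not the paper's estimator): transmit the sign of the current sample relative to the running estimate and update that estimate by a Robbins-Monro recursion, which drives it toward the median. For a symmetric distribution the median equals the mean, which is why the median's relative efficiency appears in the abstract.

```python
import random

def one_bit_mean(samples, step=1.0):
    """Adaptive one-bit estimation sketch: bit_k = sign(X_k - theta_k) is a
    function of the current observation and the running estimate (itself
    determined by previously recorded bits). The Robbins-Monro update
    theta += (step/k) * bit_k converges to the median, which equals the
    mean for a symmetric distribution."""
    theta = 0.0
    for k, x in enumerate(samples, start=1):
        bit = 1.0 if x > theta else -1.0   # the single transmitted bit
        theta += (step / k) * bit
    return theta

random.seed(1)
samples = [random.gauss(2.0, 1.0) for _ in range(20000)]
est = one_bit_mean(samples)
print(round(est, 2))  # close to the true mean 2.0
```

The step-size constant matters in practice: it controls the asymptotic variance of the recursion, and the paper's efficiency comparison against the sample mean is exactly about how much such one-bit schemes must give up.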
Quickest Change Detection of a Markov Process Across a Sensor Array
Recent attention in quickest change detection in the multi-sensor setting has
been on the case where the densities of the observations change at the same
instant at all the sensors due to the disruption. In this work, a more general
scenario is considered where the change propagates across the sensors, and its
propagation can be modeled as a Markov process. A centralized, Bayesian version
of this problem, with a fusion center that has perfect information about the
observations and a priori knowledge of the statistics of the change process, is
considered. The problem of minimizing the average detection delay subject to
false alarm constraints is formulated as a partially observable Markov decision
process (POMDP). Insights into the structure of the optimal stopping rule are
presented. In the limiting case of rare disruptions, we show that the structure
of the optimal test reduces to thresholding the a posteriori probability of the
hypothesis that no change has happened. We establish the asymptotic optimality
(in the vanishing false alarm probability regime) of this threshold test under
a certain condition on the Kullback-Leibler (K-L) divergence between the post-
and the pre-change densities. In the special case of near-instantaneous change
propagation across the sensors, this condition reduces to the mild condition
that the K-L divergence be positive. Numerical studies show that this low
complexity threshold test results in a substantial improvement in performance
over naive tests such as a single-sensor test or a test that wrongly assumes
that the change propagates instantaneously. Comment: 40 pages, 5 figures, Submitted to IEEE Trans. Inform. Theory.
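The threshold test on the a posteriori probability of "no change" can be illustrated in its classical single-sensor (Shiryaev) form, a stylized sketch rather than the paper's multi-sensor POMDP solution; the Gaussian densities, geometric change-time prior, and all parameters below are illustrative.

```python
import math

def shiryaev_stop(xs, rho=0.01, mu=1.0, threshold=0.99):
    """Single-sensor Shiryaev-type test: recursively track the posterior
    probability p_k that the change has already occurred, and stop the
    first time p_k crosses the threshold -- equivalently, when the
    posterior of 'no change yet' falls below 1 - threshold.
    Pre-change: N(0,1); post-change: N(mu,1); geometric(rho) change time."""
    p = 0.0
    for k, x in enumerate(xs, start=1):
        p_pred = p + (1.0 - p) * rho              # change may occur at this step
        lr = math.exp(mu * x - mu * mu / 2.0)     # likelihood ratio f1(x)/f0(x)
        p = p_pred * lr / (p_pred * lr + (1.0 - p_pred))
        if p >= threshold:
            return k                              # declare the change
    return None                                   # no alarm raised

# Stylized data: samples sit exactly at the pre-/post-change means.
xs = [0.0] * 200 + [1.0] * 100
print(shiryaev_stop(xs))  # stops shortly after the change at time 200
```

Before the change the posterior settles near a small fixed point, and after it the posterior odds grow geometrically at a rate tied to the K-L divergence between post- and pre-change densities, which is the quantity appearing in the abstract's asymptotic-optimality condition.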