Asymptotically Optimal Sampling Policy for Quickest Change Detection with Observation-Switching Cost
We consider the problem of quickest change detection (QCD) in a signal whose
observations are obtained using a set of actions, where switching from one
action to another incurs a cost. The objective is to design a detection
procedure consisting of a sampling policy that determines the sequence of
actions used to observe the signal and a stopping time that quickly detects
the change,
subject to a constraint on the average observation-switching cost. We propose
an open-loop sampling policy of finite window size and a generalized likelihood
ratio (GLR) Cumulative Sum (CuSum) stopping time for the QCD problem. We show
that the GLR CuSum stopping time is asymptotically optimal with a properly
designed sampling policy and formulate the design of this sampling policy as a
quadratic programming problem. We prove that it is sufficient to consider
policies of window size not more than one when designing policies of finite
window size and propose several algorithms that solve this optimization problem
with theoretical guarantees. For observation-dependent policies, we propose a
threshold-based stopping time and an observation-dependent sampling policy. We
present a method to design the observation-dependent sampling policy based on
open-loop sampling policies. Finally, we apply our approach to the problem of
QCD of a partially observed graph signal and empirically demonstrate the
performance of our proposed stopping times.
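The CuSum recursion underlying stopping times of this kind can be sketched as follows. This is the plain CuSum with known pre- and post-change densities, not the paper's GLR variant that also handles the sampling actions; the Gaussian mean-shift model and the threshold are illustrative assumptions:

```python
import random

def cusum_stopping_time(samples, llr, threshold):
    """Plain CuSum: W_n = max(W_{n-1}, 0) + llr(x_n); declare a change
    the first time W_n crosses the threshold.
    Returns (stopping index, final statistic); index is None if no alarm."""
    w = 0.0
    for n, x in enumerate(samples, start=1):
        w = max(w, 0.0) + llr(x)
        if w >= threshold:
            return n, w
    return None, w

# Illustrative model (an assumption, not from the paper): mean shift
# from N(0, 1) to N(1, 1), whose log-likelihood ratio is x - 0.5.
random.seed(0)
change_point = 50
data = [random.gauss(0, 1) for _ in range(change_point)] + \
       [random.gauss(1, 1) for _ in range(200)]
tau, _ = cusum_stopping_time(data, lambda x: x - 0.5, threshold=10.0)
```

With the positive post-change drift of the log-likelihood ratio, the statistic climbs steadily after the change point and the alarm fires shortly afterwards.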
Data-efficient quickest change detection
In the classical problem of quickest change detection, a decision maker observes a sequence of random variables. At some point in time, the distribution of the random variables changes abruptly. The objective is to detect this change in distribution with the minimum possible delay, subject to a constraint on the false alarm rate.
In many applications of quickest change detection, changes are rare and there is a cost associated with taking observations or acquiring data. For such applications, the classical quickest change detection model is no longer applicable. In this dissertation, we extend the classical formulations by adding a
penalty on the cost of observations used before the change point. The objective is to find a causal on-off observation control policy and a stopping time that minimize the detection delay, subject to constraints on the false
alarm rate and the cost of observations used before the change point. We show that two-threshold generalizations of the classical
single-threshold tests are asymptotically optimal for the proposed formulations. The nature of optimality is strong in the sense that the false alarm rates of the two-threshold tests are at least as good as the false alarm rates of their classical counterparts. Also, the delays of the two-threshold tests are within a constant of the delays of their classical counterparts. These results indicate that an arbitrary but fixed fraction of observations can be skipped before the change without any loss in asymptotic
performance. A detailed performance analysis of these algorithms is provided, and guidelines are given for the design of the proposed tests, on the basis of the performance analysis. An important result obtained through this analysis is that the two constraints, on the false alarm rate and the cost of observations used before the change, can be met independent of each other.
Numerical studies of these two-threshold algorithms also reveal that they have good trade-off curves, and perform significantly better than the approach of fractional sampling, where classical single threshold tests are used and the constraint on the cost of observations is met by skipping observations randomly.
We first study the problem in Bayesian and minimax settings and then extend the results to more general quickest change detection models, namely, a model with an unknown post-change distribution, a sensor network model, and a multi-channel model.
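A two-threshold on-off scheme of the kind described above can be sketched as follows. The update rules and the parameters mu (recovery rate while sleeping), h (undershoot cap), and A (alarm threshold) are illustrative assumptions in the spirit of the idea, not the dissertation's exact tests:

```python
import random

def de_cusum(samples, llr, A, h=2.0, mu=0.5):
    """Two-threshold on-off sketch: while the statistic is negative the
    observer sleeps (skips samples) and the statistic recovers at rate mu;
    otherwise it takes the sample and applies a CuSum-like update truncated
    below at -h. An alarm is declared when the statistic crosses A.
    Returns (stopping index or None, number of observations actually used)."""
    w, used = 0.0, 0
    for n, x in enumerate(samples, start=1):
        if w < 0:                       # sleep: this observation is skipped
            w = min(w + mu, 0.0)
        else:                           # take and use the observation
            used += 1
            w = max(w + llr(x), -h)
        if w >= A:
            return n, used
    return None, used

# Illustrative model (an assumption): mean shift from N(0, 1) to N(1, 1).
random.seed(0)
change_point = 50
data = [random.gauss(0, 1) for _ in range(change_point)] + \
       [random.gauss(1, 1) for _ in range(200)]
tau, used = de_cusum(data, lambda x: x - 0.5, A=10.0)
```

Before the change, the statistic drifts negative and the observer spends much of its time asleep, so only a fraction of the observations are used; after the change, the statistic climbs and the alarm fires with delay close to that of the always-on test.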
Data-Efficient Quickest Outlying Sequence Detection in Sensor Networks
A sensor network is considered where at each sensor a sequence of random
variables is observed. At each time step, a processed version of the
observations is transmitted from the sensors to a common node called the fusion
center. At some unknown point in time the distribution of observations at an
unknown subset of the sensor nodes changes. The objective is to detect the
outlying sequences as quickly as possible, subject to constraints on the false
alarm rate, the cost of observations taken at each sensor, and the cost of
communication between the sensors and the fusion center. Minimax formulations
are proposed for the above problem and algorithms are proposed that are shown
to be asymptotically optimal for the proposed formulations, as the false alarm
rate goes to zero. It is also shown, via numerical studies, that the proposed
algorithms perform significantly better than those based on fractional
sampling, in which the classical algorithms from the literature are used and
the constraint on the cost of observations is met by using the outcome of a
sequence of biased coin tosses, independent of the observation process.
Comment: Submitted to IEEE Transactions on Signal Processing, Nov 2014. arXiv
admin note: text overlap with arXiv:1408.474
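The fractional-sampling baseline mentioned above is easy to make concrete: a biased coin, tossed independently of the data, decides which observations are taken, and a plain CuSum runs on the observations that survive. The Gaussian model, threshold, and coin bias below are illustrative assumptions:

```python
import random

def fractional_sampling_cusum(samples, llr, threshold, q, coin):
    """Fractional-sampling baseline: a biased coin (heads with probability
    q), tossed independently of the data, decides whether each observation
    is taken; a plain CuSum runs on the observations that survive.
    Returns (stopping index or None, number of observations taken)."""
    w, used = 0.0, 0
    for n, x in enumerate(samples, start=1):
        if coin.random() < q:           # coin toss, independent of the data
            used += 1
            w = max(w, 0.0) + llr(x)
            if w >= threshold:
                return n, used
    return None, used

# Illustrative model (an assumption): mean shift from N(0, 1) to N(1, 1).
random.seed(0)
change_point = 50
data = [random.gauss(0, 1) for _ in range(change_point)] + \
       [random.gauss(1, 1) for _ in range(200)]
tau, used = fractional_sampling_cusum(
    data, lambda x: x - 0.5, threshold=8.0, q=0.5, coin=random.Random(1))
```

Because the skipping is blind to the data, post-change samples are discarded at the same rate as pre-change ones, which is why the data-adaptive schemes in these papers outperform this baseline.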
Data-Efficient Quickest Change Detection with On-Off Observation Control
In this paper we extend Shiryaev's quickest change detection formulation
by also accounting for the cost of observations used before the change point.
The observation cost is captured through the average number of observations
used in the detection process before the change occurs. The objective is to
select an on-off observation control policy, which decides whether or not to
take a given observation, along with the stopping time at which the change is
declared, so as to minimize the average detection delay, subject to constraints
on both the probability of false alarm and the observation cost. By considering
a Lagrangian relaxation of the constraint problem, and using dynamic
programming arguments, we obtain an \textit{a posteriori} probability based
two-threshold algorithm that is a generalized version of the classical Shiryaev
algorithm. We provide an asymptotic analysis of the two-threshold algorithm and
show that the algorithm is asymptotically optimal, i.e., the performance of the
two-threshold algorithm approaches that of the Shiryaev algorithm, for a fixed
observation cost, as the probability of false alarm goes to zero. We also show,
using simulations, that the two-threshold algorithm has good observation
cost-delay trade-off curves, and provides significant reduction in observation
cost as compared to the naive approach of fractional sampling, where samples
are skipped randomly. Our analysis reveals that, for practical choices of
constraints, the two thresholds can be set independent of each other: one based
on the constraint of false alarm and another based on the observation cost
constraint alone.
Comment: A preliminary version of this paper was presented at the ITA Workshop,
UCSD 201
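The a posteriori probability recursion behind a two-threshold scheme of this kind can be sketched as follows. The geometric prior, the Gaussian likelihood model, and the thresholds B and A are illustrative assumptions rather than the paper's designed values:

```python
import math
import random

def de_shiryaev(samples, lik_ratio, rho, B, A):
    """Two-threshold sketch on the posterior probability p_n that the
    change has already occurred, under a geometric prior with parameter
    rho. Below B the observation is skipped and p_n is updated by the
    prior alone; at or above B the sample is taken and the full Bayes
    update is applied. The change is declared once p_n >= A.
    Returns (stopping index or None, number of observations taken)."""
    p, used = 0.0, 0
    for n, x in enumerate(samples, start=1):
        p = p + (1.0 - p) * rho         # prior drift toward the change
        if p >= B:                      # take and incorporate the sample
            used += 1
            L = lik_ratio(x)
            p = p * L / (p * L + (1.0 - p))
        if p >= A:
            return n, used
    return None, used

# Illustrative model (an assumption): mean shift from N(0, 1) to N(1, 1),
# whose likelihood ratio is exp(x - 0.5); geometric prior rho = 0.01.
random.seed(0)
change_point = 50
data = [random.gauss(0, 1) for _ in range(change_point)] + \
       [random.gauss(1, 1) for _ in range(200)]
tau, used = de_shiryaev(data, lambda x: math.exp(x - 0.5),
                        rho=0.01, B=0.1, A=0.99)
```

The lower threshold B governs the observation cost (how long the posterior may idle before a sample is taken) and the upper threshold A governs the false alarm probability, which reflects the decoupling of the two constraints noted in the abstract.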
Non-Bayesian Quickest Detection with Stochastic Sample Right Constraints
In this paper, we study the design and analysis of an optimal detection scheme
for sensors that are deployed to monitor the change in the environment and are
powered by the energy harvested from the environment. In this type of
application, detection delay is of paramount importance. We model this problem
as a quickest change detection problem with a stochastic energy constraint. In
particular, a wireless sensor powered by renewable energy takes observations
from a random sequence, whose distribution will change at a certain unknown
time. Such a change implies events of interest. The energy in the sensor is
consumed by taking observations and is replenished randomly. The sensor cannot
take observations if there is no energy left in the battery. Our goal is to
design a power allocation scheme and a detection strategy to minimize the worst
case detection delay, which is the difference between the time when an alarm is
raised and the time when the change occurs. Two types of average run length
(ARL) constraints, namely an algorithm-level ARL constraint and a system-level
ARL constraint, are considered. We propose a low-complexity scheme in which the
energy allocation rule is to spend energy to take observations as long as the
battery is not empty and the detection scheme is the Cumulative Sum test. We
show that this scheme is optimal for the formulation with the algorithm-level
ARL constraint and is asymptotically optimal for the formulation with the
system-level ARL constraint.
Comment: 30 pages, 5 figures
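The low-complexity scheme described above (observe whenever the battery is non-empty, run a CuSum on the samples actually taken) can be sketched as follows. The battery capacity, the Bernoulli harvesting process, and the threshold are illustrative assumptions:

```python
import random

def energy_cusum(samples, energy_arrivals, llr, threshold, capacity=5):
    """Sketch of the low-complexity scheme: spend one unit of energy to
    take an observation whenever the battery is non-empty, run a CuSum on
    the observations actually taken, and raise an alarm at the threshold.
    energy_arrivals[n] is the (random) energy harvested in slot n.
    Returns (stopping index or None, number of observations taken)."""
    w, used, battery = 0.0, 0, 0
    for n, (x, e) in enumerate(zip(samples, energy_arrivals), start=1):
        battery = min(battery + e, capacity)   # random replenishment
        if battery > 0:                        # greedy: observe if possible
            battery -= 1
            used += 1
            w = max(w, 0.0) + llr(x)
            if w >= threshold:
                return n, used
    return None, used

# Illustrative model (an assumption): mean shift from N(0, 1) to N(1, 1);
# Bernoulli(0.7) energy harvesting, so not every slot can be observed.
random.seed(0)
change_point = 50
data = [random.gauss(0, 1) for _ in range(change_point)] + \
       [random.gauss(1, 1) for _ in range(200)]
arrivals = [1 if random.random() < 0.7 else 0 for _ in range(len(data))]
tau, used = energy_cusum(data, arrivals, lambda x: x - 0.5, threshold=8.0)
```

Since the average harvest rate here is below one unit per slot, the battery regularly empties and some slots go unobserved, which is exactly the regime in which the stochastic energy constraint binds.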