
    Dynamic anomaly detection in sensor networks

    In the problem of quickest change detection, a sequence of random variables is observed sequentially by a decision maker. At some unknown time instant, the emergence of an anomaly leads to a change in the distribution of the observations. The goal in quickest change detection is to detect this change as quickly as possible, subject to constraints on the frequency of false alarm events. One important application of the theory of quickest change detection is anomaly detection in sensor networks used to monitor engineering systems. Sensor-network detection problems can vary significantly depending on how the anomaly evolves spatially across the network as time progresses. Settings involving static anomalies, i.e., anomalies that are perceived by all sensors concurrently and that affect sensors persistently, have been extensively studied in the quickest change detection literature. In addition, semi-dynamic settings, in which anomalies affect sensors at different time instants, albeit persistently, have recently received more attention. In this dissertation, our goal is to study the problem of dynamic anomaly detection in sensor networks, i.e., the case where anomalies may not affect sensors persistently, but may move around the network, affecting different sets of sensors over time. The objective is to design anomaly detection procedures that are provably optimal with respect to delay-false alarm trade-off formulations. We study the quickest dynamic anomaly detection problem under multiple settings by imposing different assumptions on the spatial evolution of the anomaly. In particular, we consider the case where the anomaly evolves according to a discrete-time Markov chain model, for which we develop asymptotically optimal procedures and compare them with more computationally tractable heuristic detection algorithms that require less model knowledge; the Markov model accommodates anomalies whose size is either constant or time-varying. In addition, we study the worst-path dynamic anomaly detection setting, where the trajectory of the anomaly is assumed to be unknown and deterministic, and candidate detection procedures are evaluated according to the anomaly path that maximizes their detection delay. We consider the worst-path setting both when the anomaly affects a fixed number of sensors and when the size of the anomaly changes with time. For these two worst-path settings, we establish that algorithms from the quickest change detection literature can be modified to yield provably asymptotically optimal, and in some cases exactly optimal, procedures. A detailed performance analysis of the proposed algorithms is conducted, and concise guidelines for the design of the proposed tests are provided. Numerical studies of the proposed detection schemes are presented for all studied settings and for a variety of test cases, such as different network sizes, probability distributions, and degrees of model knowledge. Finally, we outline problems of interest for future work, such as extending the proposed algorithms and techniques to settings where model knowledge is limited.
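    As a rough illustration of the kind of low-complexity heuristic the abstract mentions (one that needs no knowledge of the anomaly's trajectory or of any Markov transition structure), the Python sketch below runs a CUSUM-style recursion on a uniform mixture of per-sensor likelihood ratios, for a single anomaly that moves between sensors. This is a minimal sketch under assumed Gaussian observations; the mean shift mu, the threshold, and all other numerical values are choices made only for this example and are not taken from the dissertation.

```python
import numpy as np

# Minimal sketch (not the dissertation's procedures): a CUSUM-style heuristic
# for a dynamic anomaly that affects one unknown, moving sensor at a time.
# Pre-change every sensor observes N(0, 1); the affected sensor observes
# N(mu, 1). A uniform mixture over the unknown anomaly location folds the
# per-sensor evidence into a single recursion. All parameter values below
# are illustrative assumptions.

def dynamic_mixture_cusum(observations, mu=2.0, threshold=10.0):
    """Return the first time the statistic crosses the threshold, else None.

    observations: array of shape (T, num_sensors)
    threshold:    larger values trade a longer delay for fewer false alarms
    """
    num_sensors = observations.shape[1]
    w = 0.0
    for t, x in enumerate(observations):
        # Per-sensor log-likelihood ratio of N(mu, 1) against N(0, 1).
        llr = mu * x - 0.5 * mu ** 2
        # Uniform mixture over the unknown location of the anomaly.
        mix = np.logaddexp.reduce(llr) - np.log(num_sensors)
        w = max(0.0, w + mix)          # CUSUM-style reflection at zero
        if w >= threshold:
            return t                   # raise the alarm
    return None


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, K, change_point, mu = 200, 5, 100, 2.0
    x = rng.standard_normal((T, K))
    # After the change, the anomaly hops to a random sensor at every step.
    path = rng.integers(0, K, size=T - change_point)
    x[np.arange(change_point, T), path] += mu
    print("alarm at t =", dynamic_mixture_cusum(x, mu=mu, threshold=10.0))
```

    Raising the threshold lowers the false alarm rate at the cost of a longer detection delay, which is the delay-false alarm trade-off the dissertation formalizes.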

    Data-Efficient Quickest Change Detection with On-Off Observation Control

    In this paper we extend Shiryaev's quickest change detection formulation by also accounting for the cost of observations used before the change point. The observation cost is captured through the average number of observations used in the detection process before the change occurs. The objective is to select an on-off observation control policy, which decides whether or not to take a given observation, along with the stopping time at which the change is declared, so as to minimize the average detection delay, subject to constraints on both the probability of false alarm and the observation cost. By considering a Lagrangian relaxation of the constrained problem, and using dynamic programming arguments, we obtain an a posteriori probability based two-threshold algorithm that is a generalized version of the classical Shiryaev algorithm. We provide an asymptotic analysis of the two-threshold algorithm and show that it is asymptotically optimal, i.e., its performance approaches that of the Shiryaev algorithm, for a fixed observation cost, as the probability of false alarm goes to zero. We also show, using simulations, that the two-threshold algorithm has good observation cost-delay trade-off curves and provides a significant reduction in observation cost compared to the naive approach of fractional sampling, where samples are skipped at random. Our analysis reveals that, for practical choices of constraints, the two thresholds can be set independently of each other: one based on the false alarm constraint alone and the other based on the observation cost constraint alone.
    Comment: A preliminary version of this paper was presented at the ITA Workshop, UCSD 201
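    As a concrete sketch of the two-threshold procedure the abstract describes, the Python snippet below maintains the a posteriori probability that the change has occurred under a geometric change-point prior, skips the next observation while that probability sits below a lower threshold, takes it otherwise, and declares the change once an upper threshold is crossed. The Gaussian likelihoods and all numerical values (rho, mu, the two thresholds) are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

# Hedged sketch of a two-threshold, posterior-probability procedure with
# on-off observation control: observe only when the posterior probability
# of a change exceeds a lower threshold; stop and declare the change when
# it exceeds an upper threshold. Gaussian N(0,1) -> N(mu,1) observations
# and all parameter values are assumptions made for this illustration.

def two_threshold_detector(observations, rho=0.01, mu=1.0,
                           lower=0.2, upper=0.99):
    """Return (stopping_time, number_of_observations_used)."""
    p = 0.0          # posterior probability that the change has occurred
    used = 0         # observations actually taken (the observation cost)
    for t, x in enumerate(observations):
        # One-step prior update under a geometric change-point prior.
        p_tilde = p + (1.0 - p) * rho
        if p >= lower:
            # Take the observation and apply the Bayes update with the
            # likelihood ratio of N(mu, 1) against N(0, 1).
            used += 1
            lr = np.exp(mu * x - 0.5 * mu ** 2)
            p = p_tilde * lr / (p_tilde * lr + (1.0 - p_tilde))
        else:
            # Skip the observation; the posterior evolves by the prior only.
            p = p_tilde
        if p >= upper:
            return t, used   # declare the change at time t
    return None, used


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    T, change_point, mu = 1000, 500, 1.0
    x = rng.standard_normal(T)
    x[change_point:] += mu
    print(two_threshold_detector(x, rho=0.01, mu=mu, lower=0.2, upper=0.99))
```

    In this sketch the upper threshold plays the role of the false alarm constraint and the lower threshold controls how often observations are taken, mirroring the paper's finding that the two thresholds can, in practice, be set independently.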