
    Quickest Change Detection of a Markov Process Across a Sensor Array

    Recent attention in quickest change detection in the multi-sensor setting has been on the case where the densities of the observations change at the same instant at all the sensors due to the disruption. In this work, a more general scenario is considered where the change propagates across the sensors, and its propagation can be modeled as a Markov process. A centralized, Bayesian version of this problem, with a fusion center that has perfect information about the observations and a priori knowledge of the statistics of the change process, is considered. The problem of minimizing the average detection delay subject to false alarm constraints is formulated as a partially observable Markov decision process (POMDP). Insights into the structure of the optimal stopping rule are presented. In the limiting case of rare disruptions, we show that the structure of the optimal test reduces to thresholding the a posteriori probability of the hypothesis that no change has happened. We establish the asymptotic optimality (in the vanishing false alarm probability regime) of this threshold test under a certain condition on the Kullback-Leibler (K-L) divergence between the post- and the pre-change densities. In the special case of near-instantaneous change propagation across the sensors, this condition reduces to the mild condition that the K-L divergence be positive. Numerical studies show that this low complexity threshold test results in a substantial improvement in performance over naive tests such as a single-sensor test or a test that wrongly assumes that the change propagates instantaneously.
    Comment: 40 pages, 5 figures, Submitted to IEEE Trans. Inform. Theory
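
    As a concrete illustration of the kind of test described above, the sketch below thresholds the a posteriori probability of the no-change hypothesis in a single-sensor setting with a geometric change-point prior. This is a minimal sketch, not the paper's multi-sensor POMDP solution; the densities, prior parameter, and threshold are illustrative assumptions.

    import numpy as np

    def posterior_threshold_test(obs, rho, f0, f1, threshold):
        """Declare a change at the first time the posterior probability that
        no change has occurred yet drops below `threshold`.

        obs       : sequence of scalar observations (single-sensor illustration)
        rho       : geometric prior parameter of the change point
        f0, f1    : pre- and post-change densities (callables)
        threshold : stopping threshold on P(no change yet | observations)
        """
        p_change = 0.0  # posterior probability that the change has already happened
        for n, x in enumerate(obs, start=1):
            p = p_change + (1.0 - p_change) * rho       # prior update before observing
            num = p * f1(x)                             # Bayes update with the new sample
            p_change = num / (num + (1.0 - p) * f0(x))
            if 1.0 - p_change < threshold:              # "no change" has become unlikely
                return n                                # stopping time
        return None

    # Example: Gaussian mean shift from 0 to 1 with a true change at time 50 (synthetic data)
    rng = np.random.default_rng(0)
    obs = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(1.0, 1.0, 100)])
    f0 = lambda x: np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
    f1 = lambda x: np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)
    print(posterior_threshold_test(obs, rho=0.01, f0=f0, f1=f1, threshold=0.05))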

    Capacity Results for Block-Stationary Gaussian Fading Channels with a Peak Power Constraint

    We consider a peak-power-limited single-antenna block-stationary Gaussian fading channel where neither the transmitter nor the receiver knows the channel state information, but both know the channel statistics. This model subsumes most previously studied Gaussian fading models. We first compute the asymptotic channel capacity in the high SNR regime and show that the behavior of channel capacity depends critically on the channel model. For the special case where the fading process is symbol-by-symbol stationary, we also reveal a fundamental interplay between the codeword length, communication rate, and decoding error probability. Specifically, we show that the codeword length must scale with SNR in order to guarantee that the communication rate can grow logarithmically with SNR with bounded decoding error probability, and we find a necessary condition for the growth rate of the codeword length. We also derive an expression for the capacity per unit energy. Furthermore, we show that the capacity per unit energy is achievable using temporal ON-OFF signaling with optimally allocated ON symbols, where the optimal ON-symbol allocation scheme may depend on the peak power constraint.
    Comment: Submitted to the IEEE Transactions on Information Theory
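
    The temporal ON-OFF signaling mentioned at the end of the abstract can be pictured with the toy snippet below: a small fraction of symbols is transmitted at the peak power and the rest are silent. The duty cycle and the random placement of ON symbols here are illustrative assumptions, not the optimal allocation derived in the paper.

    import numpy as np

    def on_off_codeword(n, duty_cycle, peak_power, rng=None):
        """Toy temporal ON-OFF signal: a fraction `duty_cycle` of the n symbols
        is ON at the peak power, the rest are silent. The random placement is
        only illustrative; the paper's optimal allocation may differ."""
        rng = rng or np.random.default_rng()
        x = np.zeros(n)
        on_idx = rng.choice(n, size=int(duty_cycle * n), replace=False)
        x[on_idx] = np.sqrt(peak_power)   # amplitude obeying the peak power constraint
        return x

    x = on_off_codeword(n=1000, duty_cycle=0.05, peak_power=4.0)
    print(f"average power {np.mean(x ** 2):.3f}, peak power {np.max(x ** 2):.3f}")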

    Data-Efficient Quickest Outlying Sequence Detection in Sensor Networks

    A sensor network is considered where at each sensor a sequence of random variables is observed. At each time step, a processed version of the observations is transmitted from the sensors to a common node called the fusion center. At some unknown point in time the distribution of observations at an unknown subset of the sensor nodes changes. The objective is to detect the outlying sequences as quickly as possible, subject to constraints on the false alarm rate, the cost of observations taken at each sensor, and the cost of communication between the sensors and the fusion center. Minimax formulations are proposed for this problem, and algorithms are developed that are shown to be asymptotically optimal for these formulations as the false alarm rate goes to zero. It is also shown, via numerical studies, that the proposed algorithms perform significantly better than those based on fractional sampling, in which the classical algorithms from the literature are used and the constraint on the cost of observations is met by using the outcome of a sequence of biased coin tosses, independent of the observation process.
    Comment: Submitted to IEEE Transactions on Signal Processing, Nov 2014. arXiv admin note: text overlap with arXiv:1408.474
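
    For reference, the fractional-sampling baseline described in the last sentence can be sketched as follows: an independent biased coin toss decides whether each observation is taken, and a classical CUSUM statistic is updated only with the sampled observations. The densities, sampling probability, and threshold below are assumptions for a single-stream illustration.

    import numpy as np

    def fractional_sampling_cusum(obs, sample_prob, llr, threshold, rng=None):
        """Fractional-sampling baseline: at each time an independent biased coin toss
        (probability `sample_prob`) decides whether the observation is used; a standard
        CUSUM statistic is updated only with the sampled observations."""
        rng = rng or np.random.default_rng()
        W = 0.0       # CUSUM statistic
        used = 0      # number of observations actually taken
        for n, x in enumerate(obs, start=1):
            if rng.random() < sample_prob:     # coin toss, independent of the data
                used += 1
                W = max(0.0, W + llr(x))       # CUSUM recursion on sampled data only
            if W > threshold:
                return n, used                 # detection time, observation cost
        return None, used

    # Example: N(0,1) changing to N(1,1); the log-likelihood ratio is x - 1/2
    rng = np.random.default_rng(1)
    obs = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)])
    print(fractional_sampling_cusum(obs, sample_prob=0.3, llr=lambda x: x - 0.5, threshold=8.0))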

    Data-Efficient Quickest Change Detection with On-Off Observation Control

    In this paper we extend Shiryaev's quickest change detection formulation by also accounting for the cost of observations used before the change point. The observation cost is captured through the average number of observations used in the detection process before the change occurs. The objective is to select an on-off observation control policy, which decides whether or not to take a given observation, along with the stopping time at which the change is declared, so as to minimize the average detection delay, subject to constraints on both the probability of false alarm and the observation cost. By considering a Lagrangian relaxation of the constrained problem, and using dynamic programming arguments, we obtain an a posteriori probability based two-threshold algorithm that is a generalized version of the classical Shiryaev algorithm. We provide an asymptotic analysis of the two-threshold algorithm and show that the algorithm is asymptotically optimal, i.e., the performance of the two-threshold algorithm approaches that of the Shiryaev algorithm, for a fixed observation cost, as the probability of false alarm goes to zero. We also show, using simulations, that the two-threshold algorithm has good observation cost-delay trade-off curves, and provides a significant reduction in observation cost compared to the naive approach of fractional sampling, where samples are skipped randomly. Our analysis reveals that, for practical choices of constraints, the two thresholds can be set independently of each other: one based on the false alarm constraint alone and the other based on the observation cost constraint alone.
    Comment: A preliminary version of this paper was presented at the ITA Workshop, UCSD, 201
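
    A minimal single-stream sketch in the spirit of the two-threshold rule described above, with the densities, geometric prior, and the two thresholds taken as illustrative assumptions: observations are taken only while the posterior probability of change is at least a lower threshold B, and a change is declared once it crosses an upper threshold A.

    import numpy as np

    def two_threshold_detector(obs, rho, f0, f1, B, A):
        """Sketch of an on-off observation-control rule of the kind described above.
        p is the posterior probability that the change has already occurred.
        - the next observation is taken only when p >= B (lower threshold);
          otherwise it is skipped and p is updated using the prior alone;
        - a change is declared at the first time p > A (upper threshold)."""
        p = 0.0
        taken = 0
        for n, x in enumerate(obs, start=1):
            p = p + (1.0 - p) * rho                  # prior (geometric change point) update
            if p >= B:                               # observation control: observe only when suspicious
                taken += 1
                num = p * f1(x)
                p = num / (num + (1.0 - p) * f0(x))  # Bayes update with the taken observation
            if p > A:
                return n, taken                      # stopping time, observations used
        return None, taken

    In line with the decoupling noted at the end of the abstract, B would be chosen from the observation cost constraint and A from the false alarm constraint.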

    Incremental Stochastic Subgradient Algorithms for Convex Optimization

    In this paper we study the effect of stochastic errors on two constrained incremental subgradient algorithms. We view the incremental subgradient algorithms as decentralized network optimization algorithms applied to minimize a sum of functions, where each component function is known only to a particular agent of a distributed network. We first study the standard cyclic incremental subgradient algorithm, in which the agents form a ring structure and pass the iterate around the cycle. We consider the method with stochastic errors in the subgradient evaluations and provide sufficient conditions on the moments of the stochastic errors that guarantee almost sure convergence when a diminishing step-size is used. We also obtain almost sure bounds on the algorithm's performance when a constant step-size is used. We then consider the Markov randomized incremental subgradient method, which is a non-cyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time non-homogeneous Markov chain. Such a model is appropriate for mobile networks, where the network topology changes over time. We establish convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes, respectively.
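
    A minimal sketch of the cyclic incremental subgradient iteration with stochastic errors and a diminishing step-size; the component functions, noise model, and projection set below are illustrative assumptions rather than the paper's exact setting.

    import numpy as np

    def cyclic_incremental_subgradient(subgrads, x0, num_cycles, noise_std=0.1,
                                       radius=10.0, rng=None):
        """Cyclic incremental subgradient method with stochastic errors.
        subgrads : list of callables; subgrads[i](x) is a subgradient of the
                   i-th agent's component function at x.
        The iterate is passed around the ring once per cycle; each agent applies
        its own (noisy) subgradient step, and the result is projected back onto
        a Euclidean ball of the given radius (an assumed constraint set)."""
        rng = rng or np.random.default_rng()
        x = np.asarray(x0, dtype=float)
        for k in range(1, num_cycles + 1):
            step = 1.0 / k                          # diminishing step-size
            for g in subgrads:                      # one pass around the ring
                noisy_g = g(x) + noise_std * rng.standard_normal(x.shape)
                x = x - step * noisy_g              # agent's incremental step
                norm = np.linalg.norm(x)
                if norm > radius:                   # projection onto the constraint set
                    x = x * (radius / norm)
        return x

    # Example: minimize sum_i |x - a_i| over three agents with targets a_i (assumed data);
    # the minimizer is the median of the targets, here 2.0
    targets = [np.array([1.0]), np.array([2.0]), np.array([4.0])]
    subgrads = [lambda x, a=a: np.sign(x - a) for a in targets]
    print(cyclic_incremental_subgradient(subgrads, x0=np.zeros(1), num_cycles=500))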