82 research outputs found

    Adaptive sampling for linear state estimation

    When a sensor has continuous measurements but sends only occasional messages over a data network to a supervisor that estimates the state, the available packet rate fixes the achievable quality of state estimation. When such rate limits become stringent, the sensor's messaging policy should be designed anew. What are good causal messaging policies? What should message packets contain? What is the lowest possible distortion of a causal estimate at the supervisor? Is Delta sampling better than periodic sampling? We answer these questions for a Markov state process under an idealized model of the network and the assumption of perfect state measurements at the sensor. If the state is a scalar, or a vector of low dimension, then we can ignore sample quantization. If, in addition, we can ignore jitter in the transmission delays over the network, then our search for efficient messaging policies simplifies. Firstly, each message packet should contain the value of the state at that time; a bound on the number of data packets thus becomes a bound on the number of state samples. Secondly, the remaining choice in messaging is entirely about the times when samples are taken. For a scalar, linear diffusion process, we study the problem of choosing the causal sampling times that give the lowest aggregate squared-error distortion. We restrict attention to a finite horizon and impose a hard upper bound N on the number of allowed samples. We cast the design as a problem of choosing an optimal sequence of stopping times, and reduce it to a nested sequence of problems, each asking for a single optimal stopping time. Under an unproven but natural assumption about the least-squares estimate at the supervisor, each of these single stopping problems is of standard form. The optimal stopping times are the random times when the estimation error exceeds designed envelopes. For the case where the state is a Brownian motion, we give analytically: the shape of the optimal sampling envelopes, the shape of the envelopes under optimal Delta sampling, and their performances. Surprisingly, we find that Delta sampling performs badly. Hence, when the rate constraint is a hard limit on the number of samples over a finite horizon, we should not use Delta sampling.
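
    The contrast between Delta (threshold-triggered) sampling and periodic sampling under a hard sample budget can be illustrated with a small simulation. The sketch below is only a rough Monte Carlo illustration, not the paper's analysis: it assumes a standard Brownian motion state, a horizon T = 1, a budget of N = 5 samples, and a supervisor that holds the last received sample value as its estimate between samples; the time step, the Delta threshold, and the run count are arbitrary choices.

```python
import numpy as np

# Monte Carlo sketch: integrated squared estimation error of a standard
# Brownian motion under periodic sampling versus Delta (threshold) sampling,
# with a hard budget of N samples over the horizon [0, T].  The supervisor's
# estimate between samples is the last received state value.  All parameter
# values are illustrative assumptions.

rng = np.random.default_rng(0)

T, dt, N = 1.0, 1e-3, 5          # horizon, time step, sample budget (assumed)
steps = int(T / dt)

def simulate(policy, delta=0.3, runs=1000):
    """Average integrated squared error for a given sampling policy."""
    total = 0.0
    periodic_times = [(k + 1) * T / (N + 1) for k in range(N)]
    for _ in range(runs):
        x, xhat, used = 0.0, 0.0, 0
        times_iter = iter(periodic_times)
        t_next = next(times_iter)
        cost = 0.0
        for k in range(steps):
            t = k * dt
            x += np.sqrt(dt) * rng.standard_normal()   # Brownian increment
            err = x - xhat
            cost += err * err * dt                     # accumulate distortion
            if used < N:
                if policy == "periodic" and t >= t_next:
                    xhat, used = x, used + 1           # scheduled sample
                    t_next = next(times_iter, np.inf)
                elif policy == "delta" and abs(err) >= delta:
                    xhat, used = x, used + 1           # sample on threshold crossing
        total += cost
    return total / runs

print("periodic sampling distortion:", simulate("periodic"))
print("Delta    sampling distortion:", simulate("delta"))
```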

    Numerical Comparison of Cusum and Shiryaev-Roberts Procedures for Detecting Changes in Distributions

    The CUSUM procedure is known to be optimal for detecting a change in distribution under a minimax scenario, whereas the Shiryaev-Roberts procedure is optimal for detecting a change that occurs at a distant time horizon. As a simpler alternative to the conventional Monte Carlo approach, we propose a numerical method for the systematic comparison of the two detection schemes in both settings, i.e., minimax and for detecting changes that occur in the distant future. Our goal is accomplished by deriving a set of exact integral equations for the performance metrics, which are then solved numerically. We present detailed numerical results for the problem of detecting a change in the mean of a Gaussian sequence, which show that the difference between the two procedures is significant only when detecting small changes.
    Comment: 21 pages, 8 figures; to appear in Communications in Statistics - Theory and Methods.
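
    For readers who want to reproduce such a comparison, the sketch below uses the plain Monte Carlo approach that the paper replaces with exact integral equations: it simulates the CUSUM and Shiryaev-Roberts recursions for a shift in the mean of a standard Gaussian sequence. The shift size, the thresholds, and the number of runs are illustrative assumptions, and for a fair comparison the two thresholds would have to be calibrated to a common ARL to false alarm.

```python
import numpy as np

# Monte Carlo sketch of the CUSUM and Shiryaev-Roberts (SR) stopping rules for
# detecting a shift from N(0,1) to N(theta,1).  The paper evaluates these
# procedures through exact integral equations; this is only a simulation.
# Thresholds and the shift size are illustrative assumptions.

rng = np.random.default_rng(1)
theta = 0.5                               # assumed post-change mean

def run_length(threshold, rule, change_point=np.inf, max_n=10**6):
    """Number of observations until the chosen statistic crosses its threshold."""
    w, r = 0.0, 0.0                       # CUSUM and SR statistics
    n = 0
    while n < max_n:
        n += 1
        mean = theta if n > change_point else 0.0
        x = rng.normal(mean, 1.0)
        llr = theta * x - 0.5 * theta**2  # log-likelihood ratio of one observation
        if rule == "cusum":
            w = max(0.0, w + llr)         # CUSUM recursion
            if w >= threshold:
                return n
        else:
            r = (1.0 + r) * np.exp(llr)   # Shiryaev-Roberts recursion
            if r >= threshold:
                return n
    return max_n

runs = 200
# ARL to false alarm: no change ever occurs.
arl_cusum = np.mean([run_length(4.0, "cusum") for _ in range(runs)])
arl_sr = np.mean([run_length(np.exp(4.0), "sr") for _ in range(runs)])
# Detection delay when the change is in effect from the very first observation.
dly_cusum = np.mean([run_length(4.0, "cusum", change_point=0) for _ in range(runs)])
dly_sr = np.mean([run_length(np.exp(4.0), "sr", change_point=0) for _ in range(runs)])
print(f"CUSUM: ARL to false alarm ~ {arl_cusum:.0f}, delay ~ {dly_cusum:.1f}")
print(f"SR   : ARL to false alarm ~ {arl_sr:.0f}, delay ~ {dly_sr:.1f}")
```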

    The asymptotic local approach to change detection and model validation


    An Exact Formula for the Average Run Length to False Alarm of the Generalized Shiryaev-Roberts Procedure for Change-Point Detection under Exponential Observations

    We derive analytically an exact closed-form formula for the standard minimax Average Run Length (ARL) to false alarm delivered by the Generalized Shiryaev-Roberts (GSR) change-point detection procedure devised to detect a shift in the baseline mean of a sequence of independent exponentially distributed observations. Specifically, the formula is found through direct solution of the respective integral (renewal) equation, and is a general result in that the GSR procedure's headstart is not restricted to a bounded range, nor is there a "ceiling" value for the detection threshold. Apart from its theoretical significance (in change-point detection, exact closed-form performance formulae are typically either difficult or impossible to obtain, especially for the GSR procedure), the formula is also useful to a practitioner: in cases of practical interest it is linear in both the detection threshold and the headstart, so the ARL to false alarm of the GSR procedure can be computed easily.
    Comment: 9 pages; accepted for publication in Proceedings of the 12th German-Polish Workshop on Stochastic Models, Statistics and Their Applications.
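
    The GSR statistic itself is the simple recursion R_n = (1 + R_{n-1}) * LR(X_n), started from a headstart R_0 = r and stopped once R_n reaches a threshold A. The sketch below estimates its ARL to false alarm by simulation for exponential observations (pre-change mean 1, assumed post-change mean theta), only to make the near-linear dependence on A and r visible; it is not a substitute for the exact closed-form formula, and the values of theta, A, and r are assumptions.

```python
import numpy as np

# Monte Carlo sketch of the Generalized Shiryaev-Roberts (GSR) procedure for
# exponential observations: pre-change mean 1, post-change mean theta.
# R_n = (1 + R_{n-1}) * LR(X_n), with headstart R_0 = r, stopped at level A.
# theta, A, and r below are illustrative assumptions.

rng = np.random.default_rng(2)
theta = 2.0                                  # assumed post-change mean

def likelihood_ratio(x):
    # Density of Exp(mean=theta) over density of Exp(mean=1).
    return (1.0 / theta) * np.exp(x * (1.0 - 1.0 / theta))

def gsr_false_alarm_time(A, r, max_n=10**6):
    """Observations (all pre-change) until R_n first reaches A."""
    R = r
    for n in range(1, max_n + 1):
        x = rng.exponential(1.0)             # pre-change observation
        R = (1.0 + R) * likelihood_ratio(x)
        if R >= A:
            return n
    return max_n

runs = 400
for A in (50.0, 100.0):
    for r in (0.0, 10.0):
        arl = np.mean([gsr_false_alarm_time(A, r) for _ in range(runs)])
        print(f"A = {A:5.0f}  r = {r:4.0f}  estimated ARL to false alarm ~ {arl:.0f}")
```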

    Optimality of the CUSUM procedure in continuous time

    The optimality of CUSUM under a Lorden-type criterion is considered. We demonstrate the optimality of the CUSUM test for Itô processes, in a sense similar to Lorden's, but with a criterion that replaces expected delays by the corresponding Kullback-Leibler divergence.
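
    As a concrete special case, the sketch below runs a time-discretized version of the continuous-time CUSUM statistic y_t = u_t - min_{s<=t} u_s on a Brownian motion whose drift changes from 0 to mu, the simplest Itô process covered by such results; the expected delay is governed by the Kullback-Leibler rate mu^2/(2 sigma^2). The drift, noise level, threshold, and change time are assumptions made only for illustration.

```python
import numpy as np

# Discretized sketch of the continuous-time CUSUM statistic for a Brownian
# motion whose drift changes from 0 to mu (a simple Ito process).  The
# log-likelihood ratio process is du_t = (mu/sigma^2) dX_t - mu^2/(2 sigma^2) dt,
# and the CUSUM statistic is y_t = u_t - min_{s<=t} u_s, stopped at level h.
# Drift, noise level, threshold, and change time are illustrative assumptions.

rng = np.random.default_rng(3)
mu, sigma, h = 1.0, 1.0, 5.0       # assumed post-change drift, noise level, threshold
dt, t_change = 1e-3, 2.0           # time step and (unknown to the detector) change time

t, u, u_min = 0.0, 0.0, 0.0
while True:
    drift = mu if t >= t_change else 0.0
    dX = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    u += (mu / sigma**2) * dX - (mu**2 / (2 * sigma**2)) * dt
    u_min = min(u_min, u)
    y = u - u_min                  # continuous-time CUSUM statistic (discretized)
    t += dt
    if y >= h:
        print(f"alarm at t = {t:.3f} (change occurred at t = {t_change})")
        break
```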

    Decentralized CUSUM change detection

    We consider the problem of decentralized change detection using the CUSUM test. Multiple sensors acquire independent signals and send quantized versions to a fusion center, which uses this information to detect a change that occurs simultaneously in all sensors. By introducing a recurrence relation that defines the optimum performance of the CUSUM test for a given quantization, we further optimize this performance measure with respect to the quantization scheme. We compare the resulting optimum test with a simple, asynchronous one-shot strategy, in which each sensor performs a local CUSUM test and communicates with the fusion center only once, to signal its detection.
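
    A toy version of the two schemes being compared might look like the sketch below: each of two sensors with independent Gaussian observations either sends a 1-bit quantization of every observation to a fusion center running a single CUSUM on the bits, or runs a local CUSUM on its raw data and signals the center only once. The shift size, quantizer threshold, CUSUM thresholds, and the "alarm once every sensor has signaled" fusion rule are illustrative assumptions, not the paper's optimized choices.

```python
import numpy as np
from math import erfc, log, sqrt

# Toy sketch of decentralized change detection with two sensors observing
# independent N(0,1) sequences whose mean shifts to theta at the same time.
# (a) Quantized scheme: each sensor sends the bit 1{x > c}; the fusion center
#     runs one CUSUM on the joint log-likelihood ratio of the bits.
# (b) One-shot scheme: each sensor runs a local CUSUM on its raw data and
#     signals the fusion center once; the center alarms when all have signaled.
# theta, c, the thresholds, and the fusion rule are illustrative assumptions.

rng = np.random.default_rng(4)
theta, c, K = 1.0, 0.5, 2            # assumed shift, quantizer threshold, sensor count

def gauss_sf(x, mean):
    """P(N(mean, 1) > x)."""
    return 0.5 * erfc((x - mean) / sqrt(2.0))

p0, p1 = gauss_sf(c, 0.0), gauss_sf(c, theta)            # bit probabilities
llr_bit = {1: log(p1 / p0), 0: log((1 - p1) / (1 - p0))}  # bit log-likelihood ratios

def detect(change_point, b_central=8.0, b_local=8.0, max_n=10**5):
    w_central, w_local = 0.0, np.zeros(K)
    signaled = np.zeros(K, dtype=bool)
    t_central = t_oneshot = None
    for n in range(1, max_n + 1):
        mean = theta if n > change_point else 0.0
        x = rng.normal(mean, 1.0, size=K)
        # (a) fusion-center CUSUM over the quantized bits of all sensors
        w_central = max(0.0, w_central + sum(llr_bit[int(b)] for b in x > c))
        if t_central is None and w_central >= b_central:
            t_central = n
        # (b) local CUSUMs with one-shot signaling to the fusion center
        w_local = np.maximum(0.0, w_local + theta * x - 0.5 * theta**2)
        signaled |= w_local >= b_local
        if t_oneshot is None and signaled.all():
            t_oneshot = n
        if t_central is not None and t_oneshot is not None:
            return t_central, t_oneshot
    return t_central, t_oneshot

delays = [detect(change_point=0) for _ in range(200)]
print("mean detection times (quantized CUSUM, one-shot):", np.mean(delays, axis=0))
```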