4 research outputs found

    Adaptive compressed sensing for support recovery of structured sparse sets

    No full text
    This paper investigates the problem of recovering the support of structured signals via adaptive compressive sensing. We examine several classes of structured support sets, and characterize the fundamental limits of accurately recovering such sets through compressive measurements, while simultaneously providing adaptive support recovery protocols that perform near optimally for these classes. We show that by adaptively designing the sensing matrix, we can attain significant performance gains over non-adaptive protocols. These gains arise from the fact that adaptive sensing can: 1) better mitigate the effects of noise and 2) better capitalize on the structure of the support sets.

    Adaptive compressed sensing for estimation of structured sparse sets

    No full text
    This paper investigates the problem of estimating the support of structured signals via adaptive compressive sensing. We examine several classes of structured support sets, and characterize the fundamental limits of accurately estimating such sets through compressive measurements, while simultaneously providing adaptive support recovery protocols that perform near optimally for these classes. We show that by adaptively designing the sensing matrix, we can attain significant performance gains over non-adaptive protocols. These gains arise from the fact that adaptive sensing can: (i) better mitigate the effects of noise, and (ii) better capitalize on the structure of the support sets.
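
    Both entries above study the same adaptive mechanism: measurements are taken in rounds, and earlier observations determine which coordinates are measured next. The sketch below, in the spirit of sequential thresholding (sometimes called distilled sensing), illustrates why adaptivity helps with noise; it is a hypothetical baseline, not the protocol analyzed in these papers, and all parameter values are illustrative.

    import numpy as np

    # Minimal sketch of adaptive support recovery via sequential thresholding.
    # Illustrative only: not the protocol from the papers above, and the
    # parameter values (n, s, mu, rounds) are hypothetical choices.

    rng = np.random.default_rng(0)

    n, s, mu, rounds = 1024, 8, 3.0, 4     # ambient dim, sparsity, amplitude, passes
    x = np.zeros(n)
    true_support = rng.choice(n, size=s, replace=False)
    x[true_support] = mu

    candidates = np.arange(n)              # initially every coordinate is a candidate
    for _ in range(rounds):
        # Adaptivity: only the surviving candidates are measured this round.
        y = x[candidates] + rng.standard_normal(candidates.size)
        # A null coordinate survives with probability 1/2, while a signal
        # coordinate survives with probability close to 1, so the rounds
        # rapidly weed out nulls and concentrate measurements on the support.
        candidates = candidates[y > 0]

    print("true support:", sorted(true_support))
    print("estimate:    ", sorted(candidates))

    Because each pass halves the nulls in expectation, the measurement budget in later rounds is concentrated on a small candidate set, which is the source of the noise-mitigation gains the abstracts describe.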

    Are there needles in a moving haystack? Adaptive sensing for detection of dynamically evolving signals

    No full text
    In this paper, we investigate the problem of detecting dynamically evolving signals. We model the signal as an n-dimensional vector that is either zero or has s non-zero components. At each time step t ∈ N the non-zero components change their location independently with probability p. The statistical problem is to decide whether the signal is the zero vector or in fact has non-zero components. This decision is based on m noisy observations of individual signal components collected at times t = 1, …, m. We consider two different sensing paradigms, namely adaptive and non-adaptive sensing. For non-adaptive sensing, the choice of components to measure has to be made before the data collection process starts, while for adaptive sensing one can adjust the sensing process based on observations collected earlier. We characterize the difficulty of this detection problem in both sensing paradigms in terms of the aforementioned parameters, with special interest in the speed of change of the active components. In addition, we provide an adaptive sensing algorithm for this problem and contrast its performance with that of non-adaptive detection algorithms.
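
    To make the model concrete, the following sketch simulates the evolving support (each active component relocates independently with probability p at every step) and runs a naive non-adaptive test that observes one uniformly chosen component per time step and thresholds the maximum. The test and all parameter values are hypothetical illustrations, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(1)

    def evolve(active, n, p, rng):
        """Each active component moves to a fresh uniform location w.p. p."""
        moved = set(active)
        for i in list(moved):
            if rng.random() < p:
                moved.discard(i)
                moved.add(int(rng.integers(n)))  # collisions possible; ignored here
        return moved

    def max_stat(n, s, p, m, mu, rng):
        """Largest of m single-component observations; locations fixed in advance."""
        active = set(int(i) for i in rng.choice(n, size=s, replace=False)) if s else set()
        best = -np.inf
        for _ in range(m):
            i = int(rng.integers(n))             # non-adaptive: chosen blindly
            best = max(best, (mu if i in active else 0.0) + rng.standard_normal())
            active = evolve(active, n, p, rng)
        return best

    n, s, p, m, mu = 500, 10, 0.1, 2000, 4.0     # hypothetical parameter values
    null_stats = [max_stat(n, 0, p, m, mu, rng) for _ in range(200)]
    threshold = np.quantile(null_stats, 0.95)    # calibrate a 5% false-alarm rate
    alt_stats = [max_stat(n, s, p, m, mu, rng) for _ in range(200)]
    print("detection rate:", np.mean(np.array(alt_stats) > threshold))

    Raising p in this simulation makes the active set harder to track between observations, which is the "speed of change" effect whose impact on detectability the paper characterizes.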

    Distribution-free detection of structured anomalies: permutation and rank-based scans

    No full text
    The scan statistic is by far the most popular method for anomaly detection, widely used in syndromic surveillance, signal and image processing, and target detection based on sensor networks, among other applications. The use of the scan statistic in such settings yields a hypothesis testing procedure, where the null hypothesis corresponds to the absence of anomalous behavior. If the null distribution is known, then calibration of a scan-based test is relatively easy, as it can be done by Monte Carlo simulation. When the null distribution is unknown, calibration is less straightforward. We investigate two procedures: the first is a calibration by permutation, and the other is a rank-based scan test, which is distribution-free and less sensitive to outliers. Furthermore, the rank scan test requires only a one-time calibration for a given data size, making it computationally much more appealing. In both cases, we quantify the performance loss with respect to an oracle scan test that knows the null distribution. We show that using either of these calibration procedures results in only a very small loss of power in the context of a natural exponential family. This includes the classical normal location model, popular in signal processing, and the Poisson model, popular in syndromic surveillance. We perform numerical experiments on simulated data that further support our theory, and on a real dataset from genomics. Supplementary materials for this article are available online.
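
    The two calibration ideas can be illustrated in a few lines: compute a scan statistic (here, a maximum standardized window sum over a few window lengths, an illustrative choice) and calibrate it either by permutation or by applying the same scan to the ranks of the data. This is a hypothetical sketch under those assumptions, not the authors' code.

    import numpy as np

    rng = np.random.default_rng(2)

    def scan(x, lengths=(5, 10, 20)):
        """Max standardized sum over contiguous windows of the given lengths."""
        c = np.concatenate(([0.0], np.cumsum(x)))
        return max((c[l:] - c[:-l]).max() / np.sqrt(l) for l in lengths)

    def permutation_pvalue(x, n_perm=999):
        """Calibration by permutation: recompute the scan on shuffled data."""
        observed = scan(x)
        exceed = sum(scan(rng.permutation(x)) >= observed for _ in range(n_perm))
        return (1 + exceed) / (1 + n_perm)

    x = rng.standard_normal(300)
    x[100:110] += 1.5                  # inject a short elevated stretch
    print("permutation p-value:", permutation_pvalue(x))

    # Rank-based variant: the same scan applied to centered ranks. Its null
    # distribution depends only on the sample size, so it can be calibrated
    # once per data size, as the abstract notes.
    ranks = np.argsort(np.argsort(x)) + 1.0
    print("rank-scan statistic:", scan(ranks - ranks.mean()))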