
    Super-resolution, Extremal Functions and the Condition Number of Vandermonde Matrices

    Super-resolution is a fundamental task in imaging, where the goal is to extract fine-grained structure from coarse-grained measurements. Here we are interested in a popular mathematical abstraction of this problem that has been widely studied in the statistics, signal processing and machine learning communities. We exactly resolve the threshold at which noisy super-resolution is possible. In particular, we establish a sharp phase transition for the relationship between the cutoff frequency (m) and the separation (Δ). If m > 1/Δ + 1, our estimator converges to the true values at an inverse polynomial rate in terms of the magnitude of the noise. And when m < (1 − ε)/Δ, no estimator can distinguish between a particular pair of Δ-separated signals even if the magnitude of the noise is exponentially small. Our results involve making novel connections between extremal functions and the spectral properties of Vandermonde matrices. We establish a sharp phase transition for their condition number which in turn allows us to give the first noise tolerance bounds for the matrix pencil method. Moreover, we show that our methods can be interpreted as giving preconditioners for Vandermonde matrices, and we use this observation to design faster algorithms for super-resolution. We believe that these ideas may have other applications in designing faster algorithms for other basic tasks in signal processing. Comment: 19 pages
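
    The phase transition this abstract describes can be checked numerically. The sketch below is not the paper's code; the separation Δ = 0.05 and the ten unit-circle nodes are illustrative assumptions. It builds the Vandermonde matrix with entries exp(2πi·j·f) for Δ-separated frequencies f and prints its condition number for cutoff frequencies m below and above 1/Δ + 1.

    # Minimal sketch, assuming hand-picked Delta-separated frequencies on the unit circle;
    # it only illustrates the conditioning behaviour the abstract describes.
    import numpy as np

    def vandermonde_condition(freqs, m):
        # V[j, i] = exp(2*pi*1j * j * freqs[i]) for j = 0, ..., m-1
        j = np.arange(m).reshape(-1, 1)
        V = np.exp(2j * np.pi * j * np.asarray(freqs).reshape(1, -1))
        return np.linalg.cond(V)

    delta = 0.05
    freqs = np.arange(10) * delta          # ten Delta-separated frequencies in [0, 1)
    for m in (10, 25, 40):                 # predicted threshold: 1/delta + 1 = 21
        print(m, vandermonde_condition(freqs, m))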

    Limits on Support Recovery with Probabilistic Models: An Information-Theoretic Framework

    The support recovery problem consists of determining a sparse subset of a set of variables that is relevant in generating a set of observations, and arises in a diverse range of settings such as compressive sensing, subset selection in regression, and group testing. In this paper, we take a unified approach to support recovery problems, considering general probabilistic models relating a sparse data vector to an observation vector. We study the information-theoretic limits of both exact and partial support recovery, taking a novel approach motivated by thresholding techniques in channel coding. We provide general achievability and converse bounds characterizing the trade-off between the error probability and the number of measurements, and we specialize these to the linear, 1-bit, and group testing models. In several cases, our bounds not only provide matching scaling laws in the necessary and sufficient number of measurements, but also sharp thresholds with matching constant factors. Our approach has several advantages over previous approaches: for the achievability part, we obtain sharp thresholds under broader scalings of the sparsity level and other parameters (e.g., signal-to-noise ratio) compared to several previous works, and for the converse part, we not only provide conditions under which the error probability fails to vanish, but also conditions under which it tends to one. Comment: Accepted to IEEE Transactions on Information Theory; presented in part at ISIT 2015 and SODA 201
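
    To make the trade-off between error probability and number of measurements concrete, the toy simulation below estimates the exact-support-recovery rate in the linear model as the number of measurements n grows. This is an assumed setup, not the paper's decoder: the correlation-thresholding rule, dimensions and signal amplitude are all illustrative choices.

    # Minimal sketch: exact support recovery in y = X b + noise, keeping the
    # k largest |X^T y| correlations; it only illustrates that the failure
    # rate falls as the number of measurements n increases.
    import numpy as np

    rng = np.random.default_rng(0)
    p, k, amp = 1000, 5, 1.0                     # ambient dimension, sparsity, signal amplitude
    support = rng.choice(p, size=k, replace=False)

    def recovery_rate(n, trials=50):
        hits = 0
        for _ in range(trials):
            X = rng.standard_normal((n, p))
            b = np.zeros(p)
            b[support] = amp
            y = X @ b + rng.standard_normal(n)
            est = np.argsort(np.abs(X.T @ y))[-k:]   # keep the k largest correlations
            hits += set(est) == set(support)
        return hits / trials

    for n in (50, 150, 400):                     # recovery improves as n grows
        print(n, recovery_rate(n))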

    Secure State Estimation: Optimal Guarantees against Sensor Attacks in the Presence of Noise

    Motivated by the need to secure cyber-physical systems against attacks, we consider the problem of estimating the state of a noisy linear dynamical system when a subset of sensors is arbitrarily corrupted by an adversary. We propose a secure state estimation algorithm and derive (optimal) bounds on the achievable state estimation error. In addition, as a result of independent interest, we give a coding-theoretic interpretation for prior work on secure state estimation against sensor attacks in a noiseless dynamical system. Comment: A shorter version of this work will appear in the proceedings of ISIT 201
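
    As a deliberately simplified illustration of the estimation problem, the sketch below treats a single static measurement y = C x + attack + noise with at most s corrupted sensors and recovers x by searching over sensor subsets, keeping the least-squares fit with the smallest residual on the retained sensors. The dimensions, attack magnitude and brute-force subset search are assumptions for illustration, not the paper's algorithm.

    # Minimal sketch of resilient estimation against a sparse sensor attack (assumed static setup).
    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    n, p, s = 3, 8, 2                            # state dimension, sensors, max corrupted sensors
    C = rng.standard_normal((p, n))
    x_true = rng.standard_normal(n)
    attack = np.zeros(p)
    attack[rng.choice(p, size=s, replace=False)] = 10.0   # adversarial corruption
    y = C @ x_true + attack + 0.01 * rng.standard_normal(p)

    best = None
    for keep in itertools.combinations(range(p), p - s):   # candidate trusted-sensor sets
        idx = list(keep)
        x_hat = np.linalg.lstsq(C[idx], y[idx], rcond=None)[0]
        res = np.sum((C[idx] @ x_hat - y[idx]) ** 2)
        if best is None or res < best[0]:
            best = (res, x_hat)

    print("estimation error:", np.linalg.norm(best[1] - x_true))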

    The capacity of non-identical adaptive group testing

    We consider the group testing problem in the case where items are defective independently but with non-constant probability. We introduce and analyse an algorithm that solves this problem by grouping items together appropriately. We give conditions under which the algorithm performs essentially optimally in the sense of information-theoretic capacity. We use concentration of measure results to bound the probability that this algorithm requires many more tests than the expected number. This has applications to the allocation of spectrum to cognitive radios, in the case where a database gives prior information that a particular band will be occupied. Comment: To be presented at Allerton 201
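
    The sketch below shows one assumed way to "group items together appropriately": a greedy rule closes a pool once its probability of containing no defective drops to about 1/2, and positive pools are then resolved by binary splitting. The probability range, group-closing rule and splitting strategy are illustrative choices, not the algorithm analysed in the paper.

    # Minimal sketch of adaptive group testing with non-identical defect probabilities.
    import numpy as np

    rng = np.random.default_rng(2)
    q = rng.uniform(0.001, 0.05, size=500)       # non-identical defect probabilities
    defective = rng.random(q.size) < q           # hidden ground truth

    def test(items):
        # One pooled test: positive iff the pool contains at least one defective item.
        return defective[list(items)].any()

    def binary_split(items, counter):
        # Test the pool; if positive, split in half and recurse down to singletons.
        counter[0] += 1
        if not test(items):
            return []
        if len(items) == 1:
            return list(items)
        mid = len(items) // 2
        return binary_split(items[:mid], counter) + binary_split(items[mid:], counter)

    order = np.argsort(q)[::-1]                  # group similar-probability items together
    groups, current, log_clear = [], [], 0.0
    for i in order:
        current.append(int(i))
        log_clear += np.log1p(-q[i])
        if log_clear < np.log(0.5):              # pool is defective-free w.p. about 1/2
            groups.append(current)
            current, log_clear = [], 0.0
    if current:
        groups.append(current)

    tests = [0]
    found = [item for g in groups for item in binary_split(g, tests)]
    print("tests used:", tests[0], "defectives found:", len(found), "of", int(defective.sum()))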