1,254 research outputs found

    An extremal problem for integer sparse recovery

    Motivated by the problem of integer sparse recovery, we study the following question. Let $A$ be an $m \times d$ integer matrix whose entries are at most $k$ in absolute value. How large can $d = d(m,k)$ be if all $m \times m$ submatrices of $A$ are non-degenerate? We obtain new upper and lower bounds on $d$ and answer a special case of the problem of Brass, Moser and Pach on covering the $m$-dimensional $k \times \cdots \times k$ grid by linear subspaces.
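    The central definition can be checked numerically. The following sketch (an illustration, not taken from the paper) tests whether a given $m \times d$ integer matrix has all of its $m \times m$ column-submatrices non-degenerate; the example matrix is a hypothetical instance for $m = 2$, $k = 1$.

    ```python
    # Illustrative check of the property behind d(m, k): an m x d integer
    # matrix with entries bounded by k "achieves" d columns if every
    # m x m column-submatrix has nonzero determinant.
    from itertools import combinations
    import numpy as np

    def all_submatrices_nondegenerate(A):
        """Return True iff every m x m column-submatrix of the m x d
        integer matrix A is non-degenerate (nonzero determinant)."""
        m, d = A.shape
        for cols in combinations(range(d), m):
            # round() is safe: determinants of small integer matrices are integers
            if round(np.linalg.det(A[:, list(cols)])) == 0:
                return False
        return True

    # Hypothetical example: m = 2, k = 1, a 2 x 4 matrix with entries in {-1, 0, 1}
    A = np.array([[1, 0, 1,  1],
                  [0, 1, 1, -1]])
    print(all_submatrices_nondegenerate(A))  # True: all six 2x2 minors are nonzero
    ```

    Repeating a column (or any degenerate pair) makes the check fail, which is what bounds how large $d$ can grow for fixed $m$ and $k$.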

    Super-resolution, Extremal Functions and the Condition Number of Vandermonde Matrices

    Super-resolution is a fundamental task in imaging, where the goal is to extract fine-grained structure from coarse-grained measurements. Here we are interested in a popular mathematical abstraction of this problem that has been widely studied in the statistics, signal processing and machine learning communities. We exactly resolve the threshold at which noisy super-resolution is possible. In particular, we establish a sharp phase transition for the relationship between the cutoff frequency ($m$) and the separation ($\Delta$). If $m > 1/\Delta + 1$, our estimator converges to the true values at an inverse polynomial rate in terms of the magnitude of the noise. And when $m < (1-\epsilon)/\Delta$, no estimator can distinguish between a particular pair of $\Delta$-separated signals even if the magnitude of the noise is exponentially small. Our results involve making novel connections between {\em extremal functions} and the spectral properties of Vandermonde matrices. We establish a sharp phase transition for their condition number, which in turn allows us to give the first noise tolerance bounds for the matrix pencil method. Moreover, we show that our methods can be interpreted as giving preconditioners for Vandermonde matrices, and we use this observation to design faster algorithms for super-resolution. We believe that these ideas may have other applications in designing faster algorithms for other basic tasks in signal processing.

    Comment: 19 pages
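    The conditioning phenomenon the abstract describes is easy to observe numerically. The sketch below (an illustration only, not the paper's estimator or its sharp constants) builds a Vandermonde matrix with nodes on the unit circle and compares its condition number on either side of the $m \approx 1/\Delta$ threshold; the node set and cutoffs are hypothetical choices.

    ```python
    # Numerical illustration: the condition number of an m x n Vandermonde
    # matrix with unit-circle nodes depends sharply on whether the cutoff
    # frequency m exceeds roughly 1/Delta, where Delta is the node separation.
    import numpy as np

    def vandermonde_cond(ts, m):
        """Condition number of the m x len(ts) Vandermonde matrix whose
        nodes are exp(2*pi*i*t) for t in ts, with rows indexed by powers 0..m-1."""
        nodes = np.exp(2j * np.pi * np.asarray(ts))
        V = np.vander(nodes, m, increasing=True).T
        return np.linalg.cond(V)

    ts = [0.0, 0.1, 0.2, 0.35, 0.6]   # wrap-around separation Delta = 0.1
    well = vandermonde_cond(ts, 16)   # m = 16 > 1/Delta + 1 = 11: well-conditioned
    ill = vandermonde_cond(ts, 6)     # m = 6 < 1/Delta: conditioning degrades
    print(well, ill)
    ```

    The same dichotomy is what controls the noise tolerance of the matrix pencil method: a well-conditioned Vandermonde factor means small measurement noise perturbs the recovered frequencies only slightly.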

    Exact Reconstruction using Beurling Minimal Extrapolation

    We show that measures with finite support on the real line are the unique solution to an algorithm, named generalized minimal extrapolation, involving only a finite number of generalized moments (which encompass the standard moments, the Laplace transform, the Stieltjes transformation, etc.). Generalized minimal extrapolation shares related geometric properties with the basis pursuit of Chen, Donoho and Saunders [CDS98]. Indeed, we also extend some standard results of compressed sensing (the dual polynomial, the nullspace property) to the signed measure framework. We express exact reconstruction in terms of a simple interpolation problem. We prove that every nonnegative measure supported on a set containing s points can be exactly recovered from only 2s + 1 generalized moments. This result leads to a new construction of deterministic sensing matrices for compressed sensing.

    Comment: 27 pages, 3 figures; version 2: minor changes and new title
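    The "2s + 1 moments suffice" information budget can be illustrated with the classical Prony method (this is a standard sketch, not the paper's generalized-minimal-extrapolation program): from the power moments $m_k = \sum_j a_j x_j^k$, $k = 0, \ldots, 2s$, one recovers both the support and the weights of a discrete measure. The example measure below is hypothetical.

    ```python
    # Classical Prony recovery of a discrete measure sum_j a_j * delta_{x_j}
    # from its 2s + 1 power moments m_k = sum_j a_j * x_j**k, k = 0..2s.
    import numpy as np

    def prony_recover(moments, s):
        """Return (support, weights) of an s-atom measure from moments 0..2s."""
        m = np.asarray(moments, dtype=float)
        # Hankel system for the monic polynomial annihilating the support:
        # m[i+s] + sum_l c_l * m[i+l] = 0 for i = 0..s-1
        H = np.array([m[i:i + s] for i in range(s)])
        coeffs = np.linalg.solve(H, -m[s:2 * s])
        poly = np.concatenate(([1.0], coeffs[::-1]))   # highest degree first
        support = np.real(np.roots(poly))
        # Weights from the first s moments via a Vandermonde solve
        V = np.vander(support, s, increasing=True).T
        weights = np.linalg.solve(V, m[:s])
        return support, weights

    # Hypothetical measure 2*delta_{0.5} + 3*delta_{-0.25}: s = 2, so 5 moments
    xs, ws = np.array([0.5, -0.25]), np.array([2.0, 3.0])
    moments = [float(ws @ xs**k) for k in range(5)]
    support, weights = prony_recover(moments, 2)
    ```

    The paper's contribution is to reach the same count through a convex program over measures (with the dual polynomial certifying uniqueness), which, unlike Prony-type root finding, extends to generalized moments and is robust in the nonnegative case.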

    Highly robust error correction by convex programming

    This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x in R^n (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when, in addition, all the entries of the codeword are contaminated by smaller errors (e.g. quantization errors). We show that if one encodes the information as Ax, where A is a suitable m by n coding matrix (m >= n), there are two decoding schemes that allow the recovery of the block of n pieces of information x with nearly the same accuracy as if no gross errors occurred upon transmission (or, equivalently, as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.

    Comment: 23 pages, 2 figures
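    The linear-programming decoder mentioned above can be sketched in a few lines (a minimal illustration, not the paper's exact formulation): given y = Ax + e with a few gross errors e, minimize ||y - Ax||_1 over x, reformulated as an LP with slack variables. The Gaussian coding matrix and error pattern below are hypothetical choices.

    ```python
    # l1 decoding sketch: recover x from y = A x + e (sparse gross errors e)
    # by solving min_x ||y - A x||_1, cast as a linear program.
    import numpy as np
    from scipy.optimize import linprog

    def l1_decode(A, y):
        """Solve min_x ||y - A x||_1 with slack variables t >= |y - A x|:
        minimize sum(t) subject to A x - t <= y and -A x - t <= -y."""
        m, n = A.shape
        c = np.concatenate([np.zeros(n), np.ones(m)])
        A_ub = np.block([[A, -np.eye(m)],
                         [-A, -np.eye(m)]])
        b_ub = np.concatenate([y, -y])
        bounds = [(None, None)] * n + [(0, None)] * m
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:n]

    rng = np.random.default_rng(0)
    m, n = 40, 10
    A = rng.standard_normal((m, n))    # hypothetical Gaussian coding matrix
    x = rng.standard_normal(n)
    e = np.zeros(m)
    e[rng.choice(m, 5, replace=False)] = 10.0   # 5 arbitrary gross errors
    x_hat = l1_decode(A, A @ x + e)
    ```

    Because the l1 objective is insensitive to a small fraction of large residuals, the minimizer coincides with the true x despite the corruptions; handling additional dense quantization noise is where the paper's second-order cone variant comes in.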