
    Compressive Phase Retrieval From Squared Output Measurements Via Semidefinite Programming

    Given a linear system in a real or complex domain, linear regression aims to recover the model parameters from a set of observations. Recent studies in compressive sensing have shown that, under certain conditions, a linear program, namely $\ell_1$-minimization, guarantees recovery of sparse parameter signals even when the system is underdetermined. In this paper, we consider a more challenging problem: the phase of the output measurements from a linear system is omitted. Using a lifting technique, we show that even though the phase information is missing, the sparse signal can be recovered exactly by solving a simple semidefinite program when the sampling rate is sufficiently high, even though the exact solutions to both sparse signal recovery and phase retrieval are combinatorial. The results extend the class of applications to which compressive sensing can be applied to those where only output magnitudes can be observed. We demonstrate the accuracy of the algorithms through theoretical analysis, extensive simulations, and a practical experiment. Comment: parts of the derivations have been submitted to the 16th IFAC Symposium on System Identification, SYSID 2012, and parts to the 51st IEEE Conference on Decision and Control, CDC 2012.
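    To make the lifting step concrete, here is a minimal Python sketch (using the cvxpy modeling library, real-valued case) of how squared, phaseless measurements become linear constraints on the lifted matrix $X = xx^T$. The trace-plus-$\ell_1$ objective and the weight 0.1 are illustrative choices, not necessarily the paper's exact formulation or parameters.

    # Illustrative sketch of sparse phase retrieval via lifting and SDP;
    # problem sizes and the regularization weight are arbitrary.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, m, k = 20, 60, 2
    x = np.zeros(n)
    x[:k] = [1.0, -2.0]                      # a k-sparse test signal
    A = rng.standard_normal((m, n))
    b = (A @ x) ** 2                         # squared (phaseless) measurements

    # Lift: with X = x x^T, each quadratic measurement |a_i^T x|^2 becomes the
    # linear constraint a_i^T X a_i = b_i; the trace term promotes low rank and
    # the entrywise l1 term promotes sparsity.
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0] + [cp.quad_form(A[i], X) == b[i] for i in range(m)]
    cp.Problem(cp.Minimize(cp.trace(X) + 0.1 * cp.sum(cp.abs(X))),
               constraints).solve()

    # Read off x (up to a global sign) from the leading eigenvector of X.
    w, V = np.linalg.eigh(X.value)
    x_hat = np.sqrt(max(w[-1], 0.0)) * V[:, -1]
    print("recovery error (up to sign):",
          min(np.linalg.norm(x_hat - x), np.linalg.norm(x_hat + x)))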

    Compressed Sensing with off-axis frequency-shifting holography

    This work presents an experimental microscopy acquisition scheme that successfully combines compressed sensing (CS) and digital holography in off-axis and frequency-shifting conditions. CS is a recent data-acquisition theory involving signal reconstruction from randomly undersampled measurements, exploiting the fact that most images have some compact structure and redundancy. We propose a genuine CS-based imaging scheme for sparse-gradient images, acquiring a diffraction map of the optical field with holographic microscopy and recovering the signal from as few as 7% of random measurements. We report experimental results demonstrating how CS can lead to an elegant and effective way to reconstruct images, opening the door to new microscopy applications. Comment: vol 35, pp 871-87
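    As a toy illustration of the recovery principle described above (a sparse-gradient image from a small number of random measurements), the following Python sketch with cvxpy reconstructs a piecewise-constant image by total-variation minimization. Random Gaussian projections stand in for the holographic acquisition; the 30% sampling rate and image are arbitrary.

    # Toy sketch of CS recovery of a sparse-gradient image via TV minimization;
    # the measurement operator is a stand-in, not the paper's optical setup.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n = 32
    img = np.zeros((n, n))
    img[8:20, 10:24] = 1.0                    # piecewise-constant test image

    m = int(0.3 * n * n)                      # ~30% of the pixel count
    masks = rng.standard_normal((m, n, n)) / np.sqrt(m)
    y = np.tensordot(masks, img, axes=([1, 2], [0, 1]))   # m random projections

    # Minimize total variation subject to matching the measurements.
    U = cp.Variable((n, n))
    constraints = [cp.sum(cp.multiply(masks[i], U)) == y[i] for i in range(m)]
    cp.Problem(cp.Minimize(cp.tv(U)), constraints).solve()
    print("relative error:",
          np.linalg.norm(U.value - img) / np.linalg.norm(img))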

    Ridgelets and the representation of mutilated Sobolev functions

    We show that ridgelets, a system introduced in [E. J. Candès, Appl. Comput. Harmon. Anal., 6 (1999), pp. 197–218], are optimal for representing smooth multivariate functions that may exhibit linear singularities. For instance, let $\{u \cdot x - b > 0\}$ be an arbitrary half-space and consider the singular function $f(x) = \mathbf{1}_{\{u \cdot x - b > 0\}}\, g(x)$, where $g$ is compactly supported with finite Sobolev norm $\|g\|_{H^s}$, $s > 0$. The ridgelet coefficient sequence of such an object is as sparse as if $f$ had no singularity, allowing optimal partial reconstructions. For instance, the $n$-term approximation obtained by keeping the terms corresponding to the $n$ largest coefficients in the ridgelet series achieves a rate of approximation of order $n^{-s/d}$; the presence of the singularity does not spoil the quality of the ridgelet approximation. This is unlike all systems currently in use, especially Fourier or wavelet representations.
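    In symbols, writing $\{\rho_\lambda\}$ for the ridgelet system and $\Lambda_n$ for the indices of the $n$ largest coefficients, the $n$-term approximation and the claimed rate read as follows (a standard formalization of the statement above, not a verbatim quote from the paper):

    \[
    f_n = \sum_{\lambda \in \Lambda_n} \langle f, \rho_\lambda \rangle\, \rho_\lambda,
    \qquad
    \| f - f_n \|_{L^2} = O\bigl(n^{-s/d}\bigr).
    \]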

    What is...a Curvelet?

    Energized by the success of wavelets, the last two decades saw the rapid development of a new field, computational harmonic analysis, which aims to develop new systems for effectively representing phenomena of scientific interest. The curvelet transform is a recent addition to the family of mathematical tools this community has enthusiastically built up. In short, it is a new multiscale transform with a strong directional character, whose elements are highly anisotropic at fine scales, with effective support shaped according to the parabolic scaling principle $\text{length}^2 \approx \text{width}$.
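    Concretely, at scale $2^{-j}$ a fine-scale curvelet occupies an elongated support of width about $2^{-j}$ and length about $2^{-j/2}$ (the standard curvelet scaling, stated here for illustration), which is exactly the parabolic relation:

    \[
    \text{width} \approx 2^{-j}, \qquad \text{length} \approx 2^{-j/2}
    \quad\Longrightarrow\quad \text{length}^2 \approx \text{width}.
    \]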

    A* Orthogonal Matching Pursuit: Best-First Search for Compressed Sensing Signal Recovery

    Compressed sensing is a developing field aiming at the reconstruction of sparse signals acquired in reduced dimensions, which makes the recovery process underdetermined. Due to sparsity, the required solution is the one with minimum $\ell_0$ norm; however, it is not practical to solve the $\ell_0$-minimization problem. Commonly used techniques include $\ell_1$-minimization, such as Basis Pursuit (BP), and greedy pursuit algorithms such as Orthogonal Matching Pursuit (OMP) and Subspace Pursuit (SP). This manuscript proposes a novel semi-greedy recovery approach, namely A* Orthogonal Matching Pursuit (A*OMP). A*OMP performs an A* search for the sparsest solution on a tree whose paths grow similarly to the OMP algorithm. Paths on the tree are evaluated according to a cost function, which should compensate for different path lengths. For this purpose, three different auxiliary structures are defined, including novel dynamic ones. A*OMP also incorporates pruning techniques which enable practical application of the algorithm. Moreover, the adjustable search parameters provide means for a complexity-accuracy trade-off. We demonstrate the reconstruction ability of the proposed scheme on both synthetically generated data and images using Gaussian and Bernoulli observation matrices, where A*OMP yields lower reconstruction error and higher exact-recovery frequency than BP, OMP, and SP. Results also indicate that the novel dynamic cost functions provide improved results compared to a conventional choice. Comment: accepted for publication in Digital Signal Processing
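    The following Python sketch conveys the core mechanism the abstract describes, best-first (A*) search over a tree of candidate supports grown by OMP-style atom selection, in heavily simplified form. The branching factor, node budget, and the particular length-compensated cost are placeholders, not the paper's tuned auxiliary structures.

    # Simplified illustration of best-first search over OMP-style support
    # trees. Not the authors' A*OMP implementation: cost model and limits
    # below are ad hoc, and duplicate nodes are not detected, for brevity.
    import heapq
    import numpy as np

    def residual(A, y, supp):
        # Least-squares residual of y on the columns indexed by supp.
        if not supp:
            return y
        As = A[:, list(supp)]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        return y - As @ coef

    def a_star_omp(A, y, k, branches=3, max_nodes=200):
        y_norm = np.linalg.norm(y)
        heap = [(y_norm, ())]          # entries: (cost, support tuple)
        supp = ()
        for _ in range(max_nodes):
            if not heap:
                break
            _, supp = heapq.heappop(heap)
            if len(supp) == k:
                return list(supp)      # first completed path is returned
            r = residual(A, y, supp)
            corr = np.abs(A.T @ r)
            for idx in supp:           # do not reselect chosen atoms
                corr[idx] = -1.0
            # Branch on the atoms most correlated with the residual (OMP step).
            for j in np.argsort(corr)[-branches:]:
                child = tuple(sorted(supp + (int(j),)))
                res = np.linalg.norm(residual(A, y, child))
                # Additive length compensation so partial paths of different
                # lengths are comparable (placeholder for the paper's costs).
                cost = res - 0.5 * y_norm * (k - len(child)) / k
                heapq.heappush(heap, (cost, child))
        return list(supp)              # node budget exhausted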

    On Verifiable Sufficient Conditions for Sparse Signal Recovery via $\ell_1$ Minimization

    We propose novel necessary and sufficient conditions for a sensing matrix to be "$s$-good", that is, to allow exact $\ell_1$-recovery of sparse signals with $s$ nonzero entries when no measurement noise is present. We then express the error bounds for imperfect $\ell_1$-recovery (nonzero measurement noise, a nearly $s$-sparse signal, a near-optimal solution of the optimization problem yielding the $\ell_1$-recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-good. We also establish instructive links between our approach and basic concepts of compressed sensing theory, such as the Restricted Isometry and Restricted Eigenvalue properties.
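    Spelled out, the noiseless recovery property behind "$s$-goodness" is the following (a formalization consistent with the description above; the notation is ours):

    \[
    A \text{ is } s\text{-good}
    \quad\Longleftrightarrow\quad
    \text{for every } x \text{ with } \|x\|_0 \le s,\;
    x \text{ is the unique solution of } \min_z \bigl\{ \|z\|_1 : Az = Ax \bigr\}.
    \]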

    A fast and accurate first-order algorithm for compressed sensing

    This paper introduces a new, fast, and accurate algorithm for solving problems in the area of compressed sensing and, more generally, in the area of signal and image reconstruction from indirect measurements. The algorithm is inspired by recent progress in the development of novel first-order methods in convex optimization, most notably Nesterov's smoothing technique. In particular, there is a crucial property that makes these methods extremely efficient for solving compressed sensing problems. Numerical experiments show the promising performance of our method on problems which involve the recovery of signals spanning a large dynamic range.
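    To illustrate the two ingredients named here, smoothing the nonsmooth $\ell_1$ objective and applying an accelerated first-order method, a minimal Python sketch follows. It solves a penalized variant rather than the constrained problem treated in the paper, and the parameters (mu, lam, iteration count) are illustrative, not tuned.

    # Minimal sketch: Nesterov-style smoothing of ||x||_1 plus an accelerated
    # gradient method. Not the paper's algorithm; penalized form only.
    import numpy as np

    def grad_smoothed_l1(x, mu):
        # Gradient of the mu-smoothed (Huber-type) surrogate of ||x||_1.
        return np.clip(x / mu, -1.0, 1.0)

    def accelerated_recover(A, b, lam=50.0, mu=1e-2, iters=2000):
        n = A.shape[1]
        # Lipschitz constant of the gradient of the smooth objective
        # f_mu(x) + (lam/2) * ||Ax - b||^2.
        L = 1.0 / mu + lam * np.linalg.norm(A, 2) ** 2
        x, y, t = np.zeros(n), np.zeros(n), 1.0
        for _ in range(iters):
            g = grad_smoothed_l1(y, mu) + lam * (A.T @ (A @ y - b))
            x_next = y - g / L
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum step
            x, t = x_next, t_next
        return x

    rng = np.random.default_rng(0)
    n, m, k = 256, 100, 8
    x0 = np.zeros(n)
    x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_hat = accelerated_recover(A, A @ x0)
    print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))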

    How well can we estimate a sparse vector?

    The estimation of a sparse vector in the linear model is a fundamental problem in signal processing, statistics, and compressive sensing. This paper establishes a lower bound on the mean-squared error which holds regardless of the sensing/design matrix being used and regardless of the estimation procedure. This lower bound very nearly matches the known upper bound one gets by taking a random projection of the sparse vector followed by an $\ell_1$ estimation procedure such as the Dantzig selector. In this sense, compressive sensing techniques cannot essentially be improved.
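    Schematically, the kind of statement at issue is a minimax bound of the following shape (constants and the exact normalization are omitted; this is an assumption about the general form of such bounds, not the paper's verbatim theorem): for observations $y = Ax + z$ with $z \sim \mathcal{N}(0, \sigma^2 I)$,

    \[
    \inf_{\hat{x}} \; \sup_{\|x\|_0 \le k} \;
    \mathbb{E}\,\| \hat{x}(y) - x \|_2^2
    \;\gtrsim\;
    \sigma^2 \, \frac{k \log(n/k)}{\|A\|_F^2 / n}.
    \]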

    An Introduction To Compressive Sampling [A sensing/sampling paradigm that goes against the common knowledge in data acquisition]

    This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality. Our intent in this article is to overview the basic CS theory that emerged in the works [1]–[3], present the key mathematical ideas underlying this theory, and survey a couple of important results in the field. Our goal is to explain CS as plainly as possible, and so our article is mainly of a tutorial nature. One of the charms of this theory is that it draws from various subdisciplines within the applied mathematical sciences, most notably probability theory. In this review, we have decided to highlight this aspect, especially the fact that randomness can, perhaps surprisingly, lead to very effective sensing mechanisms. We will also discuss significant implications, explain why CS is a concrete protocol for sensing and compressing data simultaneously (hence the name), and conclude our tour by reviewing important applications.
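    Both principles can be quantified through the coherence between the sensing basis $\Phi$ and the sparsity basis $\Psi$. A recovery guarantee of the following shape is central to this line of work (rendered here from memory, with an unspecified constant $C$, so the exact form may differ from the paper's statement): an $S$-sparse signal in the $\Psi$ domain is exactly recovered by $\ell_1$ minimization, with high probability, from $m$ measurements taken uniformly at random in the $\Phi$ domain provided

    \[
    m \;\ge\; C\, \mu^2(\Phi, \Psi)\, S \log n,
    \qquad
    \mu(\Phi, \Psi) = \sqrt{n}\, \max_{1 \le k, j \le n} \bigl|\langle \varphi_k, \psi_j \rangle\bigr| \;\in\; [1, \sqrt{n}].
    \]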