
    Measurement Bounds for Sparse Signal Ensembles via Graphical Models

    In compressive sensing, a small collection of linear projections of a sparse signal contains enough information to permit signal recovery. Distributed compressive sensing (DCS) extends this framework by defining ensemble sparsity models, allowing a correlated ensemble of sparse signals to be jointly recovered from a collection of separately acquired compressive measurements. In this paper, we introduce a framework for modeling sparse signal ensembles that quantifies the intra- and inter-signal dependencies. This framework is based on a novel bipartite graph representation that links the sparse signal coefficients with the measurements obtained for each signal. Using our framework, we provide fundamental bounds on the number of noiseless measurements that each sensor must collect to ensure that the signals are jointly recoverable.
    Comment: 11 pages, 2 figures.
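
    A minimal sketch of the kind of ensemble described above, assuming the common/innovation joint sparsity model familiar from the DCS literature (all names and parameters are illustrative, not the paper's notation): each sensor acquires its own compressive measurements separately, while the signals share a common sparse component.

    import numpy as np

    # Generate a correlated sparse ensemble: a shared common component plus a
    # per-signal innovation, with each sensor measuring its signal independently.
    rng = np.random.default_rng(0)
    n, num_sensors, m = 256, 3, 40        # ambient dimension, sensors, measurements per sensor
    k_common, k_innov = 5, 3              # common and innovation sparsity levels

    common_support = rng.choice(n, k_common, replace=False)
    common_vals = rng.standard_normal(k_common)
    ensemble = []
    for j in range(num_sensors):
        x = np.zeros(n)
        x[common_support] = common_vals                    # shared across all sensors
        innov = rng.choice(n, k_innov, replace=False)
        x[innov] += rng.standard_normal(k_innov)           # sensor-specific innovation
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # separate measurement matrix
        ensemble.append((x, Phi, Phi @ x))                 # (signal, matrix, measurements)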

    Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property

    Compressed Sensing aims to capture attributes of $k$-sparse signals using very few measurements. In the standard Compressed Sensing paradigm, the $m \times n$ measurement matrix $A$ is required to act as a near isometry on the set of all $k$-sparse signals (Restricted Isometry Property or RIP). Although it is known that certain probabilistic processes generate $m \times n$ matrices that satisfy RIP with high probability, there is no practical algorithm for verifying whether a given sensing matrix $A$ has this property, which is crucial for the feasibility of the standard recovery algorithms. In contrast, this paper provides simple criteria that guarantee that a deterministic sensing matrix satisfying these criteria acts as a near isometry on an overwhelming majority of $k$-sparse signals; in particular, most such signals have a unique representation in the measurement domain. Probability still plays a critical role, but it enters the signal model rather than the construction of the sensing matrix. We require the columns of the sensing matrix to form a group under pointwise multiplication. The construction allows recovery methods whose expected performance is sublinear in $n$ and only quadratic in $m$; the focus on expected performance is more typical of mainstream signal processing than the worst-case analysis that prevails in standard Compressed Sensing. Our framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes.
    Comment: 16 pages, 2 figures, to appear in IEEE Journal of Selected Topics in Signal Processing, special issue on Compressed Sensing.
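
    The group-under-pointwise-multiplication requirement is concrete for the discrete chirp family; below is a minimal sketch assuming the standard chirp construction over a prime length, which is just one instance of the paper's broader framework.

    import numpy as np

    # Build an m x m^2 discrete chirp matrix: column (a, b) has entries
    # exp(2*pi*i*(a*t^2 + b*t)/m). Pointwise products of columns are again
    # columns (rates and frequencies add mod m), so the columns form a group.
    m = 13                                    # odd prime length
    t = np.arange(m)
    cols = [np.exp(2j * np.pi * ((a * t**2 + b * t) % m) / m)
            for a in range(m) for b in range(m)]
    A = np.stack(cols, axis=1) / np.sqrt(m)   # unit-norm columns

    # Closure check: column (0,3) times column (3,11) equals column (3,1),
    # since (0+3 mod 13, 3+11 mod 13) = (3, 1).
    lhs = (A[:, 0 * m + 3] * A[:, 3 * m + 11]) * np.sqrt(m)
    assert np.allclose(lhs, A[:, 3 * m + 1])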

    Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit

    This paper demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with $m$ nonzero entries in dimension $d$ given $O(m \ln d)$ random linear measurements of that signal. This is a massive improvement over previous results, which require $O(m^2)$ measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
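
    A minimal textbook implementation of OMP, assuming the sparsity level is known and used as the stopping rule (variable names and the constant in the measurement count are illustrative, not the paper's):

    import numpy as np

    def omp(A, y, sparsity):
        """Orthogonal Matching Pursuit: greedily add the column most correlated
        with the residual, then refit by least squares on the chosen support."""
        support, x_hat = [], np.zeros(A.shape[1])
        residual = y.copy()
        for _ in range(sparsity):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef   # orthogonal to selected columns
        x_hat[support] = coef
        return x_hat

    # O(m ln d) Gaussian measurements typically suffice for exact recovery:
    rng = np.random.default_rng(0)
    d, m = 1024, 10
    N = int(4 * m * np.log(d))                    # the factor 4 is illustrative
    A = rng.standard_normal((N, d)) / np.sqrt(N)
    x = np.zeros(d)
    x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
    print(np.allclose(omp(A, A @ x, m), x, atol=1e-8))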

    Sharp Time–Data Tradeoffs for Linear Inverse Problems

    In this paper we characterize sharp time-data tradeoffs for optimization problems used for solving linear inverse problems. We focus on the minimization of a least-squares objective subject to a constraint defined as the sub-level set of a penalty function. We present a unified convergence analysis of the gradient projection algorithm applied to such problems. We sharply characterize the convergence rate associated with a wide variety of random measurement ensembles in terms of the number of measurements and structural complexity of the signal with respect to the chosen penalty function. The results apply to both convex and nonconvex constraints, demonstrating that a linear convergence rate is attainable even though the least squares objective is not strongly convex in these settings. When specialized to Gaussian measurements our results show that such linear convergence occurs when the number of measurements is merely 4 times the minimal number required to recover the desired signal at all (a.k.a. the phase transition). We also achieve a slower but geometric rate of convergence precisely above the phase transition point. Extensive numerical results suggest that the derived rates exactly match the empirical performance.
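
    A minimal sketch of the gradient projection iteration analyzed above, specialized to an $\ell_1$ sub-level-set constraint (the projection is the standard Euclidean projection onto the $\ell_1$-ball; the step size rule and names are illustrative assumptions, not the paper's exact setup):

    import numpy as np

    def project_l1_ball(v, radius):
        """Euclidean projection onto {x : ||x||_1 <= radius}."""
        if np.abs(v).sum() <= radius:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]
        cssv = np.cumsum(u) - radius
        rho = np.nonzero(u * np.arange(1, v.size + 1) > cssv)[0][-1]
        theta = cssv[rho] / (rho + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def gradient_projection(A, y, radius, iters=500):
        """Minimize ||y - A x||^2 subject to ||x||_1 <= radius by
        projected gradient descent."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / largest squared singular value
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = project_l1_ball(x - step * (A.T @ (A @ x - y)), radius)
        return x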

    Limits on Sparse Data Acquisition: RIC Analysis of Finite Gaussian Matrices

    One of the key issues in the acquisition of sparse data by means of compressed sensing (CS) is the design of the measurement matrix. Gaussian matrices have been proven to be information-theoretically optimal in terms of minimizing the required number of measurements for sparse recovery. In this paper we provide a new approach for the analysis of the restricted isometry constant (RIC) of finite dimensional Gaussian measurement matrices. The proposed method relies on the exact distributions of the extreme eigenvalues for Wishart matrices. First, we derive the probability that the restricted isometry property is satisfied for a given sufficient recovery condition on the RIC, and propose a probabilistic framework to study both the symmetric and asymmetric RICs. Then, we analyze the recovery of compressible signals in noise through the statistical characterization of stability and robustness. The presented framework determines limits on various sparse recovery algorithms for finite size problems. In particular, it provides a tight lower bound on the maximum sparsity order of the acquired data allowing signal recovery with a given target probability. Also, we derive simple approximations for the RICs based on the Tracy-Widom distribution.
    Comment: 11 pages, 6 figures, accepted for publication in IEEE Transactions on Information Theory.
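
    A small Monte Carlo companion to this analysis: sampling $k$-column submatrices of a finite Gaussian matrix and recording the extreme eigenvalues of the resulting Wishart-type Gram matrices gives an empirical lower bound on the RIC. This sketch only approximates by sampling what the paper characterizes exactly through the Wishart and Tracy-Widom distributions; all parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n, k, trials = 64, 256, 8, 2000
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # normalized Gaussian matrix

    # The true RIC maximizes over all k-subsets; random sampling only lower-bounds it.
    delta = 0.0
    for _ in range(trials):
        S = rng.choice(n, k, replace=False)
        eigs = np.linalg.eigvalsh(A[:, S].T @ A[:, S])   # ascending eigenvalues
        delta = max(delta, 1.0 - eigs[0], eigs[-1] - 1.0)
    print(f"empirical RIC lower bound for k={k}: {delta:.3f}")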

    Simultaneously Structured Models with Application to Sparse and Low-rank Matrices

    The topic of recovery of a structured model given a small number of linear observations has been well-studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. Often norms that promote each individual structure are known, and allow for recovery using an order-wise optimal number of measurements (e.g., $\ell_1$ norm for sparsity, nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with these norms, then we can do no better, order-wise, than an algorithm that exploits only one of the present structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e., not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a nonconvex formulation of the problem can recover the model from very few measurements, which is on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the $\ell_1$ and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and nonconvex recovery problems in this case. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give performance bounds for problems such as sparse phase retrieval and low-rank tensor completion.
    Comment: 38 pages, 9 figures.
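
    To make the combined-norm program concrete, here is a heuristic proximal-gradient sketch for the $\ell_1$-plus-nuclear-norm objective. Applying the two proximal maps in sequence is an approximation, not the exact prox of the sum, and all parameters and names are illustrative rather than the paper's formulation.

    import numpy as np

    def soft_threshold(Z, tau):
        """Entrywise soft-thresholding: prox of tau * ||.||_1."""
        return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

    def svd_threshold(Z, tau):
        """Singular value thresholding: prox of tau * nuclear norm."""
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def sparse_lowrank_recover(A, y, shape, lam1, lam2, step, iters=300):
        """Heuristic proximal gradient for
        min ||A vec(X) - y||^2 + lam1*||X||_1 + lam2*||X||_*,
        applying the l1 and nuclear-norm proxes in sequence each iteration."""
        X = np.zeros(shape)
        for _ in range(iters):
            grad = (A.T @ (A @ X.ravel() - y)).reshape(shape)
            X = svd_threshold(soft_threshold(X - step * grad, step * lam1),
                              step * lam2)
        return X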