
    Weighted ℓ1-Minimization for Sparse Recovery under Arbitrary Prior Information

    Weighted ℓ1-minimization has been studied as a technique for the reconstruction of a sparse signal from compressively sampled measurements when prior information about the signal, in the form of a support estimate, is available. In this work, we study the recovery conditions and the associated recovery guarantees of weighted ℓ1-minimization when arbitrarily many distinct weights are permitted. For example, such a setup might be used when one has multiple estimates for the support of a signal, and these estimates have varying degrees of accuracy. Our analysis yields an extension to existing works that assume only a single constant weight is used. We include numerical experiments, with both synthetic signals and real video data, that demonstrate the benefits of allowing non-uniform weights in the reconstruction procedure.
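    To make the weighting scheme concrete, below is a minimal, hedged sketch (in Python, using the cvxpy modeling package) of weighted ℓ1-minimization with several distinct weights derived from support estimates of differing reliability. The problem sizes, the estimate sets and the weight values 0.1/0.5/1.0 are illustrative assumptions, not the paper's settings or code.

```python
# Minimal sketch (not the authors' implementation): weighted l1-minimization
# with several distinct weights derived from support estimates of differing
# reliability. Sizes, estimate sets and weight values are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 200, 80, 10

# Ground-truth k-sparse signal and Gaussian measurements y = A x0.
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0

# Two support estimates of different accuracy (built here from the truth
# plus a few wrong indices, purely for illustration).
reliable_est = list(support[:7]) + list(rng.choice(n, size=2, replace=False))
rough_est = list(support[7:]) + list(rng.choice(n, size=8, replace=False))

# Smaller weight = weaker penalty = stronger belief that the entry is nonzero.
w = np.ones(n)
w[rough_est] = 0.5
w[reliable_est] = 0.1

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == y])
prob.solve()
print("relative error:", np.linalg.norm(x.value - x0) / np.linalg.norm(x0))
```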

    Universally Elevating the Phase Transition Performance of Compressed Sensing: Non-Isometric Matrices are Not Necessarily Bad Matrices

    In compressed sensing problems, $\ell_1$ minimization, or Basis Pursuit, was known to have the best provable phase transition performance of recoverable sparsity among polynomial-time algorithms. It is of great theoretical and practical interest to find alternative polynomial-time algorithms which perform better than $\ell_1$ minimization. \cite{Icassp reweighted l_1}, \cite{Isit reweighted l_1}, \cite{XuScaingLaw} and \cite{iterativereweightedjournal} have shown that a two-stage re-weighted $\ell_1$ minimization algorithm can boost the phase transition performance for signals whose nonzero elements follow an amplitude probability density function (pdf) $f(\cdot)$ whose $t$-th derivative $f^{t}(0) \neq 0$ for some integer $t \geq 0$. However, for signals whose nonzero elements are strictly suspended from zero in distribution (for example, constant-modulus signals taking only the values $+d$ or $-d$ for some nonzero real number $d$), no polynomial-time signal recovery algorithms were known to provide better phase transition performance than plain $\ell_1$ minimization, especially for dense sensing matrices. In this paper, we show that a polynomial-time algorithm can universally elevate the phase-transition performance of compressed sensing, compared with $\ell_1$ minimization, even for signals with constant-modulus nonzero elements. Contrary to the conventional wisdom that compressed sensing matrices should be isometric, we show that non-isometric matrices are not necessarily bad sensing matrices. We also provide a framework for recovering sparse signals when sensing matrices are not isometric. Comment: 6 pages, 2 figures. arXiv admin note: substantial text overlap with arXiv:1010.2236, arXiv:1004.040

    Weighted $\ell_1$ Minimization for Sparse Recovery with Prior Information

    In this paper we study the compressed sensing problem of recovering a sparse signal from a system of underdetermined linear equations when we have prior information about the probability of each entry of the unknown signal being nonzero. In particular, we focus on a model where the entries of the unknown vector fall into two sets, each with a different probability of being nonzero. We propose a weighted $\ell_1$ minimization recovery algorithm and analyze its performance using a Grassmann angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the sizes of the two sets, the probabilities of being nonzero) so that an iid random Gaussian measurement matrix along with weighted $\ell_1$ minimization recovers almost all such sparse signals with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We also provide simulations to demonstrate the advantages of the method over conventional $\ell_1$ optimization. Comment: 5 pages, submitted to ISIT 200
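    As a hedged illustration of this two-set model, the sketch below poses weighted ℓ1-minimization as an explicit linear program and solves it with scipy.optimize.linprog. The set sizes, the nonzero probabilities and the two weight values are made-up numbers for the sketch; the paper derives optimal weights, which this toy does not attempt.

```python
# Sketch only: weighted l1-minimization for the two-set prior model, written
# as the standard LP  min w^T t  s.t.  -t <= x <= t,  A x = y  over z = [x, t].
# Sizes, probabilities and weights are illustrative, not the paper's optimal choices.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 120, 60
set1 = np.arange(0, 60)              # entries nonzero with higher probability
set2 = np.arange(60, 120)            # entries nonzero with lower probability
p1, p2 = 0.25, 0.02                  # assumed nonzero probabilities per set
mask = np.zeros(n, dtype=bool)
mask[set1] = rng.random(set1.size) < p1
mask[set2] = rng.random(set2.size) < p2
x0 = np.where(mask, rng.standard_normal(n), 0.0)

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0

w = np.full(n, 1.0)                   # weight on the less likely set
w[set1] = 0.3                         # smaller weight on the likelier set
c = np.concatenate([np.zeros(n), w])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])  # encodes  x - t <= 0  and  -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print("relative error:", np.linalg.norm(x_hat - x0) / max(np.linalg.norm(x0), 1e-12))
```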

    A sharp recovery condition for sparse signals with partial support information via orthogonal matching pursuit

    This paper considers the exact recovery of $k$-sparse signals in the noiseless setting and support recovery in the noisy case when some prior information on the support of the signals is available. This prior support consists of two parts: one part is a subset of the true support and the other part lies outside the true support. For $k$-sparse signals $\mathbf{x}$ with a prior support composed of $g$ true indices and $b$ wrong indices, we show that if the restricted isometry constant (RIC) $\delta_{k+b+1}$ of the sensing matrix $\mathbf{A}$ satisfies $\delta_{k+b+1} < \frac{1}{\sqrt{k-g+1}}$, then the orthogonal matching pursuit (OMP) algorithm can perfectly recover the signals $\mathbf{x}$ from $\mathbf{y}=\mathbf{Ax}$ in $k-g$ iterations. Moreover, we show that the above sufficient condition on the RIC is sharp. In the noisy case, we achieve exact recovery of the remainder support (the part of the true support outside the prior support) for $k$-sparse signals $\mathbf{x}$ from $\mathbf{y}=\mathbf{Ax}+\mathbf{v}$ under appropriate conditions. For remainder support recovery, we also obtain a necessary condition based on the minimum magnitude of the partial nonzero elements of the signals $\mathbf{x}$.
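    The sketch below shows a generic, textbook-style OMP seeded with a partly wrong prior support and run for k - g further iterations, matching the iteration count discussed in the abstract. It is a plain numpy toy under these assumptions, not the authors' implementation, and it does not verify the RIC condition.

```python
# Hedged sketch: OMP started from a prior support containing g correct and
# b wrong indices, then run for k - g greedy iterations. Plain numpy toy,
# not the paper's code; the RIC condition is not checked here.
import numpy as np

def omp_with_prior(A, y, prior_support, n_iters):
    """Greedy recovery of x from y = A x, seeded with a (possibly partly wrong) support."""
    n = A.shape[1]
    S = list(dict.fromkeys(prior_support))          # keep order, drop duplicates
    for _ in range(n_iters):
        x_hat = np.zeros(n)
        x_hat[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
        r = y - A @ x_hat                           # residual for the current support
        j = int(np.argmax(np.abs(A.T @ r)))         # column most correlated with residual
        if j not in S:
            S.append(j)
    x_hat = np.zeros(n)
    x_hat[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
    return x_hat

# Toy run: k-sparse signal, prior support with g true and b wrong indices.
rng = np.random.default_rng(2)
n, m, k, g, b = 100, 50, 10, 6, 3
support = rng.choice(n, size=k, replace=False)
x0 = np.zeros(n)
x0[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                      # unit-norm columns
y = A @ x0
wrong = rng.choice(np.setdiff1d(np.arange(n), support), size=b, replace=False)
prior = list(support[:g]) + list(wrong)
x_hat = omp_with_prior(A, y, prior, n_iters=k - g)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```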

    Improved Sparse Recovery Thresholds with Two-Step Reweighted $\ell_1$ Minimization

    It is well known that $\ell_1$ minimization can be used to recover sufficiently sparse unknown signals from compressed linear measurements. In fact, exact thresholds on the sparsity, as a function of the ratio between the system dimensions, so that with high probability almost all sparse signals can be recovered from iid Gaussian measurements, have been computed and are referred to as "weak thresholds" \cite{D}. In this paper, we introduce a reweighted $\ell_1$ recovery algorithm composed of two steps: a standard $\ell_1$ minimization step to identify a set of entries where the signal is likely to reside, and a weighted $\ell_1$ minimization step where entries outside this set are penalized. For signals where the non-sparse component has iid Gaussian entries, we prove a "strict" improvement in the weak recovery threshold. Simulations suggest that the improvement can be quite impressive: over 20% in the example we consider. Comment: accepted in ISIT 201
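    The two steps described above can be sketched in a few lines; the version below uses cvxpy for both ℓ1 passes, with an arbitrary threshold and penalty weight omega standing in for the paper's tuned choices. It illustrates the idea only.

```python
# Hedged sketch of the two-step procedure: a plain l1 pass, a threshold to
# guess where the signal resides, then a weighted pass penalizing entries
# outside that set. The threshold and omega are illustrative assumptions.
import numpy as np
import cvxpy as cp

def weighted_l1(A, y, w):
    x = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == y]).solve()
    return x.value

def two_step_reweighted_l1(A, y, threshold=0.05, omega=10.0):
    n = A.shape[1]
    x1 = weighted_l1(A, y, np.ones(n))       # step 1: standard l1 minimization
    likely = np.abs(x1) > threshold          # entries where the signal likely resides
    w = np.where(likely, 1.0, omega)         # step 2: penalize the complement
    return weighted_l1(A, y, w)

# Toy run on a sparse signal plus a small iid Gaussian (non-sparse) component.
rng = np.random.default_rng(3)
n, m, k = 150, 75, 20
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x0 += 0.01 * rng.standard_normal(n)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0
x_plain = weighted_l1(A, y, np.ones(n))
x_two = two_step_reweighted_l1(A, y)
err = lambda x: np.linalg.norm(x - x0) / np.linalg.norm(x0)
print("plain l1 error:", err(x_plain), " two-step error:", err(x_two))
```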

    Recursive Recovery of Sparse Signal Sequences from Compressive Measurements: A Review

    In this article, we review the literature on design and analysis of recursive algorithms for reconstructing a time sequence of sparse signals from compressive measurements. The signals are assumed to be sparse in some transform domain or in some dictionary. Their sparsity patterns can change with time, although, in many practical applications, the changes are gradual. An important class of applications where this problem occurs is dynamic projection imaging, e.g., dynamic magnetic resonance imaging (MRI) for real-time medical applications such as interventional radiology, or dynamic computed tomography.Comment: To appear in IEEE Trans. Signal Processin

    Efficient Spectrum Availability Information Recovery for Wideband DSA Networks: A Weighted Compressive Sampling Approach

    Compressive sampling has great potential for making wideband spectrum sensing possible at sub-Nyquist sampling rates. As a result, there have recently been research efforts that leverage compressive sampling to enable efficient wideband spectrum sensing. These efforts consider homogeneous wideband spectrum, where all bands are assumed to have similar PU traffic characteristics. In practice, however, wideband spectrum is not homogeneous, in that different spectrum bands could present different PU occupancy patterns. In fact, the nature of spectrum assignment, in which applications of similar types are often assigned bands within the same block, dictates that wideband spectrum is indeed heterogeneous. In this paper, we consider heterogeneous wideband spectrum and exploit its inherent, block-like structure to design efficient compressive spectrum sensing techniques that are well suited for such heterogeneous spectrum. We propose a weighted $\ell_1$-minimization sensing information recovery algorithm that achieves more stable recovery than existing approaches while accounting for the variations of spectrum occupancy across both the time and frequency dimensions. In addition, we show that our proposed algorithm requires fewer sensing measurements than state-of-the-art approaches.
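    As a small, hedged illustration of how block-like prior occupancy information could be mapped to recovery weights, the numpy snippet below builds one weight per spectrum channel from assumed per-band PU occupancy estimates: bands believed to be busy get small weights (weak penalty) and idle bands get large weights. The band layout and the numbers in p_band are invented for the sketch, and the resulting weight vector would be handed to a weighted ℓ1 solver such as the sketches earlier in this list.

```python
# Illustrative only: per-channel weights from assumed per-band occupancy
# estimates for a heterogeneous wideband spectrum. Numbers are made up.
import numpy as np

channels_per_band = 32
p_band = np.array([0.7, 0.6, 0.1, 0.05, 0.4, 0.02])   # assumed PU occupancy per band
eps = 0.05                                             # keeps weights finite for idle bands
weights = np.repeat(1.0 / (p_band + eps), channels_per_band)
print(weights.shape)   # one weight per channel, constant within each band
```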

    Sliced-Inverse-Regression-Aided Rotated Compressive Sensing Method for Uncertainty Quantification

    Compressive-sensing-based uncertainty quantification methods have become a powerful tool for problems with limited data. In this work, we use the sliced inverse regression (SIR) method to provide an initial guess for the alternating direction method, which is used to enhance the sparsity of the Hermite polynomial expansion of the stochastic quantity of interest. The sparsity improvement increases both the efficiency and accuracy of the compressive-sensing-based uncertainty quantification method. We demonstrate that the initial guess from SIR is more suitable for cases when the available data are limited (Algorithm 4). We also propose another algorithm (Algorithm 5) that performs dimension reduction first with SIR and then constructs a Hermite polynomial expansion of the reduced model. This method affords the ability to approximate the statistics accurately with even less available data. Both methods are non-intrusive and require no a priori information on the sparsity of the system. The effectiveness of these two methods (Algorithms 4 and 5) is demonstrated using problems with up to 500 random dimensions. Comment: In section 4, numerical examples 3-5, replaced the mean of the error with the quantiles and mean of the error. Added section 4.6 to compare different method

    Compressed sensing for longitudinal MRI: An adaptive-weighted approach

    Purpose: Repeated brain MRI scans are performed in many clinical scenarios, such as follow-up of patients with tumors and therapy response assessment. In this paper, the authors show an approach to utilize former scans of the patient for the acceleration of repeated MRI scans. Methods: The proposed approach utilizes the possible similarity of the repeated scans in longitudinal MRI studies. Since similarity is not guaranteed, sampling and reconstruction are adjusted during acquisition to match the actual similarity between the scans. The baseline MR scan is utilized both in the sampling stage, via adaptive sampling, and in the reconstruction stage, with weighted reconstruction. In adaptive sampling, k-space sampling locations are optimized during acquisition. Weighted reconstruction uses the locations of the nonzero coefficients in the sparse domains as a prior in the recovery process. The approach was tested on 2D and 3D MRI scans of patients with brain tumors. Results: The longitudinal adaptive CS MRI (LACS-MRI) scheme provides reconstruction quality that outperforms other CS-based approaches for rapid MRI. Examples are shown on patients with brain tumors and demonstrate improved spatial resolution. Compared with data sampled at the Nyquist rate, LACS-MRI exhibits a signal-to-error ratio (SER) of 24.8 dB with an undersampling factor of 16.6 in 3D MRI. Conclusions: The authors have presented a novel method for image reconstruction that utilizes the similarity of scans in longitudinal MRI studies, where possible. The proposed approach can significantly reduce scanning time in many applications that involve disease follow-up and monitoring of longitudinal changes in brain MRI.

    Enhancing Sparsity of Hermite Polynomial Expansions by Iterative Rotations

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for the random variables through linear mappings such that the representation of the quantity of interest is sparser in the basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive-sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications to solving stochastic partial differential equations and high-dimensional ($\mathcal{O}(100)$) problems.