
    NESTA: A Fast and Accurate First-order Method for Sparse Recovery

    Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov's smoothing technique, this paper introduces a fast and accurate algorithm for solving common recovery problems in signal processing. In the spirit of Nesterov's work, one of the key ideas of this algorithm is a subtle averaging of sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. This paper demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems as 1) it is computationally efficient, 2) it is accurate and returns solutions with several correct digits, 3) it is flexible and amenable to many kinds of reconstruction problems, and 4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization and convex programs seeking to minimize the $\ell_1$ norm of Wx under constraints, in which W is not diagonal.
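    The smoothing at the heart of this approach is easy to make concrete. The sketch below is an illustrative simplification, not the authors' implementation: it shows Nesterov's smooth approximation of the $\ell_1$ norm, a Huber-type function with smoothing parameter mu, together with its gradient. The full NESTA algorithm descends such a smoothed objective while maintaining the averaged auxiliary sequences and applying continuation on mu.

```python
import numpy as np

def smoothed_l1(x, mu):
    """Nesterov (Huber-type) smoothing of the l1 norm with parameter mu:
    quadratic near zero, linear with slope 1 for |x_i| >= mu."""
    small = np.abs(x) < mu
    return np.sum(np.where(small, x**2 / (2 * mu), np.abs(x) - mu / 2))

def smoothed_l1_grad(x, mu):
    """Gradient of the smoothed l1 norm: x_i / mu, clipped to [-1, 1]."""
    return np.clip(x / mu, -1.0, 1.0)
```

As mu shrinks, the approximation tightens toward the true $\ell_1$ norm, but the gradient's Lipschitz constant (1/mu) grows, which is why NESTA-style continuation starts with a large mu and decreases it.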

    Joint Sparse Recovery Method for Compressed Sensing with Structured Dictionary Mismatches

    In traditional compressed sensing theory, the dictionary matrix is given a priori, whereas in real applications it suffers from random noise and fluctuations. In this paper we consider a signal model in which each column of the dictionary matrix is affected by structured noise. This formulation is common in direction-of-arrival (DOA) estimation of off-grid targets, encountered in both radar systems and array processing. We propose to use joint sparse signal recovery to solve the compressed sensing problem with structured dictionary mismatches, and we also give an analytical performance bound on this joint sparse recovery. We show that, under mild conditions, the reconstruction error of the original sparse signal is bounded by both the sparsity and the noise level in the measurement model. Moreover, we implement fast first-order algorithms to speed up the computation. Numerical examples demonstrate the good performance of the proposed algorithm and show that the joint sparse recovery method yields a better reconstruction result than existing methods. By implementing the joint sparse recovery method, the accuracy and efficiency of DOA estimation are improved in both passive and active sensing cases. Comment: Submitted on Aug 27, 2013; revised on Feb 16, 2014; accepted on Jul 21, 2014.

    Sparse Low Rank Approximation of Potential Energy Surfaces with Applications in Estimation of Anharmonic Zero Point Energies and Frequencies

    We propose a method that exploits a sparse representation of potential energy surfaces (PES) on a polynomial basis set selected by compressed sensing. The method is useful for studies involving large numbers of PES evaluations, such as the search for local minima, transition states, or integration. We apply this method to estimate zero-point energies and frequencies of molecules using a three-step approach. In the first step, we interpret the PES as a sparse tensor on a polynomial basis and determine its entries by a compressed-sensing-based algorithm using only a few PES evaluations. Then, we implement a rank reduction strategy to compress this tensor into a suitable low-rank canonical tensor format using standard tensor compression tools. This allows representing a high-dimensional PES as a small sum of products of one-dimensional functions. Finally, a low-dimensional Gauss-Hermite quadrature rule is used to integrate the product of the sparse canonical low-rank representation of the PES and the Green's function in the second-order diagrammatic vibrational many-body Green's function theory (XVH2) for estimation of zero-point energies and frequencies. Numerical tests on the molecules considered in this work suggest a more efficient scaling of computational cost with molecular size as compared to other methods.

    A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods

    We study the problem of estimating the best B-term Fourier representation for a given frequency-sparse signal (i.e., vector) $\textbf{A}$ of length $N \gg B$. More explicitly, we investigate how to deterministically identify B of the largest magnitude frequencies of $\hat{\textbf{A}}$, and estimate their coefficients, in polynomial $(B, \log N)$ time. Randomized sub-linear time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem. However, for failure intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) \cite{CMDetCS3,CMDetCS1,CMDetCS2} in order to develop the first known deterministic sub-linear time sparse Fourier Transform algorithm suitable for failure intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM's algebraic compressibility results while simultaneously maintaining their results concerning exponential decay. Comment: 16 pages total, 10 in paper, 6 in appendix.
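    For intuition, the object being approximated, the best B-term Fourier representation, can be computed directly with a full FFT in superlinear time; the paper's contribution is obtaining it deterministically in sublinear time. A minimal reference computation (not the paper's algorithm):

```python
import numpy as np

def best_b_term(A, B):
    """Best B-term Fourier representation of signal A: keep the B
    largest-magnitude DFT coefficients, zero the rest. Computed here
    naively via a full FFT, i.e. in O(N log N) rather than sublinear time."""
    Ahat = np.fft.fft(A) / len(A)          # normalized Fourier coefficients
    idx = np.argsort(np.abs(Ahat))[-B:]    # indices of the B largest magnitudes
    R = np.zeros_like(Ahat)
    R[idx] = Ahat[idx]
    return R
```

For an exactly B-sparse spectrum, this recovers the signal's frequencies and coefficients exactly; the sublinear algorithms in the paper aim to match this output while reading far fewer than N samples.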

    Dynamic mode decomposition for compressive system identification

    Dynamic mode decomposition (DMD) has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data, benefiting from a strong connection to nonlinear dynamical systems via the Koopman operator. In this work, we integrate and unify two recent innovations that extend DMD to systems with actuation [Proctor et al., 2016] and systems with heavily subsampled measurements [Brunton et al., 2015]. When combined, these methods yield a novel framework for compressive system identification [code is publicly available at: https://github.com/zhbai/cDMDc]. It is possible to identify a low-order model from limited input-output data and reconstruct the associated full-state dynamic modes with compressed sensing, adding interpretability to the state of the reduced-order model. Moreover, when full-state data is available, it is possible to dramatically accelerate downstream computations by first compressing the data. We demonstrate this unified framework on two model systems, investigating the effects of sensor noise, different types of measurements (e.g., point sensors, Gaussian random projections, etc.), compression ratios, and different choices of actuation (e.g., localized, broadband, etc.). In the first example, we explore this architecture on a test system with known low-rank dynamics and an artificially inflated state dimension. The second example consists of a real-world engineering application given by the fluid flow past a pitching airfoil at low Reynolds number. This example provides a challenging and realistic test case for the proposed method, and results demonstrate that the dominant coherent structures are well characterized despite actuation and heavily subsampled data. Comment: 19 pages, 11 figures.
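    The exact DMD step that this framework builds on can be sketched in a few lines (a textbook formulation, not the authors' compressive variant): fit a rank-r linear operator relating successive snapshot matrices, then recover its eigenvalues and the full-state modes.

```python
import numpy as np

def dmd(X, Xprime, r):
    """Exact DMD: given snapshot matrices with Xprime ~ A @ X, fit a
    rank-r reduced operator via truncated SVD of X and return its
    eigenvalues (continuous/discrete dynamics) and DMD modes."""
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r]
    # Reduced r x r operator: U* Xprime V S^{-1}
    Atilde = (U.conj().T @ Xprime @ Vh.conj().T) / S
    eigvals, W = np.linalg.eig(Atilde)
    # Full-state DMD modes
    Phi = Xprime @ Vh.conj().T @ np.diag(1.0 / S) @ W
    return eigvals, Phi
```

The compressive extensions in the paper replace X and Xprime with subsampled or projected measurements and recover Phi afterwards via sparse reconstruction.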

    (k,q)-Compressed Sensing for dMRI with Joint Spatial-Angular Sparsity Prior

    Advanced diffusion magnetic resonance imaging (dMRI) techniques, like diffusion spectrum imaging (DSI) and high angular resolution diffusion imaging (HARDI), remain underutilized compared to diffusion tensor imaging because the scan times needed to produce accurate estimations of fiber orientation are significantly longer. To accelerate DSI and HARDI, recent methods from compressed sensing (CS) exploit a sparse underlying representation of the data in the spatial and angular domains to undersample in the respective k- and q-spaces. State-of-the-art frameworks, however, impose sparsity in the spatial and angular domains separately and involve the sum of the corresponding sparse regularizers. In contrast, we propose a unified (k,q)-CS formulation which imposes sparsity jointly in the spatial-angular domain to further increase sparsity of dMRI signals and reduce the required subsampling rate. To efficiently solve this large-scale global reconstruction problem, we introduce a novel adaptation of the FISTA algorithm that exploits dictionary separability. We show on phantom and real HARDI data that our approach achieves significantly more accurate signal reconstructions than the state of the art while sampling only 2-4% of the (k,q)-space, allowing for the potential of new levels of dMRI acceleration. Comment: To be published in the 2017 Computational Diffusion MRI Workshop of MICCAI.
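    As background, a minimal FISTA for a generic $\ell_1$-regularized least-squares problem looks like the following sketch; the paper's adaptation additionally exploits the separable (Kronecker) structure of the spatial-angular dictionary, which this plain version omits.

```python
import numpy as np

def fista(A, y, lam, iters=200):
    """Plain FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    gradient step, soft-thresholding, and Nesterov momentum."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(iters):
        g = z - (A.T @ (A @ z - y)) / L        # gradient step at momentum point
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum update
        x, t = x_new, t_new
    return x
```

FISTA's O(1/k^2) objective decay over plain ISTA's O(1/k) is what makes it attractive for the large-scale (k,q)-space reconstructions described above.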

    Compressed sensing reconstruction using Expectation Propagation

    Many interesting problems in fields ranging from telecommunications to computational biology can be formalized in terms of large underdetermined systems of linear equations with additional constraints or regularizers. One of the most studied, the Compressed Sensing problem (CS), consists in finding the solution with the smallest number of non-zero components of a given system of linear equations $\boldsymbol{y} = \mathbf{F}\boldsymbol{w}$ for known measurement vector $\boldsymbol{y}$ and sensing matrix $\mathbf{F}$. Here, we address the compressed sensing problem within a Bayesian inference framework where the sparsity constraint is remapped into a singular prior distribution (called Spike-and-Slab or Bernoulli-Gauss). Solution to the problem is attempted through the computation of marginal distributions via Expectation Propagation (EP), an iterative computational scheme originally developed in Statistical Physics. We show that this strategy is comparatively more accurate than the alternatives in solving instances of CS generated from statistically correlated measurement matrices. For computational strategies based on the Bayesian framework, such as variants of Belief Propagation, this is to be expected, as they implicitly rely on the hypothesis of statistical independence among the entries of the sensing matrix. Perhaps surprisingly, the method also uniformly outperforms all the other state-of-the-art methods in our tests. Comment: 20 pages, 6 figures.
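    The spike-and-slab (Bernoulli-Gauss) prior mentioned above is easy to make concrete: each entry is exactly zero with probability 1 - rho (the "spike") and Gaussian otherwise (the "slab"). A small sampler, for illustration only; the paper's contribution is the EP inference scheme, not sampling:

```python
import numpy as np

def sample_spike_and_slab(n, rho=0.1, sigma=1.0, rng=None):
    """Draw an n-vector from the Bernoulli-Gauss (spike-and-slab) prior:
    each entry is nonzero with probability rho and, when nonzero,
    is drawn from N(0, sigma^2)."""
    rng = np.random.default_rng(rng)
    mask = rng.random(n) < rho          # Bernoulli(rho) support indicator
    return mask * rng.normal(0.0, sigma, n)
```

EP approximates the intractable posterior under this prior by iteratively moment-matching Gaussian factors, which is what allows it to cope with correlated sensing matrices where Belief Propagation's independence assumption fails.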

    A Tight Bound of Hard Thresholding

    This paper is concerned with the hard thresholding operator which sets all but the $k$ largest absolute elements of a vector to zero. We establish a {\em tight} bound to quantitatively characterize the deviation of the thresholded solution from a given signal. Our theoretical result is universal in the sense that it holds for all choices of parameters, and the underlying analysis depends only on fundamental arguments in mathematical optimization. We discuss the implications for two domains. Compressed Sensing: on account of the crucial estimate, we bridge the connection between the restricted isometry property (RIP) and the sparsity parameter for a vast volume of hard thresholding based algorithms, which renders an improvement on the RIP condition especially when the true sparsity is unknown. This suggests that in essence, many more kinds of sensing matrices or fewer measurements are admissible for the data acquisition procedure. Machine Learning: in terms of large-scale machine learning, a significant yet challenging problem is learning accurate sparse models in an efficient manner. In stark contrast to prior work that attempted the $\ell_1$-relaxation for promoting sparsity, we present a novel stochastic algorithm which performs hard thresholding in each iteration, hence ensuring such parsimonious solutions. Equipped with the developed bound, we prove the {\em global linear convergence} for a number of prevalent statistical models under mild assumptions, even though the problem turns out to be non-convex. Comment: V1 was submitted to COLT 2016. V2 fixes minor flaws, adds extra experiments, and discusses time complexity. V3 has been accepted to JMLR.
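    The operator the bound concerns is simple to state in code: keep the k largest-magnitude entries of a vector and zero out the rest.

```python
import numpy as np

def hard_threshold(x, k):
    """Hard thresholding operator H_k: keep the k largest-magnitude
    entries of x and set all others to zero."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest magnitudes
    out[idx] = x[idx]
    return out
```

H_k(x) is the Euclidean projection of x onto the set of k-sparse vectors, which is why it appears as the projection step inside IHT-style algorithms.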

    Lorentzian Iterative Hard Thresholding: Robust Compressed Sensing with Prior Information

    Commonly employed reconstruction algorithms in compressed sensing (CS) use the $L_2$ norm as the metric for the residual error. However, it is well known that least squares (LS) based estimators are highly sensitive to outliers present in the measurement vector, leading to poor performance when the noise no longer follows the Gaussian assumption but, instead, is better characterized by heavier-than-Gaussian tailed distributions. In this paper, we propose a robust iterative hard thresholding (IHT) algorithm for reconstructing sparse signals in the presence of impulsive noise. To address this problem, we use a Lorentzian cost function instead of the $L_2$ cost function employed by the traditional IHT algorithm. We also modify the algorithm to incorporate prior signal information in the recovery process. Specifically, we study the case of CS with partially known support. The proposed algorithm is a fast method with computational load comparable to the LS based IHT, whilst having the advantage of robustness against heavy-tailed impulsive noise. Sufficient conditions for stability are studied and a reconstruction error bound is derived. We also derive sufficient conditions for stable sparse signal recovery with partially known support. Theoretical analysis shows that including prior support information relaxes the conditions for successful reconstruction. Simulation results demonstrate that the Lorentzian-based IHT algorithm significantly outperforms commonly employed sparse reconstruction techniques in impulsive environments, while providing comparable performance in less demanding, light-tailed environments. Numerical results also demonstrate that the partially known support inclusion improves the performance of the proposed algorithm, thereby requiring fewer samples to yield an approximate reconstruction. Comment: 28 pages, 9 figures, accepted in IEEE Transactions on Signal Processing.
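    A minimal sketch of the idea (our simplification with assumed step size and scale parameters, not the authors' exact algorithm): replace the least-squares residual in the IHT gradient step with the gradient of the Lorentzian cost sum(log(1 + r_i^2/gamma^2)), which automatically down-weights large, impulsive residuals, then hard-threshold to enforce k-sparsity.

```python
import numpy as np

def lorentzian_iht(A, y, k, gamma=1.0, mu=1.0, iters=200):
    """Projected gradient descent on the Lorentzian residual cost
    sum(log(1 + r_i^2 / gamma^2)) with a k-sparsity constraint
    enforced by hard thresholding (constant factor of the gradient
    absorbed into the step size mu)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = y - A @ x
        w = r / (gamma**2 + r**2)          # Lorentzian-weighted residual:
        x = x + mu * (A.T @ w)             # large outliers get small weight
        idx = np.argsort(np.abs(x))[:-k]   # zero all but the k largest entries
        x[idx] = 0.0
    return x
```

For small residuals the weight w reduces to r/gamma^2, so on light-tailed noise the iteration behaves like standard LS-based IHT; the robustness shows up only when outliers make some r_i large.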