    Augmented ℓ1 and nuclear-norm models with a globally linearly convergent algorithm

    Abstract. This paper studies the long-standing idea of adding a nice smooth function to “smooth” a nondifferentiable objective function in the context of sparse optimization, in particular the minimization of ‖x‖₁ + (1/2α)‖x‖₂², where x is a vector, as well as the minimization of ‖X‖∗ + (1/2α)‖X‖F², where X is a matrix and ‖X‖∗ and ‖X‖F are the nuclear and Frobenius norms of X, respectively. We show that these augmented models let sparse vectors and low-rank matrices be recovered efficiently. In particular, they enjoy exact and stable recovery guarantees similar to those known for the minimization of ‖x‖₁ and ‖X‖∗ under conditions on the sensing operator such as its null-space property, restricted isometry property (RIP), spherical section property, or “RIPless” property. To recover a (nearly) sparse vector x₀, minimizing ‖x‖₁ + (1/2α)‖x‖₂² returns (nearly) the same solution as minimizing ‖x‖₁ whenever α ≥ 10‖x₀‖∞. The same relation holds between minimizing ‖X‖∗ + (1/2α)‖X‖F² and minimizing ‖X‖∗ for recovering a (nearly) low-rank matrix X₀ if α ≥ 10‖X₀‖₂. Furthermore, we show that the linearized Bregman algorithm, as well as its two fast variants, for minimizing ‖x‖₁ + (1/2α)‖x‖₂² subject to Ax = b enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property requires neither a sparse solution nor any properties of A. To the best of our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
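
    A minimal Python sketch of the plain linearized Bregman iteration for the augmented model min ‖x‖₁ + (1/2α)‖x‖₂² s.t. Ax = b (the paper's two fast variants are not reproduced here, and the step size rule, iteration count, and function names below are illustrative assumptions, not values from the paper):

        import numpy as np

        def shrink(v, t):
            # Soft-thresholding: sign(v) * max(|v| - t, 0), componentwise.
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def linearized_bregman(A, b, alpha, n_iter=5000):
            # Minimize ||x||_1 + (1/(2*alpha)) * ||x||_2^2  subject to  A x = b
            # by gradient ascent on the dual (the linearized Bregman iteration).
            m, n = A.shape
            # Assumed safe step size: 1 / (alpha * ||A||_2^2), the reciprocal of
            # the Lipschitz constant of the dual gradient.
            tau = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)
            y = np.zeros(m)
            x = np.zeros(n)
            for _ in range(n_iter):
                y = y + tau * (b - A @ x)          # dual gradient step
                x = alpha * shrink(A.T @ y, 1.0)   # primal update: scaled soft-threshold
            return x

    Per the abstract, choosing α ≥ 10‖x₀‖∞ makes this augmented model return (nearly) the same solution as plain ℓ₁-minimization.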

    Nonuniform Sparse Recovery with Subgaussian Matrices

    Compressive sensing predicts that sufficiently sparse vectors can be recovered from highly incomplete information. Efficient recovery methods such as ℓ₁-minimization find the sparsest solution to certain systems of equations. Random matrices have become a popular choice for the measurement matrix, and indeed near-optimal uniform recovery results have been shown for such matrices. In this note we focus on nonuniform recovery using Gaussian random matrices and ℓ₁-minimization. We provide a condition on the number of samples, in terms of the sparsity and the signal length, which guarantees that a fixed sparse signal can be recovered with a random draw of the matrix using ℓ₁-minimization. The constant 2 in the condition is optimal, and the proof is rather short compared to that of a similar result due to Donoho and Tanner.
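
    The abstract does not state the sampling condition explicitly; the sketch below assumes a count of roughly m ≈ 2s·ln(n/s) Gaussian measurements in the spirit of known nonuniform recovery thresholds, and casts ℓ₁-minimization (basis pursuit) as a linear program with SciPy. The helper name l1_minimize and the test sizes are made up for illustration:

        import numpy as np
        from scipy.optimize import linprog

        def l1_minimize(A, b):
            # Basis pursuit, min ||x||_1 s.t. A x = b, as a linear program in
            # the stacked variables [x; t] with |x_i| <= t_i and objective sum(t).
            m, n = A.shape
            c = np.concatenate([np.zeros(n), np.ones(n)])
            I = np.eye(n)
            A_ub = np.block([[I, -I], [-I, -I]])   # x - t <= 0 and -x - t <= 0
            b_ub = np.zeros(2 * n)
            A_eq = np.hstack([A, np.zeros((m, n))])
            bounds = [(None, None)] * n + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
            return res.x[:n]

        # Nonuniform setting: one fixed s-sparse signal, one random Gaussian draw.
        rng = np.random.default_rng(0)
        n, s = 200, 8
        m = int(2 * s * np.log(n / s)) + s        # assumed count, ~ 2 s ln(n/s)
        x0 = np.zeros(n)
        x0[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        print(np.max(np.abs(l1_minimize(A, A @ x0) - x0)))  # near 0 on success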

    Variable density sampling based on physically plausible gradient waveform. Application to 3D MRI angiography

    Performing k-space variable density sampling is a popular way of reducing scanning time in Magnetic Resonance Imaging (MRI). Unfortunately, given a sampling trajectory, it is not clear how to traverse it using gradient waveforms. In this paper, we show that existing methods [1, 2] can yield large traversal times if the trajectory contains high-curvature areas. We therefore propose a new method for gradient waveform design based on projecting an unrealistic initial trajectory onto the set of hardware constraints. We then show on realistic simulations that this algorithm makes it possible to implement, in a reasonable time, variable density trajectories derived from the piecewise linear solution of the Travelling Salesman Problem. Finally, we demonstrate the application of this approach to 2D MRI reconstruction and 3D angiography in the mouse brain.
    Comment: IEEE International Symposium on Biomedical Imaging (ISBI), Apr 2015, New York, United States
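
    A toy one-dimensional sketch of the projection idea, assuming CVXPY: the initial waveform, the time step, and the g_max/s_max bounds are made-up values in arbitrary units, and the paper's actual algorithm handles full multi-dimensional trajectories. The projection finds the trajectory closest to the given one whose discrete first and second derivatives respect the gradient-amplitude and slew-rate limits:

        import numpy as np
        import cvxpy as cp

        # Toy stand-in for one coordinate of a k-space trajectory, with a
        # fast oscillation playing the role of a high-curvature area.
        n = 256
        dt = 1.0 / n                            # assumed time step, arbitrary units
        t = np.linspace(0.0, 1.0, n)
        c0 = np.sin(6 * np.pi * t) + 0.3 * np.sin(40 * np.pi * t)

        g_max, s_max = 40.0, 2000.0             # made-up amplitude / slew-rate bounds

        # Project c0 onto the hardware-constraint set.
        c = cp.Variable(n)
        speed = cp.diff(c, 1) / dt              # ~ gradient amplitude
        accel = cp.diff(c, 2) / dt ** 2         # ~ slew rate
        prob = cp.Problem(cp.Minimize(cp.sum_squares(c - c0)),
                          [cp.abs(speed) <= g_max, cp.abs(accel) <= s_max])
        prob.solve()
        print(prob.status, float(cp.norm(c - c0, "inf").value))

    The solver smooths the fast oscillation (where the slew-rate bound binds) while leaving the slowly varying part of the trajectory nearly untouched.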

    A Short Tour of Compressive Sensing
