5 research outputs found

    Efficient and Robust Recovery of Signal and Image in Impulsive Noise via $\ell_1-\alpha\ell_2$ Minimization

    Full text link
    In this paper, we consider the efficient and robust reconstruction of signals and images via $\ell_1-\alpha\ell_2$ ($0<\alpha\leq 1$) minimization in the impulsive noise case. To this end, we introduce two new models: the $\ell_1-\alpha\ell_2$ minimization with an $\ell_1$ constraint, called $\ell_1-\alpha\ell_2$-LAD, and the $\ell_1-\alpha\ell_2$ minimization with a Dantzig selector constraint, called $\ell_1-\alpha\ell_2$-DS. We first show that sparse or nearly sparse signals can be exactly or stably recovered via $\ell_1-\alpha\ell_2$ minimization under conditions based on the restricted $1$-isometry property ($\ell_1$-RIP). Second, for the $\ell_1-\alpha\ell_2$-LAD model, we introduce an unconstrained $\ell_1-\alpha\ell_2$ minimization model, denoted $\ell_1-\alpha\ell_2$-PLAD, and propose the $\ell_1-\alpha\ell_2$LA algorithm to solve it. Last, numerical experiments on the success rate of sparse signal recovery demonstrate that when the sensing matrix is ill-conditioned (i.e., its coherence is larger than 0.99), the $\ell_1-\alpha\ell_2$LA method outperforms existing convex and non-convex compressed sensing solvers for the recovery of sparse signals. For magnetic resonance imaging (MRI) reconstruction with impulsive noise, numerical experiments also show that the $\ell_1-\alpha\ell_2$LA method outperforms state-of-the-art methods. Comment: arXiv admin note: text overlap with arXiv:1703.07952 by other authors
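    The abstract names the two constrained models only informally; one natural reading, writing $A$ for the sensing matrix, $b$ for the measurements, and $\eta>0$ for a noise level (notation assumed here, not fixed by the abstract), is

        \ell_1-\alpha\ell_2\text{-LAD}: \quad \min_{x}\ \|x\|_1 - \alpha\|x\|_2 \quad \text{s.t.} \quad \|Ax - b\|_1 \le \eta,
        \ell_1-\alpha\ell_2\text{-DS}: \quad\; \min_{x}\ \|x\|_1 - \alpha\|x\|_2 \quad \text{s.t.} \quad \|A^{\top}(b - Ax)\|_{\infty} \le \eta,

    with the unconstrained (penalized) variant $\ell_1-\alpha\ell_2$-PLAD then of the form $\min_x \|Ax-b\|_1 + \lambda(\|x\|_1 - \alpha\|x\|_2)$ for a penalty parameter $\lambda>0$ (again an assumed formulation).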

    The $\ell_1$-$\ell_2$ minimization with rotation for sparse approximation in uncertainty quantification

    Full text link
    This paper combines rotational compressive sensing with $\ell_1$-$\ell_2$ minimization to estimate the coefficients of generalized polynomial chaos (gPC) expansions used in uncertainty quantification. In particular, we aim to identify a rotation matrix such that the gPC of a set of random variables has a sparser representation after the rotation. However, the rotation alters the underlying linear system to be solved, which makes finding the sparse coefficients much harder than in the case without rotation. We therefore adopt $\ell_1$-$\ell_2$ minimization, which is better suited to such ill-posed compressive sensing (CS) problems than the classic $\ell_1$ approach. Extensive experiments on standard gPC problem settings show that the proposed combination of rotation and $\ell_1$-$\ell_2$ minimization outperforms both the approach without rotation and rotation combined with plain $\ell_1$ minimization.
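    As a structural sketch (the symbols below, a measurement vector $u$, gPC basis matrix $\Psi$, coefficient vector $c$, rotation $W$, and tolerance $\varepsilon$, are illustrative and not fixed by the abstract), the rotated recovery problem can be read as: choose a rotation $W$ of the random inputs $\xi$, set $\eta = W\xi$, and solve

        \min_{c}\ \|c\|_1 - \|c\|_2 \quad \text{s.t.} \quad \|\Psi(\eta)\,c - u\|_2 \le \varepsilon,

    so that the gPC coefficients $c$ in the rotated variables $\eta$ are sparser than those in the original variables $\xi$.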

    A projected gradient method for $\alpha\ell_1-\beta\ell_2$ sparsity regularization

    Full text link
    The non-convex $\alpha\|\cdot\|_{\ell_1}-\beta\|\cdot\|_{\ell_2}$ ($\alpha\ge\beta\ge 0$) regularization has attracted attention in the field of sparse recovery. One way to obtain a minimizer of this regularization is the ST-($\alpha\ell_1-\beta\ell_2$) algorithm, which is similar to the classical iterative soft thresholding algorithm (ISTA). It is known that ISTA converges quite slowly, and a faster alternative to ISTA is the projected gradient (PG) method. However, the conventional PG method is limited to the classical $\ell_1$ sparsity regularization. In this paper, we present two accelerated alternatives to the ST-($\alpha\ell_1-\beta\ell_2$) algorithm by extending the PG method to the non-convex $\alpha\ell_1-\beta\ell_2$ sparsity regularization. Moreover, we discuss a strategy for determining the radius $R$ of the $\ell_1$-ball constraint by Morozov's discrepancy principle. Numerical results are reported to illustrate the efficiency of the proposed approach. Comment: 30 pages, 8 figures
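    For reference, a minimal sketch of the conventional $\ell_1$-ball projected gradient iteration that the abstract takes as its starting point (not the paper's accelerated ST-($\alpha\ell_1-\beta\ell_2$) variants); the fixed step size and the function names are illustrative:

        import numpy as np

        def project_l1_ball(v, R):
            # Euclidean projection onto {x : ||x||_1 <= R} via the standard
            # sort-and-threshold procedure (O(n log n)).
            if np.abs(v).sum() <= R:
                return v.copy()
            u = np.sort(np.abs(v))[::-1]
            css = np.cumsum(u)
            k = np.nonzero(u * np.arange(1, v.size + 1) > css - R)[0][-1]
            tau = (css[k] - R) / (k + 1.0)
            return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

        def projected_gradient(A, b, R, step, n_iter=500):
            # Gradient step on the data-fidelity term 0.5*||Ax - b||^2,
            # followed by projection onto the l1-ball of radius R.
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)
                x = project_l1_ball(x - step * grad, R)
            return x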

    The Dantzig selector: Recovery of Signal via $\ell_1-\alpha\ell_2$ Minimization

    Full text link
    In this paper, we propose a Dantzig selector based on $\ell_1-\alpha\ell_2$ ($0<\alpha\leq 1$) minimization for signal recovery. In the Dantzig selector, the constraint $\|\mathbf{A}^{\top}(\mathbf{b}-\mathbf{A}\mathbf{x})\|_\infty \leq \eta$, for some small constant $\eta>0$, means that the columns of $\mathbf{A}$ are only weakly correlated with the error vector $\mathbf{e}=\mathbf{A}\mathbf{x}-\mathbf{b}$. First, recovery guarantees based on the restricted isometry property (RIP) are established. Next, we propose an effective algorithm to solve the proposed Dantzig selector. Last, we illustrate the proposed model and algorithm by extensive numerical experiments on the recovery of signals under Gaussian, impulsive, and uniform noise; the proposed Dantzig selector outperforms existing methods.
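    A small illustration of the constraint above (the function name and interface are illustrative, not from the paper): checking whether a candidate solution $\mathbf{x}$ is feasible for the Dantzig selector amounts to bounding the correlation between each column of $\mathbf{A}$ and the residual.

        import numpy as np

        def dantzig_feasible(A, b, x, eta):
            # The Dantzig selector constraint ||A^T (b - A x)||_inf <= eta:
            # every column of A must be nearly uncorrelated with the residual.
            residual = b - A @ x
            return np.max(np.abs(A.T @ residual)) <= eta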

    $\alpha\ell_1-\beta\ell_2$ sparsity regularization for nonlinear ill-posed problems

    Full text link
    In this paper, we consider $\alpha\|\cdot\|_{\ell_1}-\beta\|\cdot\|_{\ell_2}$ sparsity regularization with parameters $\alpha\geq\beta\geq 0$ for nonlinear ill-posed inverse problems. We investigate the well-posedness of the regularization. Compared to the case $\alpha>\beta\geq 0$, the results for the case $\alpha=\beta\geq 0$ are weaker due to the lack of coercivity and of the Radon-Riesz property of the regularization term. Under a certain condition on the nonlinearity of $F$, we prove that every minimizer of the $\alpha\|\cdot\|_{\ell_1}-\beta\|\cdot\|_{\ell_2}$ regularization is sparse. For the case $\alpha>\beta\geq 0$, if the exact solution is sparse, we derive convergence rates $O(\delta^{\frac{1}{2}})$ and $O(\delta)$ for the regularized solution under two commonly adopted conditions on the nonlinearity of $F$, respectively. In particular, it is shown that the iterative soft thresholding algorithm can be utilized to solve the $\alpha\|\cdot\|_{\ell_1}-\beta\|\cdot\|_{\ell_2}$ regularization problem for nonlinear ill-posed equations. Numerical results illustrate the efficiency of the proposed method. Comment: 33 pages, 4 figures
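    Structurally, the soft-thresholding-type iteration referred to above can be read as follows (notation assumed here: $F$ the nonlinear forward operator, $y^{\delta}$ the noisy data, $\mu>0$ a step size, and $\mathbb{S}_{\mu}$ the thresholding operator associated with $\mu(\alpha\|\cdot\|_{\ell_1}-\beta\|\cdot\|_{\ell_2})$, whose exact form is given in the paper):

        x_{k+1} = \mathbb{S}_{\mu}\big( x_k - \mu\, F'(x_k)^{*}\,(F(x_k) - y^{\delta}) \big),

    i.e., a gradient step on the data-fidelity term followed by the (generalized) thresholding induced by the non-convex penalty.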