5 research outputs found
Efficient and Robust Recovery of Signal and Image in Impulsive Noise via l1-αl2 Minimization
In this paper, we consider the efficient and robust reconstruction of signals
and images via l1-αl2 minimization in the impulsive noise case.
To achieve this goal, we introduce two new models: the l1-αl2
minimization with l1 constraint, which is called l1-αl2-LAD, and the
l1-αl2 minimization with Dantzig selector constraint, which is called
l1-αl2-DS.
We first show that sparse signals or nearly sparse signals can be exactly or
stably recovered via l1-αl2 minimization under some conditions based on the
restricted isometry property (RIP).
Second, for the l1-αl2-LAD model, we introduce the unconstrained l1-αl2
minimization model, denoted l1-αl2-PLAD, and propose the l1-αl2-LA algorithm
to solve l1-αl2-PLAD.
Last, numerical experiments on success rates of sparse signal recovery
demonstrate that when the sensing matrix is ill-conditioned (i.e., the
coherence of the matrix is larger than 0.99), the l1-αl2-LA method
is better than the existing convex and non-convex compressed sensing solvers
for the recovery of sparse signals. For magnetic resonance imaging
(MRI) reconstruction with impulsive noise, numerical experiments show that
the l1-αl2-LA method outperforms state-of-the-art methods.
Comment: arXiv admin note: text overlap with arXiv:1703.07952 by other authors
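The unconstrained model pairs an l1 data-fidelity term (robust to impulsive, heavy-tailed noise) with the non-convex l1-αl2 sparsity penalty. A minimal numpy sketch of evaluating such an objective; the function name and the penalty weight lam are illustrative, not taken from the paper:

```python
import numpy as np

def plad_objective(A, b, x, lam=0.1, alpha=1.0):
    # l1 data fit: robust to impulsive (heavy-tailed) noise,
    # unlike the usual squared-l2 fit.
    data_fit = np.linalg.norm(A @ x - b, 1)
    # Non-convex l1 - alpha*l2 sparsity penalty.
    penalty = np.linalg.norm(x, 1) - alpha * np.linalg.norm(x, 2)
    return data_fit + lam * penalty
```

For a 1-sparse x the penalty vanishes (since ||x||_1 = ||x||_2), so the objective reduces to the data-fit term alone.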
The l1-l2 minimization with rotation for sparse approximation in uncertainty quantification
This paper proposes a combination of rotational compressive sensing with the
l1-l2 minimization to estimate coefficients of generalized polynomial chaos
(gPC) used in uncertainty quantification. In particular, we aim to identify a
rotation matrix such that the gPC of a set of random variables after the
rotation has a sparser representation. However, this rotational approach alters
the underlying linear system to be solved, which makes finding the sparse
coefficients much more difficult than the case without rotation. We further
adopt the l1-l2 minimization that is more suited for such ill-posed problems in
compressive sensing (CS) than the classic l1 approach. We conduct extensive
experiments on standard gPC problem settings, showing superior performance of
the proposed combination of rotation and l1-l2 minimization over the ones
without rotation and with rotation but using the l1 minimization.
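The rotation helps because ||c||_1 - ||c||_2 acts as a sparsity surrogate: it is zero for 1-sparse coefficient vectors and grows as the energy spreads out, so a rotation that concentrates the gPC coefficients also lowers this objective. A small numpy illustration (not code from the paper):

```python
import numpy as np

def l1_minus_l2(c):
    # Zero exactly when c is 1-sparse; larger when energy is spread out.
    return np.linalg.norm(c, 1) - np.linalg.norm(c, 2)

sparse = np.array([1.0, 0.0, 0.0, 0.0])   # concentrated coefficients
dense = np.array([0.5, 0.5, 0.5, 0.5])    # same l2 energy, spread out
```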
A projected gradient method for αl1-βl2 sparsity regularization
The non-convex αl1-βl2 regularization has attracted attention in the field of
sparse recovery. One way to obtain a minimizer of this regularization is the
ST-(αl1-βl2) algorithm, which is similar to the classical
iterative soft thresholding algorithm (ISTA). It is known that ISTA converges
quite slowly, and a faster alternative to ISTA is the projected gradient (PG)
method. However, the conventional PG method is limited to the classical
l1 sparsity regularization. In this paper, we present two accelerated
alternatives to the ST-(αl1-βl2) algorithm by extending the
PG method to the non-convex αl1-βl2 sparsity regularization.
Moreover, we discuss a strategy to determine the radius of the
αl1-βl2 ball constraint by Morozov's discrepancy principle. Numerical results
are reported to illustrate the efficiency of the proposed approach.
Comment: 30 pages; 8 figures
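The conventional PG method that the abstract contrasts with projects each gradient step onto an l1 ball; the paper's contribution replaces that constraint with the αl1-βl2 ball. A sketch of the classical sort-based l1-ball projection (a standard building block, not code from the paper):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the l1 ball of the given radius."""
    if np.linalg.norm(v, 1) <= radius:
        return v.copy()          # already feasible: projection is the identity
    u = np.sort(np.abs(v))[::-1]         # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    # Largest k with u_k > (sum of top-k - radius) / k.
    rho = np.max(ks[u - (css - radius) / ks > 0])
    theta = (css[rho - 1] - radius) / rho
    # Soft-threshold by theta: shrinks magnitudes onto the ball boundary.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

Each PG iteration then alternates a gradient step on the data-fit term with this projection.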
The Dantzig selector: Recovery of Signal via l1-αl2 Minimization
In this paper, we propose a Dantzig selector based on the l1-αl2 minimization
for signal recovery. In this Dantzig selector, the constraint
||A^T(b - Ax)||_∞ ≤ η for some small constant η means that the columns of the
sensing matrix A are only weakly correlated with the residual b - Ax. First,
recovery guarantees based on the restricted isometry property (RIP) are
established for sparse signals. Next, we propose an effective algorithm to
solve the proposed Dantzig selector. Last, we illustrate the proposed model
and algorithm by extensive numerical experiments for the recovery of signals
in the cases of Gaussian, impulsive, and uniform noise; the performance of the
proposed Dantzig selector is better than that of the existing methods.
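The Dantzig-selector constraint can be checked directly: it bounds the correlation of every column of A with the residual. A minimal numpy sketch (function name and the value of η are illustrative):

```python
import numpy as np

def dantzig_feasible(A, b, x, eta):
    # ||A^T (b - A x)||_inf <= eta: no column of A is more than
    # eta-correlated with the residual b - A x.
    return np.linalg.norm(A.T @ (b - A @ x), np.inf) <= eta
```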
αl1-βl2 sparsity regularization for nonlinear ill-posed problems
In this paper, we consider the αl1-βl2 sparsity regularization with parameters
α ≥ β ≥ 0 for nonlinear ill-posed inverse problems. We investigate the
well-posedness of the regularization. Compared to the case α > β, the results
for the case α = β are weaker due to the lack of coercivity and of the
Radon-Riesz property of the regularization term. Under a certain condition on
the nonlinearity of the forward operator, we prove that every minimizer of the
αl1-βl2 regularization is sparse. For the case α > β, if the exact solution is
sparse, we derive convergence rates of the regularized solutions under two
commonly adopted conditions on the nonlinearity of the forward operator.
In particular, it is shown that the iterative soft thresholding algorithm can
be utilized to solve the αl1-βl2 regularization problem for nonlinear
ill-posed equations. Numerical results illustrate the efficiency of the
proposed method.
Comment: 33 pages, 4 figures
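The iterative soft thresholding scheme mentioned above, written here for the linear l1-regularized baseline problem min 0.5*||Ax - b||^2 + lam*||x||_1 (a sketch, not the paper's nonlinear variant; the step-size rule and iteration count are illustrative):

```python
import numpy as np

def ista(A, b, lam, step=None, iters=200):
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - b)          # gradient step on data fit
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x
```

The soft-thresholding step is what produces exactly sparse iterates, mirroring the sparsity of minimizers proved in the abstract.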