Structured FISTA for Image Restoration
In this paper, we propose an efficient numerical scheme for solving
large-scale ill-posed linear inverse problems arising from image restoration.
In order to accelerate the computation, two different hidden structures are
exploited. First, the coefficient matrix is approximated as the sum of a small
number of Kronecker products. This procedure not only introduces one more level
of parallelism into the computation but also enables the use of
computationally intensive matrix-matrix multiplications in the subsequent
optimization procedure. We then derive the corresponding Tikhonov regularized
minimization model and extend the fast iterative shrinkage-thresholding
algorithm (FISTA) to solve the resulting optimization problem. Since the
matrices appearing in the Kronecker product approximation are all structured
matrices (Toeplitz, Hankel, etc.), we can further exploit their fast
matrix-vector multiplication algorithms at each iteration. The proposed
algorithm is thus called structured fast iterative shrinkage-thresholding
algorithm (sFISTA). In particular, we show that the approximation error
introduced by sFISTA is well under control and sFISTA can reach the same image
restoration accuracy level as FISTA. Finally, both the theoretical complexity
analysis and some numerical results are provided to demonstrate the efficiency
of sFISTA.
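The computational payoff of the Kronecker-product approximation comes from the identity (A ⊗ B) vec(X) = vec(B X A^T), which replaces one huge matrix-vector product by two small matrix-matrix products per term. A minimal NumPy sketch of this identity for a hypothetical two-term approximation (all sizes and matrices are made up):

```python
import numpy as np

# Identity behind the sFISTA speedup: if the coefficient matrix is
# approximated as K ~ kron(A1, B1) + kron(A2, B2), then K @ vec(X) can be
# evaluated with small matrix-matrix products via the identity
# (A kron B) vec(X) = vec(B @ X @ A.T), where vec stacks columns.

rng = np.random.default_rng(0)
n = 32
A1, B1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A2, B2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X = rng.standard_normal((n, n))

# Direct evaluation with the explicit (n^2 x n^2) matrix.
K = np.kron(A1, B1) + np.kron(A2, B2)
y_direct = K @ X.reshape(-1, order="F")        # column-major vec(X)

# Structured evaluation: two n x n matrix products per Kronecker term.
Y = B1 @ X @ A1.T + B2 @ X @ A2.T
y_fast = Y.reshape(-1, order="F")

print(np.allclose(y_direct, y_fast))           # True
```

On top of this, sFISTA exploits the Toeplitz/Hankel structure of the small factors for fast matrix-vector products within each iteration.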
Subspace correction methods for total variation and L1-minimization
This paper is concerned with the numerical minimization of energy functionals
in Hilbert spaces involving convex constraints coinciding with a semi-norm for
a subspace. The optimization is realized by alternating minimizations of the
functional on a sequence of orthogonal subspaces. On each subspace an iterative
proximity-map algorithm is implemented via oblique thresholding, which
is the main new tool introduced in this work. We provide convergence conditions
for the algorithm in order to compute minimizers of the target energy.
Analogous results are derived for a parallel variant of the algorithm.
Applications are presented in domain decomposition methods for singular
elliptic PDEs arising in total variation minimization and in accelerated
sparse recovery algorithms based on L1-minimization. We include numerical
examples which show efficient solutions to classical problems in signal and
image processing. Comment: 33 pages.
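As a loose illustration of the alternating-subspace idea in its simplest separable setting (this is not the paper's oblique-thresholding algorithm for total variation), the sketch below minimizes a quadratic-plus-L1 energy by exact minimization over two complementary coordinate subspaces; all sizes and parameters are made up:

```python
import numpy as np

# Alternating minimization of 0.5*||x - b||^2 + lam*||x||_1 over two
# orthogonal coordinate subspaces. Because this energy separates across
# coordinates, each subspace minimization is an exact soft thresholding
# on that block, and the alternation reaches the global minimizer.

def soft(v, t):
    # proximity map of t*||.||_1 (elementwise shrinkage)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
b = rng.standard_normal(10)
lam = 0.3
x = np.zeros_like(b)
idx1, idx2 = np.arange(0, 10, 2), np.arange(1, 10, 2)  # complementary index sets

for _ in range(5):
    x[idx1] = soft(b[idx1], lam)   # exact minimization on subspace 1
    x[idx2] = soft(b[idx2], lam)   # exact minimization on subspace 2

print(np.allclose(x, soft(b, lam)))  # True: matches the global minimizer
```

The cases treated in the paper are precisely the non-separable ones (total variation), where the energy does not decouple across subspaces and the subspace step requires the oblique-thresholding inner iteration instead of a closed-form shrinkage.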
Learning optimal nonlinearities for iterative thresholding algorithms
The iterative shrinkage/thresholding algorithm (ISTA) is a well-studied method
for finding sparse solutions to ill-posed inverse problems. In this letter, we
present a data-driven scheme for learning optimal thresholding functions for
ISTA. The proposed scheme is obtained by relating iterations of ISTA to layers
of a simple deep neural network (DNN) and developing a corresponding error
backpropagation algorithm that allows the thresholding functions to be fine-tuned.
Simulations on sparse statistical signals illustrate potential gains in
estimation quality due to the proposed data-adaptive ISTA.
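A minimal PyTorch sketch of the unrolling idea, assuming (as a stand-in for the letter's learned nonlinearities) a soft threshold with one learnable level per layer; the matrix, sizes, and objective are illustrative:

```python
import torch

# Unrolled ISTA: each iteration becomes a network layer whose threshold is
# a trainable parameter, so error backpropagation can fine-tune it.

class UnrolledISTA(torch.nn.Module):
    def __init__(self, A, n_layers=10):
        super().__init__()
        self.A = A
        L = torch.linalg.matrix_norm(A, ord=2) ** 2      # Lipschitz const. of the gradient
        self.step = 1.0 / L
        self.thresholds = torch.nn.Parameter(0.1 * torch.ones(n_layers))

    def forward(self, y):
        x = torch.zeros(self.A.shape[1])
        for t in self.thresholds:
            r = x - self.step * self.A.T @ (self.A @ x - y)  # gradient step
            x = torch.sign(r) * torch.relu(r.abs() - t)      # learnable shrinkage
        return x

A = torch.randn(20, 50)
x_true = torch.zeros(50)
x_true[[3, 17, 41]] = torch.tensor([1.0, -2.0, 1.5])
y = A @ x_true

model = UnrolledISTA(A)
loss = torch.mean((model(y) - x_true) ** 2)   # supervised training signal
loss.backward()                               # gradients reach the thresholds
print(model.thresholds.grad.shape)            # torch.Size([10])
```

The letter's scheme learns richer thresholding functions than a single scalar level per layer, but the training mechanics, relating layers to iterations and backpropagating through them, are the same.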
Transformed Schatten-1 Iterative Thresholding Algorithms for Low Rank Matrix Completion
We study a non-convex low-rank promoting penalty function, the transformed
Schatten-1 (TS1), and its applications in matrix completion. The TS1 penalty,
as a matrix quasi-norm defined on its singular values, interpolates the rank
and the nuclear norm through a nonnegative parameter a. We consider the
unconstrained TS1 regularized low-rank matrix recovery problem and develop a
fixed point representation for its global minimizer. The TS1 thresholding
functions are in closed analytical form for all parameter values. The TS1
threshold values differ between the subcritical and supercritical parameter
regimes, where the TS1 threshold functions are continuous and discontinuous,
respectively. We propose TS1
iterative thresholding algorithms and compare them with some state-of-the-art
algorithms on matrix completion test problems. For problems with known rank, a
fully adaptive TS1 iterative thresholding algorithm consistently performs best
under different conditions, with the ground truth matrix being multivariate
Gaussian with varying covariance. For problems with unknown rank, TS1
algorithms with an additional rank estimation procedure approach the level of
IRucL-q, an iterative reweighted algorithm that is non-convex in nature and
best in performance.
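The skeleton such algorithms share is an alternation between a data-consistency step and a shrinkage of singular values. The sketch below uses the ordinary soft threshold (the nuclear-norm proximity operator) where the TS1 algorithms would apply their closed-form TS1 threshold functions; sizes and parameters are illustrative:

```python
import numpy as np

# Generic singular-value thresholding loop for matrix completion. The TS1
# algorithms replace the soft threshold below with the TS1 threshold
# functions, whose form depends on the parameter a and on whether it lies
# in the subcritical or supercritical regime.

def threshold_step(X, Y, mask, tau):
    Z = np.where(mask, Y, X)                 # keep observed entries
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s = np.maximum(s - tau, 0.0)             # shrink singular values
    return (U * s) @ Vt

rng = np.random.default_rng(0)
m, n, r = 40, 30, 2
Y = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.5              # observed-entry pattern

X = np.zeros((m, n))
for _ in range(200):
    X = threshold_step(X, Y, mask, tau=0.1)
print(np.linalg.norm(X - Y) / np.linalg.norm(Y))   # relative recovery error
```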
Group-based Sparse Representation for Image Compressive Sensing Reconstruction with Non-Convex Regularization
Patch-based sparse representation modeling has shown great potential in image
compressive sensing (CS) reconstruction. However, this model usually suffers
from some limitations, such as the high computational complexity of dictionary
learning and the neglect of relationships among similar patches. In this paper, a
group-based sparse representation method with non-convex regularization
(GSR-NCR) for image CS reconstruction is proposed. In GSR-NCR, the local
sparsity and nonlocal self-similarity of images are simultaneously considered in
a unified framework. Different from the previous methods based on
sparsity-promoting convex regularization, we apply the non-convex weighted Lp
(0 < p < 1) penalty function to the group sparse coefficients of the data
matrix, rather than conventional L1-based regularization. To reduce the
computational complexity, instead of learning a dictionary from natural images
at high computational cost, we learn a principal component analysis (PCA)
based dictionary for each group. Moreover, to make the proposed scheme
tractable and robust, we have developed an efficient iterative
shrinkage/thresholding algorithm to solve the non-convex optimization problem.
Experimental results demonstrate that the proposed method outperforms many
state-of-the-art techniques for image CS reconstruction.
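A minimal sketch of the per-group PCA dictionary step, assuming a naive nearest-patch grouping; the image, patch size, group size, and basis dimension are all hypothetical:

```python
import numpy as np

# For one reference patch: gather its most similar patches into a group
# (a data matrix), take the PCA basis of the group, and code the group in
# that basis. This avoids learning one expensive global dictionary.

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
ps = 6                                          # patch size
patches = np.stack([img[i:i + ps, j:j + ps].ravel()
                    for i in range(32 - ps) for j in range(32 - ps)])

ref = patches[0]                                # reference patch
d2 = np.sum((patches - ref) ** 2, axis=1)       # squared distances
group = patches[np.argsort(d2)[:20]].T          # 20 most similar, as columns

mean = group.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(group - mean, full_matrices=False)
D = U[:, :8]                                    # PCA dictionary for this group
coeffs = D.T @ (group - mean)                   # group codes in the PCA basis
print(D.shape, coeffs.shape)                    # (36, 8) (8, 20)
```

In GSR-NCR the group coefficients would then be penalized with the non-convex weighted Lp term and updated by the iterative shrinkage/thresholding scheme.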
A survey of sparse representation: algorithms and applications
Sparse representation has attracted much attention from researchers in the
fields of signal processing, image processing, computer vision, and pattern
recognition, and has proven valuable in both theoretical research and
practical applications. Many different algorithms have
been proposed for sparse representation. The main purpose of this article is to
provide a comprehensive study and an updated review on sparse representation
and to supply guidance for researchers. The taxonomy of sparse representation
methods can be studied from various viewpoints. For example, in terms of
different norm minimizations used in sparsity constraints, the methods can be
roughly categorized into five groups: sparse representation with L0-norm
minimization, sparse representation with Lp-norm (0 < p < 1) minimization,
sparse representation with L1-norm minimization, sparse representation with
L2,1-norm minimization, and sparse representation with L2-norm minimization.
In this paper, a comprehensive overview of
sparse representation is provided. The available sparse representation
algorithms can also be empirically categorized into four groups: greedy
strategy approximation, constrained optimization, proximity algorithm-based
optimization, and homotopy algorithm-based sparse representation. The
rationales of different algorithms in each category are analyzed and a wide
range of sparse representation applications are summarized, which could
sufficiently reveal the potential nature of the sparse representation theory.
Specifically, an experimentally comparative study of these sparse
representation algorithms is presented. The Matlab code used in this paper is
available at: http://www.yongxu.org/lunwen.html. Comment: Published in IEEE Access, Vol. 3, pp. 490-530, 2015.
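As a concrete instance of the "greedy strategy approximation" category in this taxonomy, here is a compact orthogonal matching pursuit (OMP) in NumPy; the dictionary and sparsity level are illustrative:

```python
import numpy as np

# OMP: repeatedly pick the atom most correlated with the residual, then
# re-fit the coefficients on the selected atoms by least squares.

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef         # orthogonalized residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
A /= np.linalg.norm(A, axis=0)                      # unit-norm atoms
x_true = np.zeros(100)
x_true[[5, 30, 77]] = [1.0, -0.5, 2.0]
y = A @ x_true
print(np.flatnonzero(omp(A, y, 3)))                 # typically [5 30 77]
```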
Sparse Signal Estimation by Maximally Sparse Convex Optimization
This paper addresses the problem of sparsity-penalized least squares for
applications in sparse signal processing, e.g., sparse deconvolution. The aim
is to induce sparsity more strongly than L1-norm regularization, while
avoiding non-convex optimization. For this purpose, the paper describes the
design and use of non-convex penalty functions (regularizers) constrained so
as to ensure the convexity of the total cost function, F, to be minimized. The
method is based on parametric penalty functions, the parameters of which are
constrained to ensure convexity of F. It is shown that optimal parameters can
be obtained by semidefinite programming (SDP). This maximally sparse convex
(MSC) approach yields maximally non-convex sparsity-inducing penalty functions
constrained such that the total cost function, F, is convex. It is demonstrated
that iterative MSC (IMSC) can yield solutions substantially more sparse than
the standard convex sparsity-inducing approach, i.e., L1-norm minimization. Comment: 13 pages, 9 figures.
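The design principle can be checked numerically in the scalar case. Taking the logarithmic penalty phi(x; a) = log(1 + a|x|)/a as one standard parametric family (the paper obtains optimal parameters by SDP rather than fixing a family this way), the total cost F(x) = 0.5(y - x)^2 + lam*phi(x; a) stays convex exactly when a <= 1/lam; beyond that, sparsity is induced more strongly but convexity is lost. A quick second-difference check with made-up values:

```python
import numpy as np

# Scalar sanity check: F is convex iff its second derivative is nonnegative.
# For the log penalty, F''(x) = 1 - lam*a/(1 + a|x|)^2, whose infimum is
# 1 - lam*a near x = 0, so convexity holds exactly when a <= 1/lam.

def F(x, y=1.0, lam=2.0, a=0.5):
    return 0.5 * (y - x) ** 2 + lam * np.log(1.0 + a * np.abs(x)) / a

x = np.linspace(-5, 5, 20001)
for a in (0.5, 2.0):                        # critical value: 1/lam = 0.5
    second_diff = np.diff(F(x, a=a), 2)     # discrete curvature of F
    print(a, bool((second_diff >= -1e-9).all()))
# prints: 0.5 True (F convex), 2.0 False (convexity lost)
```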
Robust Matrix Completion via Maximum Correntropy Criterion and Half Quadratic Optimization
Robust matrix completion aims to recover a low-rank matrix from a subset of
noisy entries perturbed by complex noises, where traditional methods for matrix
completion may perform poorly due to utilizing the L2 error norm in
optimization. In this paper, we propose a novel and fast robust matrix
completion method based on maximum correntropy criterion (MCC). The correntropy
based error measure is utilized instead of the L2-based error norm to
improve the robustness to noises. Using the half-quadratic optimization
technique, the correntropy based optimization can be transformed to a weighted
matrix factorization problem. Then, two efficient algorithms are derived,
including an alternating minimization based algorithm and an alternating
gradient descent based algorithm. The proposed algorithms do not need to calculate
singular value decomposition (SVD) at each iteration. Further, an adaptive
kernel selection strategy is proposed to accelerate convergence as well as
improve performance. Comparison with existing robust matrix
completion algorithms is provided by simulations, showing that the new methods
can achieve better performance than existing state-of-the-art algorithms.
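A rough sketch of the half-quadratic mechanics: the correntropy loss yields closed-form per-entry weights w_ij = exp(-e_ij^2 / (2 sigma^2)) that suppress outliers, and with the weights fixed each factor update reduces to weighted least squares, with no SVD anywhere. This is an illustrative reading of the alternating-minimization variant; all names, sizes, and constants are made up:

```python
import numpy as np

# Robust matrix completion sketch: alternate between half-quadratic weights
# and row-wise weighted least-squares updates of the factors U, V.

rng = np.random.default_rng(0)
m, n, r, sigma = 50, 40, 3, 2.0
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.6
outliers = np.where(rng.random((m, n)) < 0.05, 10.0, 0.0)  # sparse gross noise
Y = mask * (M + outliers)

U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
for _ in range(20):
    W = mask * np.exp(-(U @ V.T - Y) ** 2 / (2 * sigma ** 2))  # HQ weights
    for i in range(m):      # weighted least squares for each row of U
        G = (V * W[i][:, None]).T @ V + 1e-8 * np.eye(r)
        U[i] = np.linalg.solve(G, V.T @ (W[i] * Y[i]))
    for j in range(n):      # weighted least squares for each row of V
        G = (U * W[:, j][:, None]).T @ U + 1e-8 * np.eye(r)
        V[j] = np.linalg.solve(G, U.T @ (W[:, j] * Y[:, j]))

err = np.linalg.norm(mask * (U @ V.T - M)) / np.linalg.norm(mask * M)
print(err)                  # relative error on the observed clean entries
```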
A general framework for solving convex optimization problems involving the sum of three convex functions
In this paper, we consider solving a class of convex optimization problems
which minimize the sum of three convex functions f(x) + g(x) + h(Bx), where
f is differentiable with a Lipschitz continuous gradient, g and h have
closed-form expressions for their proximity operators, and B is a bounded
linear operator. This type of optimization problem has wide application in
signal recovery and image processing. To make full use of the
differentiability of f in the optimization problem, we take advantage of two
operator splitting methods: the forward-backward splitting method and the
three-operator splitting method. In the iteration schemes derived from these
two operator splitting methods, we need to compute the proximity operators of
g + h(B·) and of h(B·), respectively. Although these proximity operators do
not have a closed-form solution in general, they can be solved very
efficiently. We mainly employ two different approaches to solve these proximity
operators: one is dual and the other is primal-dual. Following this approach,
we find that three existing iterative algorithms, namely the Condat-Vu
algorithm, the primal-dual fixed point (PDFP) algorithm, and the primal-dual
three-operator (PD3O) algorithm, are special cases of our proposed iterative
algorithms. Moreover, we discover a new kind of iterative algorithm to solve
the considered optimization problem, which is not covered by the existing ones.
Under mild conditions, we prove the convergence of the proposed iterative
algorithms. Numerical experiments applied on fused Lasso problem, constrained
total variation regularization in computed tomography (CT) image reconstruction
and low-rank total variation image super-resolution problem demonstrate the
effectiveness and efficiency of the proposed iterative algorithms. Comment: 37 pages, 10 figures.
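For a feel of the machinery, here is a minimal three-operator splitting iteration of the Davis-Yin type, the kind of scheme the framework generalizes, on a toy problem where both proximity operators are closed-form; in the paper's setting the term h(Bx) makes one prox non-explicit and calls for the dual or primal-dual inner solves discussed above. All parameters are illustrative:

```python
import numpy as np

# Davis-Yin three-operator splitting for min f(x) + g(x) + h(x) with
# f = 0.5*||x - b||^2 (smooth), g = indicator of the box [-1, 1]^n,
# h = lam*||x||_1. The solution is soft(b, lam) clipped to the box.

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_box(v):
    return np.clip(v, -1.0, 1.0)

rng = np.random.default_rng(0)
b = 3.0 * rng.standard_normal(8)
lam, gamma = 0.5, 1.0                  # step size gamma < 2/L, here L = 1

z = np.zeros_like(b)
for _ in range(200):
    x_g = prox_box(z)                                         # prox of g
    grad_f = x_g - b                                          # gradient of f
    x_h = prox_l1(2 * x_g - z - gamma * grad_f, gamma * lam)  # prox of h
    z = z + x_h - x_g
x = prox_box(z)
print(np.allclose(x, np.clip(prox_l1(b, lam), -1.0, 1.0)))    # True
```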
Minimum n-Rank Approximation via Iterative Hard Thresholding
The problem of recovering a low n-rank tensor is an extension of the sparse
recovery problem from the low dimensional space (matrix space) to the high
dimensional space (tensor space) and has many applications in computer vision
and graphics, such as image inpainting and video inpainting. In this paper, we
consider a new tensor recovery model, named minimum n-rank approximation
(MnRA), and propose an appropriate iterative hard thresholding algorithm in
which an upper bound of the n-rank is given in advance. The convergence
analysis of the proposed algorithm is also presented. In particular, we show
that for the noiseless case, linear convergence with rate 1/2 can be obtained
for the proposed algorithm under proper conditions. Additionally, combining an
effective heuristic for determining the n-rank, we can also apply the proposed
algorithm to solve MnRA when the n-rank is unknown in advance. Some preliminary
numerical results on randomly generated and real low-n-rank tensor completion
problems are reported, which show the efficiency of the proposed algorithms. Comment: iterative hard thresholding; low-n-rank tensor recovery; tensor completion; compressed sensing.
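The core projection such algorithms rely on can be sketched as hard thresholding the singular values of every mode unfolding and averaging the refolded results; this is a generic low-n-rank approximation heuristic in the spirit of the MnRA step, not the paper's exact operator, and all sizes are made up:

```python
import numpy as np

# Hard-thresholding primitive for low-n-rank tensors: truncate the SVD of
# each mode unfolding to the prescribed rank, refold, and average.

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def hard_threshold_nrank(T, ranks):
    out = np.zeros_like(T)
    for mode, r in enumerate(ranks):
        U, s, Vt = np.linalg.svd(unfold(T, mode), full_matrices=False)
        s[r:] = 0.0                      # keep the r leading singular values
        out += fold((U * s) @ Vt, mode, T.shape)
    return out / T.ndim

rng = np.random.default_rng(0)
core = rng.standard_normal((2, 2, 2))
factors = [rng.standard_normal((d, 2)) for d in (8, 9, 10)]
T = np.einsum('abc,ia,jb,kc->ijk', core, *factors)     # n-rank (2, 2, 2)

approx = hard_threshold_nrank(T, (2, 2, 2))
print(np.linalg.norm(approx - T) / np.linalg.norm(T))  # ~1e-15: exact here
```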