525 research outputs found
Image Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization
As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes much to the development of l1-norm optimization techniques and to the fact that natural images are intrinsically sparse in some domain. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that the contents can vary significantly across different images, or across different patches within a single image, we propose to learn various sets of bases from a pre-collected dataset of example image patches; for a given patch to be processed, one set of bases is then adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models is learned from the dataset of example image patches, and the AR models that best fit a given patch are adaptively selected to regularize the local image structures. Second, image non-local self-similarity is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that, by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception.
Comment: 35 pages. This paper is under review in IEEE TI
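To make the adaptive selection step concrete, here is a minimal sketch, assuming the sub-dictionaries are orthonormal bases learned offline from clustered example patches: each patch is assigned the basis of its nearest cluster centroid and then soft-thresholded in that basis. All names (select_basis, adaptive_sparse_code, lam) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def select_basis(patch, centroids):
    """Pick the cluster whose centroid is closest to the given patch."""
    return int(np.argmin(np.linalg.norm(centroids - patch, axis=1)))

def adaptive_sparse_code(patch, bases, centroids, lam=0.1):
    """Sparse-code a flattened patch over the adaptively selected basis."""
    k = select_basis(patch, centroids)
    B = bases[k]                       # orthonormal basis for cluster k
    c = B.T @ patch                    # analysis coefficients
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)   # soft threshold
    return B @ c, k                    # sparse approximation, chosen domain

# Toy usage: three random orthonormal bases for 16-dimensional (4x4) patches.
rng = np.random.default_rng(0)
bases = [np.linalg.qr(rng.standard_normal((16, 16)))[0] for _ in range(3)]
centroids = rng.standard_normal((3, 16))
approx, k = adaptive_sparse_code(rng.standard_normal(16), bases, centroids)
```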
How Does the Low-Rank Matrix Decomposition Help Internal and External Learnings for Super-Resolution
Wisely utilizing internal and external learning methods is a new challenge in the super-resolution problem. To address this issue, we analyze the attributes of the two methodologies and make two observations about their recovered details: 1) they are complementary in both the feature space and the image plane; 2) they are sparsely distributed in the spatial domain. These observations inspire us to propose a low-rank solution that effectively integrates the two learning methods and thereby achieves a superior result. To fit this solution, the internal and external learning methods are tailored to produce multiple preliminary results. Our theoretical analysis and experiments prove that the proposed low-rank solution does not require massive inputs to guarantee its performance, which simplifies the design of the two learning methods for the solution. Intensive experiments show that the proposed solution improves upon either single learning method in both qualitative and quantitative assessments. Surprisingly, it shows even stronger capability on noisy images and outperforms state-of-the-art methods.
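As a rough illustration of how a low-rank decomposition can fuse complementary preliminary results, the sketch below stacks the vectorized estimates as columns and extracts their shared low-rank component by singular-value thresholding. The one-shot SVT step and all names here are illustrative assumptions, not the paper's actual solver.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the prox operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def fuse_estimates(estimates, tau=1.0):
    """Stack vectorized preliminary results as columns and keep the low-rank
    consensus; the residual absorbs sparse, method-specific errors."""
    D = np.stack([e.ravel() for e in estimates], axis=1)   # (pixels, methods)
    L = svt(D, tau)                                        # shared low-rank part
    return L.mean(axis=1).reshape(estimates[0].shape)      # fused image

# Toy usage: fuse three noisy copies of one 32x32 "image".
rng = np.random.default_rng(0)
truth = rng.random((32, 32))
fused = fuse_estimates([truth + 0.05 * rng.standard_normal(truth.shape)
                        for _ in range(3)])
```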
Convolutional Dictionary Learning: Acceleration and Convergence
Convolutional dictionary learning (CDL or sparsifying CDL) has many
applications in image processing and computer vision. There has been growing
interest in developing efficient algorithms for CDL, mostly relying on the
augmented Lagrangian (AL) method or its variant, the alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods have shown fast convergence in CDL. However, the parameter tuning process is nontrivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To mitigate these problems, this paper proposes a new practically feasible and convergent
Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The
BPG-M-based CDL is investigated with different block updating schemes and
majorization matrix designs, and further accelerated by incorporating some
momentum coefficient formulas and restarting techniques. All of the methods
investigated incorporate a boundary artifacts removal (or, more generally,
sampling) operator in the learning model. Numerical experiments show that,
without needing any parameter tuning process, the proposed BPG-M approach
converges more stably to desirable solutions of lower objective values than the
existing state-of-the-art ADMM algorithm and its memory-efficient variant do.
Compared to the ADMM approaches, the BPG-M method using a multi-block updating scheme is particularly useful for single-threaded CDL algorithms handling large datasets, owing to its lower memory requirement and the absence of polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.
Comment: 21 pages, 7 figures, submitted to IEEE Transactions on Image Processing
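A minimal sketch of the majorization idea behind a BPG-M-style update, applied to a generic sparse-coding block with a dense operator A standing in for the convolutional one: a diagonal majorizer of A^T A replaces a hand-tuned step size. The diagonal choice below (row sums of |A^T A|) is one standard option, and everything here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def bpgm_sparse_code(A, y, lam=0.1, iters=50):
    """Majorized proximal-gradient iterations for min_x 0.5||Ax-y||^2 + lam||x||_1."""
    H = A.T @ A
    # Diagonal majorizer M = diag(m) with M >= A^T A, so no step-size tuning.
    m = np.abs(H).sum(axis=1)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = H @ x - A.T @ y
        z = x - grad / m                                       # majorized step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / m, 0.0)  # weighted soft threshold
    return x

# Toy usage: recover a 5-sparse code from a random dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = 1.0
x_hat = bpgm_sparse_code(A, A @ x_true, lam=0.05)
```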
Fast Separable Non-Local Means
We propose a simple and fast algorithm called PatchLift for computing
distances between patches (contiguous blocks of samples) extracted from a given
one-dimensional signal. PatchLift is based on the observation that the patch
distances can be efficiently computed from a matrix that is derived from the
one-dimensional signal using lifting; importantly, the number of operations
required to compute the patch distances using this approach does not scale with
the patch length. We next demonstrate how PatchLift can be used for patch-based
denoising of images corrupted with Gaussian noise. In particular, we propose a
separable formulation of the classical Non-Local Means (NLM) algorithm that can
be implemented using PatchLift. We demonstrate that the PatchLift-based
implementation of separable NLM is a few orders of magnitude faster than standard NLM, and is competitive with existing fast implementations of NLM. Moreover, its denoising performance is shown to be consistently superior to that of NLM and some of its variants, in terms of both PSNR/SSIM and visual quality.
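The core observation is easy to reproduce in a few lines: lift the signal to the matrix G[i, j] = (f_i - f_j)^2, take cumulative sums along its diagonals, and every squared patch distance becomes a single subtraction, independent of the patch length. The sketch below is a simplified illustration of this idea, not the paper's optimized implementation.

```python
import numpy as np

def patch_distances(f, K):
    """All pairwise squared distances between length-K patches of a 1D signal.
    After the O(n^2) lifting and cumulative sums, each distance costs O(1),
    independent of the patch length K."""
    n = len(f)
    G = (f[:, None] - f[None, :]) ** 2       # lifted matrix G[i, j] = (f_i - f_j)^2
    C = np.zeros((n + 1, n + 1))             # cumulative sums along diagonals of G
    for i in range(n):
        C[i + 1, 1:] = C[i, :-1] + G[i]
    m = n - K + 1                            # number of patches
    return C[K:K + m, K:K + m] - C[:m, :m]   # D[i, j] = ||patch_i - patch_j||^2

# Toy usage: distance between patches [0,1,2] and [3,2,1] is 9 + 1 + 1 = 11.
f = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0])
D = patch_distances(f, K=3)
assert D[0, 3] == 11.0
```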
Flexible Multi-layer Sparse Approximations of Matrices and Applications
The computational cost of many signal processing and machine learning
techniques is often dominated by the cost of applying certain linear operators
to high-dimensional vectors. This paper introduces an algorithm aimed at
reducing the complexity of applying linear operators in high dimension by
approximately factorizing the corresponding matrix into a few sparse factors. The approach relies on recent advances in non-convex optimization. It is first explained and analyzed in detail, and then demonstrated experimentally on various problems, including dictionary learning for image denoising and the approximation of large matrices arising in inverse problems.
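To illustrate the flavor of such a factorization, here is a minimal sketch that approximates a matrix by a product of two sparse factors using alternating projected gradient steps (a PALM-style scheme). The fixed per-row sparsity, step sizes, and all names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def sparsify(X, k):
    """Project each row of X onto its k largest-magnitude entries."""
    Y = np.zeros_like(X)
    idx = np.argsort(-np.abs(X), axis=1)[:, :k]
    rows = np.arange(X.shape[0])[:, None]
    Y[rows, idx] = X[rows, idx]
    return Y

def two_factor_approx(M, r, k=4, iters=200, seed=0):
    """Alternate projected gradient steps on 0.5 * ||S1 @ S2 - M||_F^2."""
    rng = np.random.default_rng(seed)
    S1 = sparsify(rng.standard_normal((M.shape[0], r)), k)
    S2 = sparsify(rng.standard_normal((r, M.shape[1])), k)
    for _ in range(iters):
        R = S1 @ S2 - M                       # gradient w.r.t. S1 is R @ S2.T
        S1 = sparsify(S1 - (R @ S2.T) / (np.linalg.norm(S2, 2) ** 2 + 1e-12), k)
        R = S1 @ S2 - M                       # gradient w.r.t. S2 is S1.T @ R
        S2 = sparsify(S2 - (S1.T @ R) / (np.linalg.norm(S1, 2) ** 2 + 1e-12), k)
    return S1, S2

# Toy usage: relative error of a sparse two-factor approximation.
M = np.random.default_rng(1).standard_normal((32, 32))
S1, S2 = two_factor_approx(M, r=32)
rel_err = np.linalg.norm(S1 @ S2 - M) / np.linalg.norm(M)
```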