Separable Cosparse Analysis Operator Learning
The ability of having a sparse representation for a certain class of signals
has many applications in data analysis, image processing, and other research
fields. Among sparse representations, the cosparse analysis model has recently
gained increasing interest. Many signals exhibit a multidimensional structure,
e.g. images or three-dimensional MRI scans. Most data analysis and learning
algorithms use vectorized signals and thereby do not account for this
underlying structure. The drawback of not taking the inherent structure into
account is a dramatic increase in computational cost. We propose an algorithm
for learning a cosparse Analysis Operator that adheres to the preexisting
structure of the data, and thus allows for a very efficient implementation.
This is achieved by enforcing a separable structure on the learned operator.
Our learning algorithm handles multidimensional data of arbitrary order. We
evaluate our method on volumetric data, using three-dimensional MRI scans as
an example.
Comment: 5 pages, 3 figures, accepted at EUSIPCO 201
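The efficiency argument can be made concrete with a small sketch. This is not the paper's implementation; the function name, operator shapes, and NumPy usage below are assumptions for illustration. A separable operator is a Kronecker product of small per-mode operators, so it can be applied mode by mode instead of as one huge matrix acting on the vectorized signal:

```python
import numpy as np

def apply_separable_operator(X, ops):
    """Apply one small analysis operator per tensor mode.

    Equivalent to multiplying vec(X) by the Kronecker product of the
    operators, but far cheaper for multidimensional data.
    """
    for mode, O in enumerate(ops):
        # Contract O's columns with the current mode, then restore axis order.
        X = np.moveaxis(np.tensordot(O, X, axes=(1, mode)), 0, mode)
    return X

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8, 8))                      # a small 3D volume
ops = [rng.standard_normal((12, 8)) for _ in range(3)]  # overcomplete per mode

Y = apply_separable_operator(X, ops)

# Sanity check against the vectorized Kronecker formulation.
K = np.kron(np.kron(ops[0], ops[1]), ops[2])
assert np.allclose(K @ X.ravel(), Y.ravel())
```

For an n×n×n volume with m×n per-mode operators, the mode-wise form costs O(m n^3) per mode rather than the O(m^3 n^3) of the explicit Kronecker matrix, which is the source of the efficiency gain claimed above.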
Decomposition Ascribed Synergistic Learning for Unified Image Restoration
Learning to restore multiple image degradations within a single model is
quite beneficial for real-world applications. Nevertheless, existing works
typically treat each degradation independently, leaving the relationships
among degradations largely unexploited for synergistic learning. To
this end, we revisit the diverse degradations through the lens of singular
value decomposition, with the observation that the decomposed singular vectors
and singular values naturally undertake the different types of degradation
information, dividing various restoration tasks into two groups, i.e., singular
vector dominated and singular value dominated. The above analysis renders a
more unified perspective to ascribe the diverse degradations, compared to
previous task-level independent learning. Optimizing the degraded singular
vectors and singular values separately inherently exploits the underlying
relationship among diverse restoration tasks, giving rise to Decomposition
Ascribed Synergistic Learning (DASL). Specifically, DASL comprises two
effective operators, namely, Singular VEctor Operator (SVEO) and Singular VAlue
Operator (SVAO), to carry out the decomposed optimization, both of which can
be integrated lightly into existing convolutional image restoration
backbones. Moreover, a congruous decomposition loss is devised as an
auxiliary objective. Extensive
experiments on five blended image restoration tasks, namely image deraining,
image dehazing, image denoising, image deblurring, and low-light image
enhancement, demonstrate the effectiveness of our method.
Comment: 13 pages
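The grouping idea can be illustrated with a toy NumPy sketch; this is not the paper's SVEO/SVAO operators, and the particular degradations below are invented stand-ins. A "singular value dominated" effect rescales the spectrum while leaving the singular vectors intact, whereas a "singular vector dominated" effect perturbs the spatial bases while keeping the spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))              # stand-in for a grayscale image

U, s, Vt = np.linalg.svd(img, full_matrices=False)

# Toy "singular value dominated" degradation: attenuate the spectrum
# (roughly how global effects such as haze or low light behave) while
# keeping the singular vectors untouched.
s_deg = s * np.linspace(1.0, 0.2, s.size)
img_val_deg = (U * s_deg) @ Vt

# Toy "singular vector dominated" degradation: perturb the spatial bases
# (loosely mimicking blur/rain-type effects) while keeping the spectrum.
Q, _ = np.linalg.qr(U + 0.05 * rng.standard_normal(U.shape))
img_vec_deg = (Q * s) @ Vt
```

Because the two effects live in complementary factors of the SVD, restoring them can be posed as two separate optimization problems over the factors, which is the unified perspective the abstract describes.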
Convolutional Dictionary Learning: Acceleration and Convergence
Convolutional dictionary learning (CDL or sparsifying CDL) has many
applications in image processing and computer vision. There has been growing
interest in developing efficient algorithms for CDL, mostly relying on the
augmented Lagrangian (AL) method or the variant alternating direction method of
multipliers (ADMM). When their parameters are properly tuned, AL methods have
shown fast convergence in CDL. However, the parameter tuning process is not
trivial due to its data dependence and, in practice, the convergence of AL
methods for nonconvex CDL problems depends on the AL parameters. To mitigate
these issues, this paper proposes a new, practically feasible, and convergent
Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The
BPG-M-based CDL is investigated with different block updating schemes and
majorization matrix designs, and further accelerated by incorporating some
momentum coefficient formulas and restarting techniques. All of the methods
investigated incorporate a boundary artifacts removal (or, more generally,
sampling) operator in the learning model. Numerical experiments show that,
without needing any parameter tuning process, the proposed BPG-M approach
converges more stably to desirable solutions of lower objective values than the
existing state-of-the-art ADMM algorithm and its memory-efficient variant do.
Compared to the ADMM approaches, the BPG-M method using a multi-block updating
scheme is particularly useful for single-threaded CDL on large datasets, owing
to its lower memory requirement and the absence of polynomial computational
complexity. Image denoising experiments show that, for relatively strong
additive white Gaussian noise, the filters learned by BPG-M-based CDL
outperform those trained by the ADMM approach.
Comment: 21 pages, 7 figures, submitted to IEEE Transactions on Image
Processing
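The core update can be sketched on a simpler stand-in problem. This is a minimal illustration of a majorized proximal-gradient step, not the paper's BPG-M algorithm: here the block being updated is a sparse code for a fixed dictionary D, and the diagonal majorizer is the classic row-sum bound diag(|DᵀD|·1) ⪰ DᵀD; all names and sizes are assumptions:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def majorized_pg_sparse_code(D, y, lam, iters=500):
    """Proximal-gradient sparse coding with a diagonal majorizer.

    Sketch of a BPG-M-style update for min_x 0.5||y - Dx||^2 + lam||x||_1:
    a majorizer M of D^T D replaces a hand-tuned step size, so no
    parameter tuning is needed.
    """
    G = D.T @ D
    M = np.sum(np.abs(G), axis=1)     # diagonal majorizer: diag(M) >= G
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ x - y)
        x = soft(x - grad / M, lam / M)
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[:4] = [3.0, -2.0, 1.5, 2.5]
y = D @ x_true

x_hat = majorized_pg_sparse_code(D, y, lam=0.05)
```

The point of the majorizer is visible in the update: each coordinate gets a valid step size 1/M_i derived from the data itself, which is why no tuning process is required, in contrast to the AL penalty parameters discussed above.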
Analysis Operator Learning and Its Application to Image Reconstruction
Exploiting a priori known structural information lies at the core of many
image reconstruction methods that can be stated as inverse problems. The
synthesis model, which assumes that images can be decomposed into a linear
combination of very few atoms of some dictionary, is now a well established
tool for the design of image reconstruction algorithms. An interesting
alternative is the analysis model, where the signal is multiplied by an
analysis operator and the outcome is assumed to be sparse. This approach has
only recently gained increasing interest. The quality of reconstruction
methods based on an analysis model depends heavily on choosing a suitable
operator.
In this work, we present an algorithm for learning an analysis operator from
training images. Our method is based on an ℓp-norm minimization on the
set of full rank matrices with normalized columns. We carefully introduce the
employed conjugate gradient method on manifolds, and explain the underlying
geometry of the constraints. Moreover, we compare our approach to
state-of-the-art methods for image denoising, inpainting, and single image
super-resolution. Our numerical results show competitive performance of our
general approach in all presented applications compared to the specialized
state-of-the-art techniques.
Comment: 12 pages, 7 figures
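The constraint set can be sketched in a few lines. This is not the paper's geometric conjugate gradient method; it is a plain projected-gradient stand-in, with a smooth ℓ1-like surrogate and column renormalization playing the role of a retraction onto the unit-norm-column constraint set. All names, shapes, and the step size are assumptions:

```python
import numpy as np

def normalize_columns(O):
    """Retract onto the set of matrices with unit-norm columns."""
    return O / np.linalg.norm(O, axis=0, keepdims=True)

def sparsity_objective(O, X, eps=1e-3):
    """Smooth l1-like surrogate of the analysis coefficients O @ X."""
    return np.sum(np.sqrt((O @ X) ** 2 + eps))

rng = np.random.default_rng(3)
X = rng.standard_normal((16, 200))     # training patches as columns
O = normalize_columns(rng.standard_normal((32, 16)))

# One projected-gradient step as a stand-in for the manifold iteration.
eps, step = 1e-3, 1e-2
Z = O @ X
grad = (Z / np.sqrt(Z ** 2 + eps)) @ X.T   # gradient of the surrogate
O_next = normalize_columns(O - step * grad)
```

The renormalization after each step keeps the iterate on the constraint set; the actual method instead follows geodesic-aware conjugate gradient directions on the manifold, which is what makes the geometry discussion in the paper necessary.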
A Multi-scale Generalized Shrinkage Threshold Network for Image Blind Deblurring in Remote Sensing
Remote sensing images are essential for many earth science applications, but
their quality can be degraded due to limitations in sensor technology and
complex imaging environments. To address this, various remote sensing image
deblurring methods have been developed to restore sharp, high-quality images
from degraded observational data. However, most traditional model-based
deblurring methods require predefined hand-crafted prior assumptions, which
are difficult to specify for complex applications, while most deep
learning-based deblurring methods are designed as black boxes, lacking
transparency and interpretability. In this work, we propose a novel blind
deblurring learning framework based on alternating iterations of shrinkage
thresholding, which alternately updates the blur kernel and the image and
provides a theoretical foundation for the network design. Additionally, we
propose a learnable blur kernel proximal mapping module to improve blur
kernel estimation in the kernel domain. We further propose a deep proximal
mapping module in the
image domain, which combines a generalized shrinkage threshold operator and a
multi-scale prior feature extraction block. This module also introduces an
attention mechanism to adaptively adjust the prior importance, thus avoiding
the drawbacks of hand-crafted image prior terms. Thus, a novel multi-scale
generalized shrinkage threshold network (MGSTNet) is designed to specifically
focus on learning deep geometric prior features to enhance image restoration.
Experiments demonstrate the superiority of our MGSTNet framework on remote
sensing image datasets compared to existing deblurring methods.
Comment: 12 pages
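A generalized shrinkage-threshold operator can be sketched as follows. This is not MGSTNet's learnable module; it is the common p-shrinkage form, which reduces to classic soft-thresholding at p = 1 and shrinks large coefficients less for p < 1. The function name and the small guard constant are assumptions:

```python
import numpy as np

def generalized_shrink(x, lam, p=0.5):
    """p-shrinkage operator.

    At p = 1 this is exact soft-thresholding; for p < 1 large entries
    are penalized less, which better preserves strong image features.
    """
    ax = np.maximum(np.abs(x), 1e-12)   # guard against 0 ** (negative power)
    return np.sign(x) * np.maximum(ax - lam ** (2 - p) * ax ** (p - 1), 0.0)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
y_soft = generalized_shrink(x, 1.0, p=1.0)   # classic soft threshold
y_gen = generalized_shrink(x, 1.0, p=0.5)    # shrinks large entries less
```

In a learned unrolled network such as the one described above, the threshold (and here the exponent) would become trainable per-stage parameters rather than fixed constants.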
Fitting 3D Morphable Models using Local Features
In this paper, we propose a novel fitting method that uses local image
features to fit a 3D Morphable Model to 2D images. To overcome the obstacle of
optimising a cost function that contains a non-differentiable feature
extraction operator, we use a learning-based cascaded regression method that
learns the gradient direction from data. The method allows us to solve for
shape and pose parameters simultaneously. Our method is thoroughly evaluated
on Morphable Model generated data, and first results on real data are
presented.
Compared to traditional fitting methods, which use simple raw features like
pixel colour or edge maps, local features have been shown to be much more
robust against variations in imaging conditions. Our approach is unique in that
we are the first to use local features to fit a Morphable Model.
Because of its speed, our method is suitable for real-time applications. Our
cascaded regression framework is available as an open source library
(https://github.com/patrikhuber).
Comment: Submitted to ICIP 2015; 4 pages, 4 figures
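The cascaded-regression idea can be sketched on a synthetic problem. This is not the paper's fitting pipeline: the "feature extractor" g below is an invented stand-in for local feature extraction at projected landmarks, and all names and sizes are assumptions. Each cascade stage fits a linear regressor that maps residual features at the current parameter estimate to a parameter update, sidestepping the non-differentiability of the true extractor:

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n_train = 6, 500

def g(p):
    """Toy stand-in for feature extraction at the model's projected
    landmarks (non-differentiable in the real pipeline, which is why
    the descent direction is learned from data rather than derived)."""
    return np.concatenate([p + 0.3 * np.sin(2 * p), p ** 3 / 3])

gt = rng.standard_normal((n_train, dim))              # "true" shape/pose params
obs = np.stack([g(p) for p in gt])                    # observed features
cur = gt + 0.3 * rng.standard_normal((n_train, dim))  # perturbed initialization

err0 = np.mean(np.abs(cur - gt))
for _ in range(3):                                # three cascade stages
    F = np.stack([g(p) for p in cur]) - obs       # residual features
    F = np.hstack([F, np.ones((n_train, 1))])     # bias column
    R, *_ = np.linalg.lstsq(F, gt - cur, rcond=None)  # learn the update map
    cur = cur + F @ R                             # apply the learned update
```

At test time only the sequence of learned regressors is applied, which is what makes the method fast enough for the real-time use mentioned above.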