Structure tensor total variation
This is the final version of the article, available from the Society for Industrial and Applied Mathematics via the DOI in this record. We introduce a novel generic energy functional that we employ to solve inverse imaging problems
within a variational framework. The proposed regularization family, termed structure tensor
total variation (STV), penalizes the eigenvalues of the structure tensor and is suitable for both
grayscale and vector-valued images. It generalizes several existing variational penalties, including
the total variation seminorm and vectorial extensions of it. Meanwhile, thanks to the structure
tensor’s ability to capture first-order information around a local neighborhood, the STV functionals
can provide more robust measures of image variation. Further, we prove that the STV regularizers
are convex while they also satisfy several invariance properties w.r.t. image transformations. These
properties qualify them as ideal candidates for imaging applications. In addition, for the discrete
version of the STV functionals we derive an equivalent definition that is based on the patch-based
Jacobian operator, a novel linear operator which extends the Jacobian matrix. This alternative
definition allows us to derive a dual problem formulation. The duality of the problem paves the
way for employing robust tools from convex optimization and enables us to design an efficient
and parallelizable optimization algorithm. Finally, we present extensive experiments on various
inverse imaging problems, where we compare our regularizers with other competing regularization
approaches. Our results are shown to be systematically superior, both quantitatively and visually.
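To make the eigenvalue-based penalty concrete, here is a minimal sketch of an STV-style functional for a grayscale image. It omits the local smoothing kernel that defines the full structure tensor (so the tensor reduces to the per-pixel gradient outer product), and the choice `p=1` is an illustrative assumption; with it and no smoothing, the penalty coincides with the isotropic total variation.

```python
import numpy as np

def structure_tensor_penalty(img, p=1):
    """Sketch of an STV-style penalty: an l_p norm of the square roots
    of the structure-tensor eigenvalues, summed over pixels. No
    pre-smoothing kernel is applied, an assumption for brevity."""
    gx, gy = np.gradient(np.asarray(img, float))
    # Entries of the 2x2 structure tensor at each pixel.
    Jxx, Jyy, Jxy = gx * gx, gy * gy, gx * gy
    # Closed-form eigenvalues of a symmetric 2x2 matrix.
    tr = Jxx + Jyy
    det = Jxx * Jyy - Jxy ** 2
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    # Square roots of the eigenvalues are the local singular values.
    roots = np.stack([np.sqrt(np.maximum(lam1, 0.0)),
                      np.sqrt(np.maximum(lam2, 0.0))])
    return float(np.sum(np.sum(roots ** p, axis=0) ** (1.0 / p)))
```

For a linear ramp image, one eigenvalue equals the squared gradient magnitude and the other vanishes, so the `p=1` penalty reproduces the TV seminorm, consistent with STV generalizing it.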
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of -stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solve the
corresponding large-scale regularized optimization problem.
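The forward-backward splitting scheme mentioned in (iii) can be sketched in a few lines for the simplest low-complexity prior, the l1 norm, whose proximal operator is soft-thresholding. The step size and iteration count below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def forward_backward_l1(A, y, lam, step, n_iter=50):
    """Sketch of forward-backward (proximal gradient / ISTA) for
    min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)     # forward step on the smooth data term
        z = x - step * grad
        # backward step: prox of the l1 norm is soft-thresholding
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

With `A` the identity this reduces to one soft-thresholding of `y`, which illustrates how the prior zeroes out small coefficients, i.e. promotes the sparse (low-dimensional) models the chapter studies.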
A combined first and second order variational approach for image reconstruction
In this paper we study a variational problem in the space of functions of
bounded Hessian. Our model constitutes a straightforward higher-order extension
of the well-known ROF functional (total variation minimisation), to which we add
a non-smooth second order regulariser. It combines convex functions of the
total variation and the total variation of the first derivatives. In what
follows, we prove existence and uniqueness of minimisers of the combined model
and present the numerical solution of the corresponding discretised problem by
employing the split Bregman method. The paper is furnished with applications of
our model to image denoising, deblurring as well as image inpainting. The
obtained numerical results are compared with results obtained from total
generalised variation (TGV), infimal convolution and Euler's elastica, three
other state of the art higher-order models. The numerical discussion confirms
that the proposed higher-order model competes with models of its kind in
avoiding the creation of undesirable artifacts and blocky-like structures in
the reconstructed images -- a known disadvantage of the ROF model -- while
being simple and efficiently solvable numerically.
Comment: 34 pages, 89 figures
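The combined first- and second-order objective can be written down directly in discrete form: data fidelity plus the total variation of the image plus the total variation of its first derivatives (a bounded-Hessian-type term). The weights `alpha` and `beta` below are hypothetical parameters for illustration, not values from the paper, and `np.gradient` stands in for whichever finite-difference scheme the discretisation uses.

```python
import numpy as np

def combined_energy(u, f, alpha, beta):
    """Sketch of the combined discrete objective:
    0.5*||u - f||^2 + alpha*TV(u) + beta*TV(grad u)."""
    u = np.asarray(u, float)
    ux, uy = np.gradient(u)
    tv1 = np.sum(np.hypot(ux, uy))            # first-order term |grad u|
    uxx, uxy = np.gradient(ux)                # second derivatives of u
    uyx, uyy = np.gradient(uy)
    tv2 = np.sum(np.sqrt(uxx**2 + uxy**2 + uyx**2 + uyy**2))
    return 0.5 * np.sum((u - np.asarray(f, float)) ** 2) + alpha * tv1 + beta * tv2
```

On a linear ramp the second-order term vanishes while the first-order term does not, which is exactly why adding it discourages the blocky, piecewise-constant reconstructions the ROF model favours.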
Generating structured non-smooth priors and associated primal-dual methods
The purpose of the present chapter is to bind together and extend some recent developments regarding data-driven non-smooth regularization techniques in image processing through the means of a bilevel minimization scheme. The scheme, considered in function space, takes advantage of a dualization framework and is designed to produce spatially varying regularization parameters adapted to the data for well-known regularizers, e.g. Total Variation and Total Generalized Variation, leading to automated (monolithic) image reconstruction workflows. The chapter includes the theory of bilevel optimization and the theoretical background of the dualization framework, as well as a brief review of the aforementioned regularizers and their parameterization, making it self-contained. Aspects of the numerical implementation of the scheme are discussed and numerical examples are provided.
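A spatially varying regularization parameter simply turns the scalar TV weight into a per-pixel weight map. The sketch below evaluates such a weighted TV term; the weight map `w` is a hypothetical input standing in for what the bilevel scheme would learn from the data, not the output of the chapter's method.

```python
import numpy as np

def weighted_tv(u, w):
    """Sketch of a TV term with a spatially varying weight w(x):
    sum over pixels of w * |grad u|."""
    ux, uy = np.gradient(np.asarray(u, float))
    return float(np.sum(np.asarray(w, float) * np.hypot(ux, uy)))
```

Setting `w` constant recovers the usual scalar-parameter TV; lowering `w` near fine detail and raising it in flat regions is the kind of adaptation the bilevel scheme automates.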
Image decomposition using a second-order variational model and wavelet shrinkage
The paper is devoted to a new model for image decomposition that splits an image f into three components u+v+w, where u is a piecewise-smooth or "cartoon" component, v is a texture component, and w is the noise part, within a variational approach. This decomposition model in fact incorporates the advantages of two preceding models: the second-order total variation minimization of Rudin-Osher-Fatemi (ROF2) and wavelet shrinkage for oscillatory functions. It is presented as an extension of the three-component decomposition algorithm of Aujol et al. in [Dual norms and image decomposition models, Int. J. Comput. Vis. 63(1) (2005) 85-104]. It also continues the idea introduced previously by the authors in [Denoising 3D medical images using a second order variational model and wavelet shrinkage, Imag. Anal. and Rec., Lecture Notes in Computer Science, 7325 (2012), 138-145] for a two-component decomposition model. The ROF2 model, first proposed by Bergounioux et al. in [A second-order model for image denoising, Set-Valued Anal. and Var. Anal., 18 (2010), 277-306], is an improved regularization method that overcomes the undesirable staircasing effect. Wavelet shrinkage is combined with it to separate the oscillating part due to texture from that due to noise. Experimental results validate the proposed algorithm and demonstrate that the image decomposition model presents effective and comparable performance to other previous models.
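The wavelet-shrinkage step that separates texture from noise amounts to soft-thresholding the detail coefficients of a wavelet transform. Here is a minimal sketch using a one-level Haar transform on a 1-D signal; the paper works on images with its own wavelet choice, so this only illustrates the shrinkage mechanism, and the threshold is a hypothetical parameter.

```python
import numpy as np

def haar_shrink(signal, thresh):
    """Sketch of one-level Haar wavelet shrinkage: soft-threshold the
    detail coefficients, keep the approximation, invert the transform.
    Assumes an even-length 1-D signal."""
    s = np.asarray(signal, float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft shrinkage
    out = np.empty_like(s)                 # inverse Haar transform
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

With a zero threshold the signal is reconstructed exactly; a large threshold kills all detail coefficients and leaves only local averages, which is the mechanism used to strip small oscillations attributed to noise.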