A flexible space-variant anisotropic regularisation for image restoration with automated parameter selection
We propose a new space-variant anisotropic regularisation term for
variational image restoration, based on the statistical assumption that the
gradients of the target image distribute locally according to a bivariate
generalised Gaussian distribution. The highly flexible variational structure of
the corresponding regulariser encodes several free parameters which hold the
potential for faithfully modelling the local geometry in the image and
describing local orientation preferences. For an automatic estimation of such
parameters, we design a robust maximum likelihood approach and report results
on its reliability on synthetic data and natural images. For the numerical
solution of the corresponding image restoration model, we use an iterative
algorithm based on the Alternating Direction Method of Multipliers (ADMM). A
suitable preliminary variable splitting together with a novel result in
multivariate non-convex proximal calculus yield a very efficient minimisation
algorithm. Several numerical results are reported which show a significant
quality improvement of the proposed model over related state-of-the-art
competitors, in particular in terms of texture and detail preservation.
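The ADMM scheme described in the abstract can be sketched in a much-simplified setting. Below is a minimal 1D anisotropic TV instance (a plain stand-in for the paper's space-variant anisotropic regulariser, not its actual model); the splitting v = Du, the quadratic u-update, and the soft-thresholding v-update are the standard ADMM ingredients, and all parameter names and values are illustrative:

```python
import numpy as np

def admm_tv_denoise(f, lam=0.3, rho=1.0, iters=300):
    """ADMM for 1D anisotropic TV denoising:
    min_u 0.5*||u - f||^2 + lam*||D u||_1, with the splitting v = D u."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)         # forward-difference operator, (n-1) x n
    A = np.eye(n) + rho * D.T @ D          # system matrix of the quadratic u-update
    u, v, w = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(A, f + rho * D.T @ (v - w))          # u-update
        t = D @ u + w
        v = np.sign(t) * np.maximum(np.abs(t) - lam / rho, 0.0)  # soft-threshold
        w += D @ u - v                                           # scaled dual update
    return u
```

For a piecewise-constant signal corrupted by Gaussian noise, the iteration smooths the noise while largely preserving the jump.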
Analysis and optimisation of a variational model for mixed Gaussian and Salt & Pepper noise removal
We analyse a variational regularisation problem for mixed noise removal that was recently proposed in [14]. The data discrepancy term of the model combines L1 and L2 terms in an infimal convolution fashion and is appropriate for the joint removal of Gaussian and Salt & Pepper noise. In this work we perform a finer analysis of the model which focuses on the balancing effect of the two parameters appearing in the discrepancy term. Namely, we study the asymptotic behaviour of the model for large and small values of these parameters and we compare it to the corresponding variational models with L1 and L2 data fidelity. Furthermore, we compute exact solutions for simple data functions, taking the total variation as regulariser. Using these theoretical results, we then analytically study a bilevel optimisation strategy for automatically selecting the parameters of the model by means of a training set. Finally, we report some numerical results on the selection of the optimal noise model via this strategy, which confirm the validity of our analysis and the use of popular data models in the case of "blind" model selection.
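The balancing effect of the two parameters can be seen pointwise: the infimal convolution of an L1 term lam1*|.| with a quadratic term (lam2/2)*(.)^2 has a closed Huber-type form, quadratic for small residuals (Gaussian regime) and linear for large ones (impulsive regime). A minimal numerical check of this identity (parameter names are illustrative, not the paper's notation):

```python
import numpy as np

def infconv_l1_l2(r, lam1, lam2):
    """phi(r) = min_s  lam1*|s| + (lam2/2)*(r - s)^2  (pointwise inf-convolution).
    Closed form: quadratic near zero, linear in the tails (Huber-type)."""
    r = np.asarray(r, dtype=float)
    quad = 0.5 * lam2 * r**2                       # small residuals: L2 branch active
    lin = lam1 * np.abs(r) - lam1**2 / (2 * lam2)  # large residuals: L1 branch active
    return np.where(np.abs(r) <= lam1 / lam2, quad, lin)
```

Increasing lam1 widens the quadratic (Gaussian-like) region, while increasing lam2 steepens it, which is one concrete way to read the asymptotic regimes studied in the abstract.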
Dualization and automatic distributed parameter selection of total generalized variation via bilevel optimization
Total Generalized Variation (TGV) regularization in image reconstruction relies on an infimal convolution type combination of generalized first- and second-order derivatives. This helps to avoid the staircasing effect of Total Variation (TV) regularization, while still preserving sharp contrasts in images. The associated regularization effect crucially hinges on two parameters whose proper adjustment represents a challenging task. In this work, a bilevel optimization framework with a suitable statistics-based upper-level objective is proposed in order to automatically select these parameters. The framework allows for spatially varying parameters, thus enabling better recovery in high-detail image areas. A rigorous dualization framework is established and, for the numerical solution, two Newton-type methods for the lower-level problem, i.e. the image reconstruction problem, and two bilevel TGV algorithms are introduced. Denoising tests confirm that automatically selected distributed regularization parameters lead, in general, to improved reconstructions when compared to results for scalar parameters.
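The bilevel structure, lower-level reconstruction nested inside an upper-level parameter-selection objective, can be illustrated in a toy form. The sketch below swaps in a closed-form Tikhonov lower-level problem for TGV and a grid search for the paper's Newton-type bilevel solvers; it only conveys the shape of the approach, with all names hypothetical:

```python
import numpy as np

def tikhonov_denoise(f, lam):
    """Lower-level problem (simple Tikhonov stand-in for TGV):
    u_lam = argmin_u 0.5*||u - f||^2 + (lam/2)*||D u||^2, solved in closed form."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, f)

def bilevel_select(f, u_true, grid):
    """Upper level: choose the lam whose reconstruction is closest to the
    ground truth of a training pair (grid search replaces a bilevel solver)."""
    errs = [np.linalg.norm(tikhonov_denoise(f, lam) - u_true) for lam in grid]
    return grid[int(np.argmin(errs))]
```

On a noisy training pair the selected parameter is strictly positive and improves on the noisy input, mirroring the role of the statistics-based upper-level objective (here replaced by a ground-truth distance).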
Generating structured non-smooth priors and associated primal-dual methods
The purpose of the present chapter is to bind together and extend some recent developments regarding data-driven non-smooth regularization techniques in image processing by means of a bilevel minimization scheme. The scheme, considered in function space, takes advantage of a dualization framework and is designed to produce spatially varying regularization parameters adapted to the data for well-known regularizers, e.g. Total Variation and Total Generalized Variation, leading to automated (monolithic) image reconstruction workflows. The inclusion of the theory of bilevel optimization and of the theoretical background of the dualization framework, together with a brief review of the aforementioned regularizers and their parameterization, makes this chapter self-contained. Aspects of the numerical implementation of the scheme are discussed and numerical examples are provided.
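For the primal-dual methods referenced in the chapter title, a minimal Chambolle-Pock type iteration for a scalar-parameter 1D TV denoising problem gives the flavour of the lower-level solvers involved; this is purely illustrative and not the chapter's function-space scheme, with step sizes chosen to satisfy the standard condition tau*sigma*||D||^2 <= 1:

```python
import numpy as np

def pd_tv_denoise(f, lam=0.3, tau=0.4, sigma=0.4, iters=300):
    """Primal-dual (Chambolle-Pock) iteration for
    min_u 0.5*||u - f||^2 + lam*||D u||_1 in 1D.
    Here ||D||^2 <= 4 for forward differences, so tau*sigma*4 <= 1 holds."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)
    u = f.copy()
    u_bar = u.copy()
    p = np.zeros(n - 1)
    for _ in range(iters):
        p = np.clip(p + sigma * (D @ u_bar), -lam, lam)    # dual prox: project onto |p| <= lam
        u_prev = u
        u = (u - tau * (D.T @ p) + tau * f) / (1.0 + tau)  # primal prox of 0.5*||. - f||^2
        u_bar = 2.0 * u - u_prev                           # extrapolation step
    return u
```

Spatially varying parameters, as produced by the chapter's scheme, would replace the scalar lam by a per-pixel weight inside the projection step.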