Asymptotic behaviour of total generalised variation
The recently introduced second order total generalised variation (TGV) functional
has been a successful regulariser for image processing purposes. Its definition
involves two positive parameters whose values determine the amount and the
quality of the regularisation. In this paper we report on the behaviour of TGV
in the cases where the parameters, as well as their ratio, become very large or
very small. Among others, we prove that for sufficiently symmetric two-dimensional
data and a large parameter ratio, TGV regularisation coincides with total
variation (TV) regularisation.
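The TV limit described above can be illustrated numerically. Below is a minimal sketch of plain TV (ROF-type) denoising in one dimension, using a Chambolle-style dual projection scheme; the function names, step size, and discretisation are assumptions for illustration, not the authors' construction.

```python
import numpy as np

def grad(u):
    # Forward differences: (grad u)_i = u_{i+1} - u_i, length n-1.
    return np.diff(u)

def div(p):
    # Negative adjoint of grad: <grad u, p> = -<u, div p>.
    d = np.empty(len(p) + 1)
    d[0] = p[0]
    d[1:-1] = np.diff(p)
    d[-1] = -p[-1]
    return d

def tv_denoise_1d(f, lam, n_iter=1000, tau=0.25):
    """Chambolle-style dual iteration for min_u 0.5*||u - f||^2 + lam*TV(u).

    tau = 0.25 satisfies the 1D step-size restriction (||grad||^2 <= 4).
    """
    p = np.zeros(len(f) - 1)
    for _ in range(n_iter):
        g = grad(div(p) - f / lam)
        # Normalisation keeps the dual variable in the unit ball |p_i| <= 1.
        p = (p + tau * g) / (1.0 + tau * np.abs(g))
    return f - lam * div(p)
```

For a clean two-plateau step of height 1 on 20 + 20 points with lam = 1, the exact minimiser keeps two plateaus but shrinks the jump to 0.9, while the mean is preserved; both properties are easy to check against the output.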
A function space framework for structural total variation regularization with applications in inverse problems
In this work, we introduce a function space setting for a wide class of
structural/weighted total variation (TV) regularization methods motivated by
their applications in inverse problems. In particular, we consider a
regularizer that is the appropriate lower semi-continuous envelope (relaxation)
of a suitable total variation type functional initially defined for
sufficiently smooth functions. We study examples where this relaxation can be
expressed explicitly, and we also provide refinements for weighted total
variation for a wide range of weights. Since an integral characterization of
the relaxation in function space is, in general, not always available, we show
that, for a rather general linear inverse problems setting, instead of the
classical Tikhonov regularization problem, one can equivalently solve a
saddle-point problem where no a priori knowledge of an explicit formulation of
the structural TV functional is needed. In particular, motivated by concrete
applications, we deduce corresponding results for linear inverse problems with
norm-based and Poisson log-likelihood data discrepancy terms. Finally, we provide
proof-of-concept numerical examples in which we solve the saddle-point problem
for weighted TV denoising as well as for MR-guided PET image reconstruction.
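As a toy illustration of solving such a saddle-point problem directly, the sketch below runs a generic primal-dual (PDHG / Chambolle-Pock-type) iteration for one-dimensional weighted TV denoising with a quadratic data term. It is a hedged stand-in for the paper's general setting: the function names, step sizes, and discretisation are assumptions.

```python
import numpy as np

def grad(u):
    # Forward differences, length n-1.
    return np.diff(u)

def div(p):
    # Negative adjoint of grad: <grad u, p> = -<u, div p>.
    d = np.empty(len(p) + 1)
    d[0] = p[0]
    d[1:-1] = np.diff(p)
    d[-1] = -p[-1]
    return d

def weighted_tv_pdhg(f, w, n_iter=2000, tau=0.3, sigma=0.3):
    """PDHG for the saddle-point form of
        min_u  sum_i w_i |(grad u)_i| + 0.5*||u - f||^2,
    i.e. min_u max_{|p_i| <= w_i} <grad u, p> + 0.5*||u - f||^2.

    tau*sigma*||grad||^2 = 0.36 < 1, so the iteration converges.
    """
    u = f.copy()
    ubar = f.copy()
    p = np.zeros(len(f) - 1)
    for _ in range(n_iter):
        # Dual ascent + projection onto the weighted box |p_i| <= w_i.
        p = np.clip(p + sigma * grad(ubar), -w, w)
        u_old = u
        # Primal descent: prox of the quadratic data term.
        u = (u + tau * div(p) + tau * f) / (1.0 + tau)
        ubar = 2 * u - u_old
    return u
```

Only the dual constraint set depends on the weight, which mirrors the point that no explicit formula for the structural TV functional is needed: with the weight identically zero the iteration returns the data, and with a uniform unit weight it reduces to plain TV denoising.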
Analysis and optimisation of a variational model for mixed Gaussian and Salt & Pepper noise removal
We analyse a variational regularisation problem for mixed noise removal
that was recently proposed in [14]. The data discrepancy term of the model
combines L1 and L2 terms in an infimal convolution fashion and it is
appropriate for the joint removal of Gaussian and Salt & Pepper noise. In
this work we perform a finer analysis of the model which emphasises the
balancing effect of the two parameters appearing in the discrepancy term.
Namely, we study the asymptotic behaviour of the model for large and small
values of these parameters and we compare it to the corresponding variational
models with L1 and L2 data fidelity. Furthermore, we compute exact solutions
for simple data functions taking the total variation as regulariser. Using
these theoretical results, we then analytically study a bilevel optimisation
strategy for automatically selecting the parameters of the model by means of
a training set. Finally, we report some numerical results on the selection of
the optimal noise model via such a strategy, which confirm the validity of our
analysis and the use of popular data models in the case of "blind" model
selection.
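The balancing effect of the two discrepancy parameters can be made concrete at the pointwise level: minimising over the L1/L2 split of a scalar residual yields a Huber-type function, quadratic for small residuals (Gaussian regime) and linear for large ones (impulse regime). The snippet below is an illustrative sketch; the parameter names alpha and beta and the scalar, pointwise reduction are assumptions, not the exact functional of [14].

```python
import numpy as np

def ic_discrepancy(r, alpha, beta):
    """Closed form of the infimal convolution
        phi(r) = inf_v  alpha*|v| + beta*(r - v)^2,
    a Huber-type function: quadratic below alpha/(2*beta), linear above.
    """
    thresh = alpha / (2.0 * beta)
    return np.where(np.abs(r) <= thresh,
                    beta * r**2,
                    alpha * np.abs(r) - alpha**2 / (4.0 * beta))

def ic_bruteforce(r, alpha, beta, vs=np.linspace(-10.0, 10.0, 200001)):
    # Direct minimisation over a fine grid, as a sanity check.
    return np.min(alpha * np.abs(vs) + beta * (r - vs)**2)
```

Letting one parameter dominate recovers the pure L1 or pure L2 fidelity in the limit, which is the asymptotic behaviour the abstract refers to.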
Analytical aspects of spatially adapted total variation regularisation
In this paper we study the structure of solutions of the one dimensional weighted total variation regularisation problem, motivated by its application in signal recovery tasks. We study in depth the relationship between the weight function and the creation of new discontinuities in the solution. A partial semigroup property relating the weight function and the solution is shown, and analytic solutions for simple data functions are computed. We prove that the weighted total variation minimisation problem is well-posed even in the case of a vanishing weight function, despite the lack of coercivity. This is based on the fact that the total variation of the solution is bounded by the total variation of the data, a result that is also shown here. Finally, the relationship to the corresponding weighted fidelity problem is explored, showing that the two problems can produce completely different solutions even for very simple data functions.
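The role of a vanishing weight can be seen already at the discrete level: a jump placed on an edge where the weight is zero incurs no penalty at all, which is why coercivity fails and why discontinuities can appear there for free. A minimal sketch (the discretisation is an assumption):

```python
import numpy as np

def weighted_tv(u, w):
    """Discrete weighted total variation: sum_i w_i * |u_{i+1} - u_i|."""
    return float(np.sum(w * np.abs(np.diff(u))))

# The weight vanishes exactly on the edge carrying the jump, so the
# functional cannot control the jump height, mirroring the lack of
# coercivity discussed above.
u = np.array([0.0, 0.0, 0.0, 5.0, 5.0])
w = np.array([1.0, 1.0, 0.0, 1.0])
```

Here `weighted_tv(u, w)` is zero despite the jump of height 5, whereas the same jump on a unit-weight edge would contribute its full height.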