Block-proximal methods with spatially adapted acceleration
We study and develop (stochastic) primal--dual block-coordinate descent
methods for convex problems based on the method due to Chambolle and Pock. Our
methods have known convergence rates for the iterates and the ergodic gap:
$O(1/N^2)$ if each block is strongly convex, $O(1/N)$ if no convexity is
present, and more generally a mixed rate for strongly convex
blocks, if only some blocks are strongly convex. Additional novelties of our
methods include blockwise-adapted step lengths and acceleration, as well as the
ability to update both the primal and dual variables randomly in blocks under a
very light compatibility condition. In other words, these variants of our
methods are doubly-stochastic. We test the proposed methods on various image
processing problems, where we employ pixelwise-adapted acceleration
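These methods build on the basic primal--dual (Chambolle--Pock / PDHGM) iteration, which alternates a dual proximal (ascent) step, a primal proximal (descent) step, and an over-relaxation of the primal iterate. The following minimal Python sketch illustrates that base iteration on a toy one-dimensional problem min_x 0.5*||x - b||^2 + lam*||Dx||_1; the toy objective, the function name chambolle_pock, the scalar step sizes tau and sigma, and the test data are illustrative assumptions only, and the blockwise/pixelwise-adapted step lengths and stochastic block updates described above are not reproduced here.

import numpy as np

def chambolle_pock(b, lam=1.0, tau=0.25, sigma=0.25, iters=500):
    # Toy problem: min_x 0.5*||x - b||^2 + lam*||D x||_1, D = 1D forward differences.
    # Step sizes satisfy tau*sigma*||D||^2 < 1 since ||D||^2 <= 4.
    n = len(b)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n forward-difference matrix
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(n - 1)
    for _ in range(iters):
        # Dual step: prox of sigma*(lam*||.||_1)^*, i.e. projection onto [-lam, lam].
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # Primal step: prox of tau*0.5*||. - b||^2.
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        # Over-relaxation with theta = 1.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = np.repeat([0.0, 1.0], 50) + 0.1 * rng.standard_normal(100)
    print(chambolle_pock(b, lam=0.5)[:5])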
The jump set under geometric regularisation. Part 1: Basic technique and first-order denoising
Let $u \in \mbox{BV}(\Omega)$ solve the total variation denoising problem
with $L^2$-squared fidelity and data $f$. Caselles et al. [Multiscale Model.
Simul. 6 (2008), 879--894] have shown the containment of the jump set $J_u$ of $u$ in the jump set $J_f$ of $f$. Their proof
unfortunately depends heavily on the co-area formula, as do many results in
this area, and as such is not directly extensible to higher-order,
curvature-based, and other advanced geometric regularisers, such as total
generalised variation (TGV) and Euler's elastica. These have received increased
attention in recent times due to their better practical regularisation
properties compared to conventional total variation or wavelets. We prove
analogous jump set containment properties for a general class of regularisers.
We do this with novel Lipschitz transformation techniques, and do not require
the co-area formula. In the present Part 1 we demonstrate the general technique
on first-order regularisers, while in Part 2 we will extend it to higher-order
regularisers. In particular, we concentrate in this part on TV and, as a
novelty, Huber-regularised TV. We also demonstrate that the technique would
apply to non-convex TV models as well as the Perona-Malik anisotropic
diffusion, if these approaches were well-posed to begin with
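For concreteness, the denoising problem referred to above is, in its standard form with a regularisation parameter $\alpha > 0$ (the parameter name being an assumption of this sketch rather than the paper's notation),
\[
  \min_{u \in \mbox{BV}(\Omega)} \; \frac{1}{2} \int_\Omega (u - f)^2 \,\mathrm{d}x + \alpha \, \mathrm{TV}(u),
\]
and the containment property in question states that, up to an $\mathcal{H}^{m-1}$-negligible set, the jump set $J_u$ of a solution $u$ is contained in the jump set $J_f$ of the data $f$.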
Primal-dual extragradient methods for nonlinear nonsmooth PDE-constrained optimization
We study the extension of the Chambolle--Pock primal-dual algorithm to
nonsmooth optimization problems involving nonlinear operators between function
spaces. Local convergence is shown under technical conditions including metric
regularity of the corresponding primal-dual optimality conditions. We also show
convergence for a Nesterov-type accelerated variant provided one part of the
functional is strongly convex.
We show the applicability of the accelerated algorithm to examples of inverse
problems with $L^1$- and $L^2$-fitting terms as well as of
state-constrained optimal control problems, where convergence can be guaranteed
after introducing an (arbitrarily small, still nonsmooth) Moreau--Yosida
regularization. This is verified in numerical examples
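As background (a standard definition, not specific to this paper), the Moreau--Yosida regularisation of a proper, convex, lower semicontinuous functional $G$ with parameter $\gamma > 0$ is
\[
  G_\gamma(u) := \min_{v} \Big\{ G(v) + \frac{1}{2\gamma} \|u - v\|^2 \Big\},
\]
which is everywhere finite and differentiable with Lipschitz gradient; for the indicator functional of a convex set $C$, as arises from a state constraint, it reduces to the quadratic penalty $\frac{1}{2\gamma} \operatorname{dist}(u, C)^2$.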
Acceleration of the PDHGM on strongly convex subspaces
We propose several variants of the primal-dual method due to Chambolle and
Pock. Without requiring full strong convexity of the objective functions, our
methods are accelerated on subspaces with strong convexity. This yields mixed
rates, $O(1/N^2)$ with respect to initialisation and $O(1/N)$ with respect to
the dual sequence, and the residual part of the primal sequence. We demonstrate
the efficacy of the proposed methods on image processing problems lacking
strong convexity, such as total generalised variation denoising and total
variation deblurring
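For reference, the classical acceleration rule of Chambolle and Pock, which requires strong convexity of the whole primal objective with some factor $\gamma > 0$, updates the step and over-relaxation parameters as
\[
  \omega_i = \frac{1}{\sqrt{1 + 2\gamma\tau_i}}, \qquad \tau_{i+1} = \omega_i \tau_i, \qquad \sigma_{i+1} = \sigma_i / \omega_i.
\]
Roughly speaking, the variants proposed above restrict this kind of update to the strongly convex subspaces, which is what produces the mixed rates.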
Asymptotic behaviour of total generalised variation
The recently introduced second order total generalised variation functional
$\mathrm{TGV}^{2}_{\beta,\alpha}$ has been a successful regulariser for image
processing purposes. Its definition involves two positive parameters $\beta$
and $\alpha$ whose values determine the amount and the quality of the
regularisation. In this paper we report on the behaviour of
$\mathrm{TGV}^{2}_{\beta,\alpha}$ in the cases where the parameters, as well as
their ratio $\beta/\alpha$, become very large or very small. Among others, we
prove that for sufficiently symmetric two dimensional data and large ratio
$\beta/\alpha$, $\mathrm{TGV}^{2}_{\beta,\alpha}$ regularisation
coincides with total variation ($\mathrm{TV}$) regularisation
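For reference, in its common minimum (cascade) formulation the second order functional can be written, consistently with the parameter naming above, as
\[
  \mathrm{TGV}^{2}_{\beta,\alpha}(u) = \min_{w} \; \alpha \|Du - w\|_{\mathcal{M}(\Omega;\mathbb{R}^m)} + \beta \|Ew\|_{\mathcal{M}(\Omega;\mathrm{Sym}^2(\mathbb{R}^m))},
\]
where the minimum is taken over vector fields $w$ of bounded deformation and $Ew$ denotes the symmetrised distributional gradient of $w$; the ratio $\beta/\alpha$ thus balances first- and second-order penalisation, which is why the limiting regimes above are of interest.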
Interior-proximal primal-dual methods
We study preconditioned proximal point methods for a class of saddle point problems, where the preconditioner decouples the overall proximal point method into an alternating primal--dual method. This is akin to the Chambolle--Pock method or the ADMM. In our work, we replace the squared distance in the dual step by a barrier function on a symmetric cone, while using a standard (Euclidean) proximal step for the primal variable. We show that under non-degeneracy and simple linear constraints, such a hybrid primal--dual algorithm can achieve linear convergence on originally strongly convex problems involving the second-order cone in their saddle point form. On general symmetric cones, we are only able to show an $O(1/N)$ rate. These results are based on estimates of strong convexity of the barrier function, extended with a penalty to the boundary of the symmetric cone
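As an illustration of the idea (a sketch under assumptions, not the paper's exact scheme), the usual dual step of such an alternating primal--dual method solves
\[
  y^{k+1} = \arg\max_{y} \; \langle K \bar{x}^{k}, y \rangle - F^*(y) - \frac{1}{2\sigma} \|y - y^{k}\|^2,
\]
and the modification described above replaces the squared distance $\frac{1}{2\sigma}\|y - y^{k}\|^2$ by a distance generated by a barrier of the symmetric cone, for instance the standard barrier $b(\bar{y}, t) = -\log(t^2 - \|\bar{y}\|^2)$ of the second-order cone, which keeps the dual iterates in the interior of the cone.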
- …