97 research outputs found

    Block-proximal methods with spatially adapted acceleration

    We study and develop (stochastic) primal--dual block-coordinate descent methods for convex problems based on the method due to Chambolle and Pock. Our methods have known convergence rates for the iterates and the ergodic gap: $O(1/N^2)$ if each block is strongly convex, $O(1/N)$ if no convexity is present, and, more generally, a mixed rate $O(1/N^2)+O(1/N)$ if only some blocks are strongly convex, with the faster rate holding on those blocks. Additional novelties of our methods include blockwise-adapted step lengths and acceleration, as well as the ability to update both the primal and dual variables randomly in blocks under a very light compatibility condition. In other words, these variants of our methods are doubly stochastic. We test the proposed methods on various image processing problems, where we employ pixelwise-adapted acceleration.
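
    For orientation, the sketch below shows the basic deterministic Chambolle--Pock iteration that these block-coordinate methods build on, for the saddle-point problem $\min_x \max_y \langle Kx, y\rangle + G(x) - F^*(y)$. It is a minimal illustration only: the blockwise-adapted step lengths, random block sampling, and pixelwise acceleration described above are not reproduced here, and prox_g, prox_fstar, and K are caller-supplied placeholders.

        import numpy as np

        def chambolle_pock(K, prox_g, prox_fstar, x0, y0, tau, sigma, iters=100):
            """Basic primal-dual iteration; requires tau * sigma * ||K||^2 < 1."""
            x, y = x0.copy(), y0.copy()
            x_bar = x.copy()
            for _ in range(iters):
                y = prox_fstar(y + sigma * (K @ x_bar), sigma)  # dual proximal step
                x_new = prox_g(x - tau * (K.T @ y), tau)        # primal proximal step
                x_bar = 2 * x_new - x                           # over-relaxation
                x = x_new
            return x, y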

    The jump set under geometric regularisation. Part 1: Basic technique and first-order denoising

    Let $u \in \mathrm{BV}(\Omega)$ solve the total variation denoising problem with $L^2$-squared fidelity and data $f$. Caselles et al. [Multiscale Model. Simul. 6 (2008), 879--894] have shown the containment $\mathcal{H}^{m-1}(J_u \setminus J_f)=0$ of the jump set $J_u$ of $u$ in that of $f$. Their proof unfortunately depends heavily on the co-area formula, as do many results in this area, and as such is not directly extensible to higher-order, curvature-based, and other advanced geometric regularisers, such as total generalised variation (TGV) and Euler's elastica. These have received increased attention in recent times due to their better practical regularisation properties compared to conventional total variation or wavelets. We prove analogous jump set containment properties for a general class of regularisers. We do this with novel Lipschitz transformation techniques, and do not require the co-area formula. In the present Part 1 we demonstrate the general technique on first-order regularisers, while in Part 2 we will extend it to higher-order regularisers. In particular, we concentrate in this part on TV and, as a novelty, Huber-regularised TV. We also demonstrate that the technique would apply to non-convex TV models as well as the Perona--Malik anisotropic diffusion, if these approaches were well-posed to begin with.
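
    Concretely, the denoising problem and the jump-set inclusion in question can be written as follows (standard ROF form; the fidelity weight $\alpha > 0$ is our notation for the regularisation parameter):

        \min_{u \in \mathrm{BV}(\Omega)} \; \frac{1}{2} \int_\Omega (u - f)^2 \, dx + \alpha\, \mathrm{TV}(u),
        \qquad \text{with the conclusion} \qquad
        \mathcal{H}^{m-1}(J_u \setminus J_f) = 0.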

    Primal-dual extragradient methods for nonlinear nonsmooth PDE-constrained optimization

    We study the extension of the Chambolle--Pock primal-dual algorithm to nonsmooth optimization problems involving nonlinear operators between function spaces. Local convergence is shown under technical conditions including metric regularity of the corresponding primal-dual optimality conditions. We also show convergence for a Nesterov-type accelerated variant provided one part of the functional is strongly convex. We show the applicability of the accelerated algorithm to examples of inverse problems with $L^1$- and $L^\infty$-fitting terms as well as of state-constrained optimal control problems, where convergence can be guaranteed after introducing an (arbitrarily small, still nonsmooth) Moreau--Yosida regularization. This is verified in numerical examples.
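
    As a rough illustration, one way to extend the Chambolle--Pock iteration to a nonlinear operator $K$ is to evaluate $K$ itself in the dual step and the adjoint of its derivative $\nabla K(x)$ in the primal step. The sketch below shows this pattern only; the paper's precise scheme, step-length conditions, and metric-regularity assumptions are not reproduced, and K, dK, prox_g, prox_fstar are placeholders.

        import numpy as np

        def nonlinear_pdhgm(K, dK, prox_g, prox_fstar, x0, y0, tau, sigma, iters=100):
            """K(x): nonlinear operator; dK(x): its Jacobian at x as a matrix."""
            x, y = x0.copy(), y0.copy()
            x_bar = x.copy()
            for _ in range(iters):
                y = prox_fstar(y + sigma * K(x_bar), sigma)   # dual step uses K itself
                x_new = prox_g(x - tau * (dK(x).T @ y), tau)  # primal step uses the adjoint Jacobian
                x_bar = 2 * x_new - x                         # over-relaxation
                x = x_new
            return x, y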

    Acceleration of the PDHGM on strongly convex subspaces

    We propose several variants of the primal-dual method due to Chambolle and Pock. Without requiring full strong convexity of the objective functions, our methods are accelerated on subspaces with strong convexity. This yields mixed rates: $O(1/N^2)$ with respect to initialisation, and $O(1/N)$ with respect to the dual sequence and the residual part of the primal sequence. We demonstrate the efficacy of the proposed methods on image processing problems lacking strong convexity, such as total generalised variation denoising and total variation deblurring.
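
    For comparison, the classical acceleration rule of Chambolle and Pock (their Algorithm 2) assumes $\gamma$-strong convexity of $G$ on the whole space and updates the step lengths and extrapolation parameter every iteration; the methods above relax exactly this assumption to a subspace. A minimal sketch of the classical rule, with prox_g, prox_fstar, and K again placeholders:

        import numpy as np

        def accelerated_cp(K, prox_g, prox_fstar, x0, y0, tau, sigma, gamma, iters=100):
            """Chambolle--Pock with acceleration for gamma-strongly convex G."""
            x, y = x0.copy(), y0.copy()
            x_bar = x.copy()
            for _ in range(iters):
                y = prox_fstar(y + sigma * (K @ x_bar), sigma)
                x_new = prox_g(x - tau * (K.T @ y), tau)
                theta = 1.0 / np.sqrt(1.0 + 2.0 * gamma * tau)  # acceleration parameter
                x_bar = x_new + theta * (x_new - x)             # extrapolation
                tau, sigma = theta * tau, sigma / theta         # step-length update
                x = x_new
            return x, y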

    Asymptotic behaviour of total generalised variation

    The recently introduced second-order total generalised variation functional $\mathrm{TGV}_{\beta,\alpha}^{2}$ has been a successful regulariser for image processing purposes. Its definition involves two positive parameters $\alpha$ and $\beta$ whose values determine the amount and the quality of the regularisation. In this paper we report on the behaviour of $\mathrm{TGV}_{\beta,\alpha}^{2}$ in the cases where the parameters $\alpha$, $\beta$, as well as their ratio $\beta/\alpha$, become very large or very small. Among other results, we prove that for sufficiently symmetric two-dimensional data and large ratio $\beta/\alpha$, $\mathrm{TGV}_{\beta,\alpha}^{2}$ regularisation coincides with total variation ($\mathrm{TV}$) regularisation.
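
    For reference, the second-order TGV functional admits the well-known minimum ("differentiation cascade") representation below; the placement of $\alpha$ on the first-order term and $\beta$ on the second-order term is our reading of the subscript convention above:

        \mathrm{TGV}^{2}_{\beta,\alpha}(u) = \min_{w \in \mathrm{BD}(\Omega)} \; \alpha \|Du - w\|_{\mathcal{M}(\Omega;\mathbb{R}^m)} + \beta \|Ew\|_{\mathcal{M}(\Omega;\mathrm{Sym}^2(\mathbb{R}^m))},

    where $Ew$ denotes the distributional symmetrised gradient of $w$ and $\|\cdot\|_{\mathcal{M}}$ the Radon norm.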

    Interior-proximal primal-dual methods

    We study preconditioned proximal point methods for a class of saddle point problems, where the preconditioner decouples the overall proximal point method into an alternating primal--dual method. This is akin to the Chambolle--Pock method or the ADMM. In our work, we replace the squared distance in the dual step by a barrier function on a symmetric cone, while using a standard (Euclidean) proximal step for the primal variable. We show that under non-degeneracy and simple linear constraints, such a hybrid primal--dual algorithm can achieve linear convergence on originally strongly convex problems involving the second-order cone in their saddle point form. On general symmetric cones, we are only able to show an $O(1/N)$ rate. These results are based on estimates of strong convexity of the barrier function, extended with a penalty to the boundary of the symmetric cone.
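
    To illustrate the idea on the simplest symmetric cone, the nonnegative orthant: replacing the squared distance in the dual step by the Bregman distance of the log barrier $h(y) = -\sum_i \log y_i$ keeps the dual iterates in the interior of the cone. The concrete bilinear saddle-point problem $\min_x \max_{y \ge 0} \langle Kx - b, y\rangle + G(x)$ and the closed-form barrier step below are our illustration only, not the paper's general symmetric-cone setting.

        import numpy as np

        def interior_proximal_pd(K, b, prox_g, x0, y0, tau, sigma, iters=100):
            """Barrier (interior-proximal) dual step on the nonnegative orthant."""
            x, y = x0.copy(), y0.copy()   # require y0 > 0 (interior starting point)
            x_bar = x.copy()
            for _ in range(iters):
                c = K @ x_bar - b
                # Bregman dual step: grad h(y_new) = grad h(y) + sigma * c,
                # with grad h(y) = -1/y; solving componentwise gives
                y = y / (1.0 - sigma * c * y)  # stays > 0 for small enough sigma
                x_new = prox_g(x - tau * (K.T @ y), tau)
                x_bar = 2 * x_new - x
                x = x_new
            return x, y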