    Accelerated graph-based nonlinear denoising filters

    Denoising filters, such as bilateral, guided, and total variation filters, applied to images on general graphs may require repeated application when the noise is not small enough. We formulate two techniques for accelerating the resulting iterations: the conjugate gradient method and Nesterov's acceleration. We numerically demonstrate the efficiency of the accelerated nonlinear filters for image denoising, observing a 2-12 times speed-up, i.e., the acceleration techniques reduce the number of iterations required to reach a given peak signal-to-noise ratio (PSNR) by a factor of 2 to 12.

    Comment: 10 pages, 6 figures, to appear in Procedia Computer Science, vol. 80, 2016, International Conference on Computational Science, San Diego, CA, USA, June 6-8, 2016
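
    As a rough illustration (our sketch, not the paper's code), the following shows how Nesterov-style extrapolation can wrap repeated application of a one-step denoising filter; filter_step is a hypothetical placeholder for any graph-based nonlinear filter.

```python
import numpy as np

def filter_step(u):
    # Hypothetical stand-in for one pass of a nonlinear denoising
    # filter (bilateral, guided, TV, ...); a 4-neighbour average is
    # used here purely so the sketch runs.
    return 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                   np.roll(u, 1, 1) + np.roll(u, -1, 1))

def accelerated_filter(u0, iters=50):
    """Repeated filtering u <- filter_step(u), wrapped in Nesterov
    extrapolation to reduce the number of iterations needed."""
    u_prev = u0.copy()
    v = u0.copy()                      # extrapolated iterate
    t = 1.0
    for _ in range(iters):
        u = filter_step(v)             # one plain filter application
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        v = u + (t - 1.0) / t_next * (u - u_prev)   # momentum step
        u_prev, t = u, t_next
    return u_prev
```

    The extrapolation sequence t is the standard Nesterov schedule; in practice the momentum step reuses the previous iterate at essentially no extra cost per filter application.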

    Acceleration of the PDHGM on strongly convex subspaces

    We propose several variants of the primal-dual method due to Chambolle and Pock. Without requiring full strong convexity of the objective functions, our methods are accelerated on subspaces with strong convexity. This yields mixed rates: $O(1/N^2)$ with respect to initialisation and $O(1/N)$ with respect to the dual sequence and the residual part of the primal sequence. We demonstrate the efficacy of the proposed methods on image processing problems lacking strong convexity, such as total generalised variation denoising and total variation deblurring.
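
    For orientation (a generic sketch of the underlying method, not of the accelerated subspace variants proposed here), the basic unaccelerated Chambolle-Pock iteration for total variation denoising looks as follows; the regularisation weight lam, the step sizes, and the finite-difference operators are illustrative choices.

```python
import numpy as np

def grad(u):
    # Forward-difference gradient with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise_pdhg(f, lam=0.1, iters=200):
    """Unaccelerated Chambolle-Pock iteration for
    min_x 0.5*||x - f||^2 + lam*TV(x)."""
    L = np.sqrt(8.0)                   # bound on the norm of grad
    sigma = tau = 1.0 / L              # so that sigma*tau*L^2 <= 1
    x = f.copy(); x_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        # Dual ascent step, then pointwise projection onto {|p| <= lam}.
        gx, gy = grad(x_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.hypot(px, py) / lam)
        px /= norm; py /= norm
        # Primal step: prox of 0.5*||x - f||^2.
        x_old = x
        x = (x + tau * (div(px, py) + f)) / (1.0 + tau)
        # Over-relaxation with theta = 1.
        x_bar = 2.0 * x - x_old
    return x
```

    The accelerated variants in the paper adapt the step sizes and over-relaxation parameter on the strongly convex subspaces; here both are simply held fixed.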

    Variational image regularization with Euler's elastica using a discrete gradient scheme

    This paper concerns an optimization algorithm for unconstrained non-convex problems where the objective function has sparse connections between the unknowns. The algorithm is based on applying a dissipation-preserving numerical integrator, the Itoh-Abe discrete gradient scheme, to the gradient flow of the objective function, guaranteeing energy decrease regardless of step size. We introduce the algorithm, prove a convergence rate estimate for non-convex problems with Lipschitz continuous gradients, and show an improved convergence rate when the objective function has sparse connections between the unknowns. The algorithm is presented in serial and parallel versions. Numerical tests illustrate its use in Euler's elastica regularized imaging problems, verify the convergence rate, and compare the method's execution time with that of the iPiano, gradient descent, and Heavy-ball algorithms.
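
    To make the scheme concrete, here is a minimal sketch of one serial Itoh-Abe step (our illustration, assuming a plain NumPy objective V; the scalar root solver and the explicit-Euler initial guess are our choices, not prescribed by the paper). Note that it needs only evaluations of V, no gradients.

```python
import numpy as np
from scipy.optimize import fsolve

def itoh_abe_step(V, x, tau, h=1e-6):
    """One serial Itoh-Abe step on V: R^n -> R. Coordinate i is updated
    by solving (y - x_i)^2 = -tau * (V(x with x_i = y) - V(x)), so V
    cannot increase, regardless of the step size tau > 0."""
    x = x.copy()
    for i in range(x.size):
        v_old = V(x)
        xi = x[i]

        def g(y_arr):
            # Residual of the equivalent scalar equation
            # (y - x_i) + tau * (V(y) - V_old) / (y - x_i) = 0,
            # which excludes the trivial root y = x_i.
            y = y_arr[0]
            x[i] = y
            r = (y - xi) + tau * (V(x) - v_old) / (y - xi)
            x[i] = xi
            return [r]

        # Explicit-Euler initial guess from a forward-difference slope
        # (hypothetical choice; the paper does not fix a solver).
        x[i] = xi + h
        slope = (V(x) - v_old) / h
        x[i] = xi
        if abs(slope) < 1e-12:
            continue                   # coordinate already stationary
        x[i] = fsolve(g, [xi - tau * slope])[0]
    return x
```

    Each scalar solve enforces the discrete-gradient identity exactly, which is what yields the unconditional energy decrease the abstract refers to.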