
    Variational image restoration by means of wavelets: Simultaneous decomposition, deblurring, and denoising

    Abstract: Inspired by papers of Vese–Osher [Modeling textures with total variation minimization and oscillating patterns in image processing, Technical Report 02-19, 2002] and Osher–Solé–Vese [Image decomposition and restoration using total variation minimization and the H^{-1} norm, Technical Report 02-57, 2002], we present a wavelet-based treatment of variational problems arising in the field of image processing. In particular, we follow their approach and discuss a special class of variational functionals that induce a decomposition of images into oscillating and cartoon components and possibly an appropriate 'noise' component. In the setting of the Vese–Osher and Osher–Solé–Vese papers, the cartoon component of an image is modeled by a BV function; the corresponding incorporation of BV penalty terms in the variational functional leads to PDE schemes that are numerically intensive. By replacing the BV penalty term by a B^1_1(L_1) term (which amounts to a slightly stronger constraint on the minimizer) and writing the problem in a wavelet framework, we obtain elegant and numerically efficient schemes with results very similar to those obtained in the two papers above. This approach moreover allows us to incorporate general bounded linear blur operators into the problem, so that the minimization leads to a simultaneous decomposition, deblurring, and denoising.
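    The B^1_1(L_1) penalty is what makes the wavelet formulation efficient: in a wavelet basis it reduces the variational problem to coefficient-wise soft-thresholding. Below is a minimal 1D sketch using a toy Haar transform, not the authors' full decomposition/deblurring scheme; all function names are illustrative.

```python
# Toy illustration: a B^1_1(L_1)-type penalty in a wavelet basis leads to
# soft-thresholding of wavelet coefficients. 1D Haar transform, signal
# length must be a power of two. Illustrative sketch only.

def haar_forward(x):
    """Full Haar decomposition of a length-2^k signal."""
    details = []
    approx = list(x)
    while len(approx) > 1:
        avg = [(approx[2*i] + approx[2*i+1]) / 2 for i in range(len(approx) // 2)]
        det = [(approx[2*i] - approx[2*i+1]) / 2 for i in range(len(approx) // 2)]
        details.append(det)   # finest scale first
        approx = avg
    return approx[0], details

def haar_inverse(a0, details):
    approx = [a0]
    for det in reversed(details):  # coarsest scale first
        nxt = []
        for a, d in zip(approx, det):
            nxt.extend([a + d, a - d])
        approx = nxt
    return approx

def soft(v, t):
    """Soft-thresholding: the proximal map of the l1 penalty."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def wavelet_denoise(x, thresh):
    a0, details = haar_forward(x)
    shrunk = [[soft(d, thresh) for d in level] for level in details]
    return haar_inverse(a0, shrunk)
```

A usage note: the approximation coefficient is left untouched and only detail coefficients are shrunk, mirroring how the penalty acts on oscillating content while preserving the cartoon mean.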

    Shape Calculus for Shape Energies in Image Processing

    Many image processing problems are naturally expressed as energy minimization or shape optimization problems, in which the free variable is a shape, such as a curve in 2D or a surface in 3D. Examples are image segmentation, multiview stereo reconstruction, and geometric interpolation from data point clouds. To obtain the solution of such a problem, one usually resorts to an iterative approach, a gradient descent algorithm, which updates a candidate shape gradually, deforming it into the optimal shape. Computing the gradient descent updates requires knowledge of the first variation of the shape energy, or rather the first shape derivative. In addition to the first shape derivative, one can also utilize the second shape derivative and develop a Newton-type method with faster convergence. Unfortunately, the knowledge of shape derivatives for shape energies in image processing is patchy. The second shape derivatives are known for only two of the energies in the image processing literature, and many results for the first shape derivative are limited, in the sense that they are either for curves in the plane, or developed for a specific representation of the shape, or for a very specific functional form of the shape energy. In this work, these limitations are overcome and the first and second shape derivatives are computed for large classes of shape energies that are representative of the energies found in image processing. Many of the formulas we obtain are new and some generalize previously existing results. These results are valid for general surfaces in any number of dimensions. This work is intended to serve as a cookbook for researchers who deal with shape energies for various applications in image processing and need to develop algorithms to compute the shapes minimizing these energies.
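    The gradient-descent idea described above can be illustrated on the simplest shape energy, curve length, whose first shape derivative yields curvature flow. A toy sketch on a closed polygon follows (illustrative only; the paper's general derivative formulas are not reproduced here).

```python
# Illustrative shape gradient descent for the length energy of a closed
# polygon: the discrete Laplacian of the vertices approximates the
# (negative) shape gradient of length, i.e. curvature flow.
import math

def length(pts):
    """Perimeter of a closed polygon given as a list of (x, y) vertices."""
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def descent_step(pts, tau=0.1):
    """One explicit gradient-descent step: move each vertex toward the
    midpoint of its neighbors (discrete curvature flow)."""
    n = len(pts)
    out = []
    for i in range(n):
        px, py = pts[i]
        ax, ay = pts[(i - 1) % n]
        bx, by = pts[(i + 1) % n]
        lx, ly = ax + bx - 2 * px, ay + by - 2 * py  # discrete Laplacian
        out.append((px + tau * lx, py + tau * ly))
    return out
```

Each step strictly decreases the length energy for small enough step size tau, which is the defining property of the gradient-descent updates the abstract refers to.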

    A variable metric forward-backward method with extrapolation

    Forward-backward methods are a very useful tool for the minimization of a functional given by the sum of a differentiable term and a nondifferentiable one, and their investigation has attracted considerable effort from many researchers in the last decade. In this paper we focus on the convex case and, inspired by recent approaches for accelerating first-order iterative schemes, we develop a scaled inertial forward-backward algorithm which is based on a metric changing at each iteration and on a suitable extrapolation step. Unlike standard forward-backward methods with extrapolation, our scheme is able to handle functions whose domain is not the entire space. Both an O(1/k^2) convergence rate estimate on the objective function values and the convergence of the sequence of iterates are proved. Numerical experiments on several test problems arising from image processing, compressed sensing, and statistical inference show the effectiveness of the proposed method in comparison to well-performing state-of-the-art algorithms.
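    The extrapolation step the abstract refers to can be sketched in its simplest form: a FISTA-style forward-backward iteration with the identity metric (the paper's variable-metric scaling and domain handling are omitted; the 1D test problem and names here are illustrative).

```python
# Minimal inertial forward-backward sketch for min_x 0.5*(x - b)^2 + lam*|x|.
# The exact minimizer is the soft-thresholding of b, so the iteration can
# be checked against a known answer. Identity metric, scalar case only.

def prox_l1(v, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def fb_extrapolated(b, lam, steps=200, L=1.0):
    x_prev = x = 0.0
    t_prev = 1.0
    for _ in range(steps):
        t = (1.0 + (1.0 + 4.0 * t_prev ** 2) ** 0.5) / 2.0
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)    # extrapolation step
        grad = y - b                                    # forward: gradient of 0.5*(y-b)^2
        x_prev, x = x, prox_l1(y - grad / L, lam / L)   # backward: proximal step
        t_prev = t
    return x
```

The inertial sequence t_k is the standard Nesterov/FISTA choice that delivers the O(1/k^2) rate on function values mentioned in the abstract.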

    Jump-sparse and sparse recovery using Potts functionals

    We recover jump-sparse and sparse signals from blurred incomplete data corrupted by (possibly non-Gaussian) noise using inverse Potts energy functionals. We obtain analytical results (existence of minimizers, complexity) on inverse Potts functionals and provide relations to sparsity problems. We then propose a new optimization method for these functionals which is based on dynamic programming and the alternating direction method of multipliers (ADMM). A series of experiments shows that the proposed method yields very satisfactory jump-sparse and sparse reconstructions, respectively. We highlight the capability of the method by comparing it with classical and recent approaches such as TV minimization (jump-sparse signals), orthogonal matching pursuit, iterative hard thresholding, and iteratively reweighted ℓ^1 minimization (sparse signals).
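    The dynamic-programming component can be illustrated on the classical (non-inverse) 1D Potts problem, minimizing gamma times the number of jumps plus the squared data fit. This is the standard O(n^2) recursion over the last constant segment, not the paper's full ADMM scheme for the inverse problem.

```python
# Exact solver for the 1D Potts problem
#   min_x  gamma * (#jumps of x) + sum_i (x_i - y_i)^2
# via dynamic programming over the left boundary of the last segment.

def potts_1d(y, gamma):
    n = len(y)
    B = [0.0] + [float("inf")] * n   # B[r] = optimal cost of the prefix y[:r]
    jump = [0] * (n + 1)             # left boundary (1-based) of the last segment
    for r in range(1, n + 1):
        s = s2 = 0.0
        for l in range(r, 0, -1):    # grow the last segment [l, r] leftwards
            s += y[l - 1]
            s2 += y[l - 1] ** 2
            m = r - l + 1
            dev = s2 - s * s / m     # squared deviation from the segment mean
            cost = B[l - 1] + (gamma if l > 1 else 0.0) + dev
            if cost < B[r]:
                B[r], jump[r] = cost, l
    # backtrack into a piecewise-constant estimate
    x, r = [0.0] * n, n
    while r > 0:
        l = jump[r]
        mean = sum(y[l - 1:r]) / (r - l + 1)
        for i in range(l - 1, r):
            x[i] = mean
        r = l - 1
    return x
```

Small gamma recovers the jumps; large gamma collapses the signal to its global mean, which is the jump-sparsity trade-off the functional encodes.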

    Multiclass Data Segmentation using Diffuse Interface Methods on Graphs

    We present two graph-based algorithms for multiclass segmentation of high-dimensional data. The algorithms use a diffuse interface model based on the Ginzburg-Landau functional, related to total variation compressed sensing and image processing. A multiclass extension is introduced using the Gibbs simplex, with the functional's double-well potential modified to handle the multiclass case. The first algorithm minimizes the functional using a convex splitting numerical scheme. The second algorithm uses a graph adaptation of the classical numerical Merriman-Bence-Osher (MBO) scheme, which alternates between diffusion and thresholding. We demonstrate the performance of both algorithms experimentally on synthetic data, grayscale and color images, and several benchmark data sets such as MNIST, COIL and WebKB. We also make use of fast numerical solvers for finding the eigenvectors and eigenvalues of the graph Laplacian, and take advantage of the sparsity of the matrix. Experiments indicate that the results are competitive with or better than the current state-of-the-art multiclass segmentation algorithms.
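    The diffusion-thresholding alternation of the MBO scheme can be sketched in the binary graph case (the paper's multiclass version thresholds onto the Gibbs simplex instead; this dense, toy-scale version is illustrative and ignores the eigenvector-based fast solvers mentioned above).

```python
# Toy binary MBO scheme on a graph: alternate a few explicit diffusion
# steps u <- u - dt * L u (L the unnormalized graph Laplacian) with
# pointwise thresholding at 1/2. Dense adjacency, illustrative only.

def graph_laplacian(adj):
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0.0) - adj[i][j] for j in range(n)]
            for i in range(n)]

def mbo(adj, u, dt=0.1, diff_steps=5, outer_steps=10):
    L = graph_laplacian(adj)
    n = len(u)
    for _ in range(outer_steps):
        for _ in range(diff_steps):   # diffusion: explicit Euler on u' = -L u
            Lu = [sum(L[i][j] * u[j] for j in range(n)) for i in range(n)]
            u = [u[i] - dt * Lu[i] for i in range(n)]
        u = [1.0 if v >= 0.5 else 0.0 for v in u]   # thresholding
    return u
```

On a graph with two tightly connected clusters joined by a weak edge, diffusion barely leaks label mass across the weak edge, so thresholding restores a clean two-cluster labeling, which is the segmentation mechanism the scheme exploits.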