
    Faster gradient descent and the efficient recovery of images

    Much recent attention has been devoted to gradient descent algorithms where the steepest descent step size is replaced by a similar one from a previous iteration, or is updated only once every second step, thus forming a faster gradient descent method. For unconstrained convex quadratic optimization these methods can converge much faster than steepest descent. But the context of interest here is application to certain ill-posed inverse problems, where the steepest descent method is known to have a smoothing, regularizing effect, and where a strict optimization solution is not necessary. Specifically, in this paper we examine the effect of replacing steepest descent by a faster gradient descent algorithm in the practical context of image deblurring and denoising tasks. We also propose several highly efficient schemes for carrying out these tasks independently of the step size selection, as well as a scheme for the case where both blur and significant noise are present. In the above context there are situations where many steepest descent steps are required, thus building slowness into the solution procedure. Our general conclusion regarding gradient descent methods is that in such cases the faster gradient descent methods offer substantial advantages. In other situations, where no such slowness buildup arises, the steepest descent method can still be very effective.
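
    As an illustration of the idea, here is a minimal sketch of a lagged-step gradient descent on an unconstrained convex quadratic, where the exact steepest-descent step size is recomputed only every second iteration and reused in between. The function name and the simple quadratic setting are illustrative assumptions, not the authors' exact scheme.

    ```python
    import numpy as np

    def lagged_steepest_descent(A, b, x0, iters=100):
        """Minimize 0.5*x'Ax - b'x for symmetric positive definite A.
        Illustrative sketch: the steepest-descent step size is refreshed
        only every other iteration, in the spirit of the 'faster gradient
        descent' methods described above."""
        x = x0.astype(float).copy()
        alpha = 0.0
        for k in range(iters):
            g = A @ x - b                        # gradient of the quadratic
            if k % 2 == 0:                       # refresh the step only on even iterations
                alpha = (g @ g) / (g @ (A @ g))  # exact steepest-descent step size
            x = x - alpha * g                    # reuse the lagged step otherwise
        return x
    ```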

    On Convergent Finite Difference Schemes for Variational-PDE Based Image Processing

    We study an adaptive anisotropic Huber functional based image restoration scheme. By using a combination of L2-L1 regularization functions, an adaptive Huber functional based energy minimization model provides denoising with edge preservation in noisy digital images. We study a convergent finite difference scheme based on continuous piecewise linear functions and use a variable splitting scheme, namely the Split Bregman, to obtain the discrete minimizer. Experimental results are given in image denoising, and comparisons with additive operator splitting, dual fixed point, and projected gradient schemes illustrate that the best convergence rates are obtained for our algorithm. Comment: 23 pages, 12 figures, 2 tables
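
    For reference, a minimal sketch of the standard Huber penalty underlying the L2-L1 combination: quadratic near zero (L2-like smoothing) and linear in the tails (L1-like edge preservation). The spatially adaptive, anisotropic weighting of the paper is not reproduced; the threshold gamma is an assumed parameter.

    ```python
    import numpy as np

    def huber(t, gamma):
        """Smoothed absolute value: t^2/(2*gamma) for |t| <= gamma,
        |t| - gamma/2 beyond. One common form of the Huber function;
        the paper's adaptive variant varies gamma over the image."""
        a = np.abs(t)
        return np.where(a <= gamma, t**2 / (2.0 * gamma), a - gamma / 2.0)
    ```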

    Image Restoration using Total Variation with Overlapping Group Sparsity

    Image restoration is one of the most fundamental issues in imaging science. Total variation (TV) regularization is widely used in image restoration problems for its capability to preserve edges. In the literature, however, it is also well known for producing staircase-like artifacts. Usually, the high-order total variation (HTV) regularizer is a good option, except for its over-smoothing property. In this work, we study a minimization problem whose objective includes a usual $\ell_2$ data-fidelity term and an overlapping group sparsity total variation regularizer, which avoids the staircase effect while preserving edges in the restored image. We also propose a fast algorithm for solving the corresponding minimization problem and compare our method with state-of-the-art TV-based and HTV-based methods. The numerical experiments illustrate the efficiency and effectiveness of the proposed method in terms of PSNR, relative error, and computing time. Comment: 11 pages, 37 figures
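
    To make the regularizer concrete, here is a minimal sketch of an overlapping group sparsity penalty on a 1-D array: the sum of $\ell_2$ norms over all length-K sliding windows, so each entry participates in several groups. The paper applies this structure to the image gradient (an OGS-TV term) in 2-D; the window length K is an assumed parameter.

    ```python
    import numpy as np

    def ogs_penalty(v, K=3):
        """Overlapping group sparsity penalty: sum of l2 norms over all
        length-K sliding windows of v. Illustrates the regularizer's
        structure only, not the paper's fast 2-D algorithm."""
        n = len(v)
        return sum(np.linalg.norm(v[i:i + K]) for i in range(n - K + 1))
    ```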

    Multiplicative Noise Removal Using Variable Splitting and Constrained Optimization

    Multiplicative noise (also known as speckle noise) models are central to the study of coherent imaging systems, such as synthetic aperture radar and sonar, and ultrasound and laser imaging. These models introduce two additional layers of difficulties with respect to the standard Gaussian additive noise scenario: (1) the noise is multiplied by (rather than added to) the original image; (2) the noise is not Gaussian, with Rayleigh and Gamma being commonly used densities. These two features of multiplicative noise models preclude the direct application of most state-of-the-art algorithms, which are designed for solving unconstrained optimization problems where the objective has two terms: a quadratic data term (log-likelihood), reflecting the additive and Gaussian nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a total variation or wavelet-based regularizer/prior). In this paper, we address these difficulties by: (1) converting the multiplicative model into an additive one by taking logarithms, as proposed by some other authors; (2) using variable splitting to obtain an equivalent constrained problem; and (3) dealing with this optimization problem using the augmented Lagrangian framework. A set of experiments shows that the proposed method, which we name MIDAL (multiplicative image denoising by augmented Lagrangian), yields state-of-the-art results both in terms of speed and denoising performance. Comment: 11 pages, 7 figures, 2 tables. To appear in the IEEE Transactions on Image Processing
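
    Step (1) above is easy to sketch: taking logarithms converts the multiplicative model y = x * n into the additive model log y = log x + log n, after which any additive-noise denoiser can be applied in the log domain. The function below is a hedged illustration of that reduction only; additive_denoiser is a placeholder (e.g., a TV denoiser), and the paper's augmented Lagrangian solver (MIDAL) is not reproduced.

    ```python
    import numpy as np

    def despeckle_via_log(y, additive_denoiser):
        """Reduce multiplicative (speckle) noise removal to an additive
        denoising problem by working in the log domain."""
        z = np.log(np.maximum(y, 1e-12))  # guard against zeros before the log
        z_hat = additive_denoiser(z)      # denoise in the additive (log) domain
        return np.exp(z_hat)              # map the estimate back
    ```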

    Inexact Bregman iteration with an application to Poisson data reconstruction

    This work deals with the solution of image restoration problems by an iterative regularization method based on the Bregman iteration. Each iteration of this scheme requires the exact computation of the minimizer of a function. However, in some image reconstruction applications, it is either impossible or extremely expensive to obtain exact solutions of these subproblems. In this paper, we propose an inexact version of the iterative procedure, where the inexactness in the inner subproblem solution is controlled by a criterion that preserves the convergence of the Bregman iteration and its features in image restoration problems. In particular, the method allows accurate reconstructions to be obtained even when only an overestimate of the regularization parameter is known. Introducing inexactness into the iterative scheme makes it possible to address image reconstruction problems with data corrupted by Poisson noise, exploiting recent advances in specialized algorithms for the numerical minimization of the generalized Kullback–Leibler divergence combined with a regularization term. The results of several numerical experiments enable the proposed approach to be evaluated.
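
    For context, the Poisson data-fidelity term mentioned above is the generalized Kullback–Leibler divergence; a minimal sketch of that term (not of the inexact Bregman iteration itself) follows, with the usual convention 0*log 0 = 0.

    ```python
    import numpy as np

    def generalized_kl(Ax, y, eps=1e-12):
        """Generalized KL divergence sum(y*log(y/Ax) - y + Ax), the
        data-fidelity term for Poisson-corrupted data."""
        Ax = np.maximum(Ax, eps)
        t = np.where(y > 0, y * np.log(np.maximum(y, eps) / Ax) - y, 0.0)
        return np.sum(t + Ax)
    ```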

    This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms - Theory and Practice

    The observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where (a) the number of unknowns may potentially be larger than the number of observations and (b) f* admits a sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to $\ell_1$ norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods. Comment: 11 pages, 7 figures, IEEE Transactions on Image Processing (2011), in press
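
    A hedged sketch of one iteration in the spirit described above, for the $\ell_1$-penalized case: minimize the separable quadratic approximation of the negative Poisson log-likelihood via a gradient step, then soft-threshold and project onto the nonnegative orthant. The step-size rule (alpha) and the TV/multiscale penalty variants of the paper are omitted; the names and defaults here are assumptions.

    ```python
    import numpy as np

    def spiral_l1_step(f, A, y, alpha, tau):
        """One gradient + soft-threshold + nonnegativity step for the
        l1-penalized negative Poisson log-likelihood sum(Af - y*log(Af))."""
        Af = np.maximum(A @ f, 1e-12)             # keep the intensity positive
        grad = A.T @ (1.0 - y / Af)               # gradient of the Poisson term
        s = f - grad / alpha                      # separable quadratic minimizer
        s = np.sign(s) * np.maximum(np.abs(s) - tau / alpha, 0.0)  # soft threshold
        return np.maximum(s, 0.0)                 # nonnegativity constraint
    ```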

    Weighted Mean Curvature

    In image processing tasks, spatial priors are essential for robust computations, regularization, algorithmic design, and Bayesian inference. In this paper, we introduce weighted mean curvature (WMC) as a novel image prior and present an efficient computation scheme for its discretization in practical image processing applications. We first demonstrate the favorable properties of WMC, such as sampling invariance, scale invariance, and contrast invariance under a Gaussian noise model, and we show the relation of WMC to area regularization. We further propose an efficient computation scheme for discretized WMC, which is demonstrated herein to process over 33.2 giga-pixels/second on a GPU. This scheme lends itself to a convolutional neural network representation. Finally, WMC is evaluated on synthetic and real images, showing its superiority quantitatively to total variation and mean curvature. Comment: 12 pages
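
    For orientation, the classical (unweighted) mean curvature of the image surface, H = div(grad u / sqrt(1 + |grad u|^2)), can be sketched with central differences as below. This is shown only as the standard quantity that WMC reweights; the paper's efficient WMC discretization is not reproduced here.

    ```python
    import numpy as np

    def mean_curvature(u):
        """Mean curvature of the graph of a 2-D image u via np.gradient."""
        uy, ux = np.gradient(u.astype(float))    # axis 0 = rows (y), axis 1 = cols (x)
        norm = np.sqrt(1.0 + ux**2 + uy**2)
        px, py = ux / norm, uy / norm
        return np.gradient(px, axis=1) + np.gradient(py, axis=0)  # divergence
    ```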

    Compressed Sensing Parallel MRI with Adaptive Shrinkage TV Regularization

    Compressed sensing (CS) methods in magnetic resonance imaging (MRI) offer rapid acquisition and improved image quality but require iterative reconstruction schemes with regularization to enforce sparsity. Despite the difficulty of obtaining a fast numerical solution, total variation (TV) regularization is a preferred choice due to its edge-preserving and structure-recovery capabilities. While many approaches have been proposed to overcome the non-differentiability of the TV cost term, an iterative shrinkage based formulation allows recovering an image through recursive application of linear filtering and soft thresholding. However, providing an optimal setting for the regularization parameter is critical due to its direct impact on the rate of convergence as well as the steady-state error. In this paper, a regularizer that varies adaptively in the derivative space is proposed, following the generalized discrepancy principle (GDP). The implementation proceeds by adaptively reducing the discrepancy level, expressed as the absolute difference between the TV norms of the consistency error and the sparse approximation error; this criterion is also used to update the threshold. Application of the adaptive shrinkage TV regularizer to CS recovery of parallel MRI (pMRI) and temporal gradient adaptation in dynamic MRI are shown to result in improved image quality with accelerated convergence. In addition, the adaptive TV-based iterative shrinkage (ATVIS) provides a significant speed advantage over the fast iterative shrinkage-thresholding algorithm (FISTA). Comment: 27 pages, 9 figures
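
    Two building blocks of such an iterative shrinkage scheme can be sketched directly: the soft-thresholding (shrinkage) operator and an anisotropic TV norm, the quantity compared in the discrepancy criterion above. The paper's GDP-based threshold update and filtering steps are more elaborate and are not reproduced.

    ```python
    import numpy as np

    def soft_threshold(x, tau):
        """Shrinkage operator used in iterative shrinkage reconstruction."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def tv_norm(x):
        """Anisotropic TV norm of a 2-D array: sum of absolute differences."""
        return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()
    ```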

    Local Linear Convergence of the ADMM/Douglas–Rachford Algorithms without Strong Convexity and Application to Statistical Imaging

    We consider the problem of minimizing the sum of a convex function and a convex function composed with an injective linear mapping. For such problems, subject to a coercivity condition at fixed points of the corresponding Picard iteration, iterates of the alternating directions method of multipliers converge locally linearly to points from which the solution to the original problem can be computed. Our proof strategy uses duality and strong metric subregularity of the Douglas–Rachford fixed point mapping. Our analysis does not require strong convexity and yields error bounds to the set of model solutions. We show in particular that convex piecewise linear-quadratic functions naturally satisfy the requirements of the theory, guaranteeing eventual linear convergence of both the Douglas–Rachford algorithm and the alternating directions method of multipliers for this class of objectives under mild assumptions on the set of fixed points. We demonstrate this result on quantitative image deconvolution and denoising with multiresolution statistical constraints. Comment: Revised manuscript: 30 pages including 9 figures, one appendix, and 57 references. Difference from version 2: title and abstract changed, one new figure added, and a posteriori error estimates reported in the numerical experiments.
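
    The Douglas–Rachford fixed-point mapping analyzed above can be sketched generically for min f(x) + g(x), given the proximal operators of the two functions; this is a minimal illustration, not the paper's ADMM formulation or its imaging application.

    ```python
    def douglas_rachford(prox_f, prox_g, z0, gamma=1.0, iters=200):
        """Generic Douglas-Rachford iteration for min f(x) + g(x).
        prox_f(v, gamma) and prox_g(v, gamma) are assumed callables
        returning the proximal points of f and g."""
        z = z0
        for _ in range(iters):
            x = prox_g(z, gamma)          # prox step on g
            y = prox_f(2 * x - z, gamma)  # prox of f at the reflection
            z = z + y - x                 # fixed-point update
        return x
    ```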

    A multilevel based reweighting algorithm with joint regularizers for sparse recovery

    Sparsity is one of the key concepts that allows the recovery of signals that are subsampled at a rate significantly lower than required by the Nyquist-Shannon sampling theorem. Our proposed framework uses arbitrary multiscale transforms, such as those built upon wavelets or shearlets, as a sparsity-promoting prior, which allows the image to be decomposed into different scales so that image features can be optimally extracted. In order to further exploit the sparsity of the recovered signal, we combine the method of reweighted $\ell^1$ minimization, introduced by Candès et al., with iteratively updated weights accounting for the multilevel structure of the signal. This is done by directly incorporating this approach into a split Bregman based algorithmic framework. Furthermore, we add total generalized variation (TGV) as a second regularizer in the split Bregman algorithm. The resulting algorithm is then applied to a classical and widely considered task in signal and image processing: the reconstruction of images from their Fourier measurements. Our numerical experiments show highly improved performance at relatively low computational cost compared to many other well-established methods and strongly suggest that sparsity is better exploited by our method.
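
    The reweighting step itself is compact; below is a minimal sketch of the Candès et al. weight update, where small coefficients receive large weights to sharpen sparsity on the next solve. The paper's level-dependent (multiscale) weighting and the split Bregman integration are not reproduced; eps is an assumed smoothing parameter.

    ```python
    import numpy as np

    def reweight(coeffs, eps=1e-3):
        """Reweighted l1 update: w_i = 1 / (|x_i| + eps)."""
        return 1.0 / (np.abs(coeffs) + eps)
    ```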