
    Multiplicative Noise Removal Using Variable Splitting and Constrained Optimization

    Multiplicative noise (also known as speckle noise) models are central to the study of coherent imaging systems, such as synthetic aperture radar and sonar, and ultrasound and laser imaging. These models introduce two additional layers of difficulty with respect to the standard Gaussian additive noise scenario: (1) the noise is multiplied by (rather than added to) the original image; (2) the noise is not Gaussian, with Rayleigh and Gamma being commonly used densities. These two features of multiplicative noise models preclude the direct application of most state-of-the-art algorithms, which are designed for solving unconstrained optimization problems where the objective has two terms: a quadratic data term (log-likelihood), reflecting the additive and Gaussian nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a total variation or wavelet-based regularizer/prior). In this paper, we address these difficulties by: (1) converting the multiplicative model into an additive one by taking logarithms, as proposed by some other authors; (2) using variable splitting to obtain an equivalent constrained problem; and (3) dealing with this optimization problem using the augmented Lagrangian framework. A set of experiments shows that the proposed method, which we name MIDAL (multiplicative image denoising by augmented Lagrangian), yields state-of-the-art results both in terms of speed and denoising performance. (11 pages, 7 figures, 2 tables; to appear in the IEEE Transactions on Image Processing.)
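    The three-step recipe above can be sketched compactly. The following is a minimal illustration, not the authors' MIDAL implementation: it assumes anisotropic TV with periodic boundaries and, unlike MIDAL, uses a simple quadratic data term in the log domain (MIDAL keeps the exact non-Gaussian log-domain likelihood inside the augmented Lagrangian).

```python
import numpy as np

def tv_admm_denoise(y, lam=0.1, rho=1.0, n_iter=100):
    """Anisotropic-TV denoising of a 2-D image via variable splitting + ADMM:

        min_x 0.5*||x - y||^2 + lam*||Dx||_1,   with z = Dx split off.

    Periodic boundaries make the x-update a single FFT solve.
    """
    h, w = y.shape
    def dh(a): return np.roll(a, -1, axis=1) - a   # horizontal difference
    def dv(a): return np.roll(a, -1, axis=0) - a   # vertical difference
    def dht(a): return np.roll(a, 1, axis=1) - a   # adjoint of dh
    def dvt(a): return np.roll(a, 1, axis=0) - a   # adjoint of dv

    x = y.copy()
    zh, zv = np.zeros_like(y), np.zeros_like(y)    # split variables z ~ Dx
    uh, uv = np.zeros_like(y), np.zeros_like(y)    # scaled dual variables

    # Eigenvalues of I + rho*D^T D under the 2-D DFT (periodic differences).
    ey = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(h) / h)
    ex = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(w) / w)
    denom = 1.0 + rho * (ey[:, None] + ex[None, :])

    for _ in range(n_iter):
        # x-update: (I + rho*D^T D) x = y + rho*D^T(z - u), solved by FFT.
        rhs = y + rho * (dht(zh - uh) + dvt(zv - uv))
        x = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # z-update: soft-thresholding, the prox of (lam/rho)*||.||_1.
        gh, gv = dh(x) + uh, dv(x) + uv
        zh = np.sign(gh) * np.maximum(np.abs(gh) - lam / rho, 0.0)
        zv = np.sign(gv) * np.maximum(np.abs(gv) - lam / rho, 0.0)
        # Dual ascent on the splitting constraints.
        uh += dh(x) - zh
        uv += dv(x) - zv
    return x

def log_domain_denoise(g, lam=0.3, eps=1e-6):
    """Step (1): take logs so speckle becomes additive noise, denoise, and
    exponentiate back.  The quadratic log-domain data term used here is a
    simplifying assumption, not the exact likelihood MIDAL employs."""
    z = np.log(np.maximum(g, eps))
    return np.exp(tv_admm_denoise(z, lam=lam))
```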

    Generating structured non-smooth priors and associated primal-dual methods

    The purpose of the present chapter is to bind together and extend some recent developments regarding data-driven non-smooth regularization techniques in image processing by means of a bilevel minimization scheme. The scheme, considered in function space, takes advantage of a dualization framework and is designed to produce spatially varying regularization parameters adapted to the data for well-known regularizers, e.g. Total Variation and Total Generalized Variation, leading to automated (monolithic) image reconstruction workflows. The inclusion of the theory of bilevel optimization and the theoretical background of the dualization framework, together with a brief review of the aforementioned regularizers and their parameterization, makes this chapter self-contained. Aspects of the numerical implementation of the scheme are discussed and numerical examples are provided.
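    The lower-level problem that such a bilevel scheme repeatedly solves is weighted-TV denoising with a spatially varying parameter map. Below is a minimal primal-dual (Chambolle-Pock) sketch of that subproblem only, assuming isotropic TV, periodic boundaries, and an already given weight map alpha; how to learn alpha from the data is precisely the chapter's subject and is not reproduced here.

```python
import numpy as np

def weighted_tv_chambolle_pock(y, alpha, n_iter=200):
    """Primal-dual solver for weighted-TV denoising:

        min_x 0.5*||x - y||^2 + sum_ij alpha_ij * |(Dx)_ij|,

    where alpha is a per-pixel regularization map (here assumed given).
    """
    tau = sigma = 1.0 / np.sqrt(8.0)          # tau*sigma*||D||^2 <= 1
    x, x_bar = y.copy(), y.copy()
    px, py = np.zeros_like(y), np.zeros_like(y)
    for _ in range(n_iter):
        # Dual ascent, then projection of p onto the pointwise ball |p| <= alpha.
        gx = np.roll(x_bar, -1, axis=1) - x_bar
        gy = np.roll(x_bar, -1, axis=0) - x_bar
        px, py = px + sigma * gx, py + sigma * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / np.maximum(alpha, 1e-12))
        px, py = px / scale, py / scale
        # Primal step: prox of 0.5*||. - y||^2 applied after adding div p.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x_old = x
        x = (x + tau * (div + y)) / (1.0 + tau)
        x_bar = 2.0 * x - x_old                # over-relaxation (theta = 1)
    return x

# Toy usage: lighter smoothing in a marked region of interest.
# y = noisy image; alpha = 0.1 * np.ones_like(y); alpha[20:40, 20:40] = 0.02
```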

    Jump-sparse and sparse recovery using Potts functionals

    We recover jump-sparse and sparse signals from blurred incomplete data corrupted by (possibly non-Gaussian) noise using inverse Potts energy functionals. We obtain analytical results (existence of minimizers, complexity) on inverse Potts functionals and provide relations to sparsity problems. We then propose a new optimization method for these functionals, based on dynamic programming and the alternating direction method of multipliers (ADMM). A series of experiments shows that the proposed method yields very satisfactory jump-sparse and sparse reconstructions. We highlight the capability of the method by comparing it with classical and recent approaches such as TV minimization (jump-sparse signals), orthogonal matching pursuit, iterative hard thresholding, and iteratively reweighted ℓ1 minimization (sparse signals).
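    For intuition, the pure denoising special case of the Potts functional (no blur, complete data) already admits a classical exact dynamic program over the position of the last jump; a sketch is given below. The paper's inverse Potts setting with blur, missing data, and non-Gaussian noise is handled by the ADMM splitting and is not attempted here.

```python
import numpy as np

def potts_denoise_1d(y, gamma):
    """Exact minimizer of the 1-D Potts (jump-sparsity) functional

        gamma * (#jumps of x) + sum_i (x_i - y_i)^2

    via O(N^2) dynamic programming over the last jump position."""
    n = len(y)
    # Cumulative sums give the squared deviation of any segment from its
    # mean in O(1):  cost(l, r) = sum(y^2) - (sum(y))^2 / length.
    c1 = np.concatenate(([0.0], np.cumsum(y)))
    c2 = np.concatenate(([0.0], np.cumsum(y * y)))

    def seg_cost(l, r):                        # inclusive 0-based indices
        s1, m = c1[r + 1] - c1[l], r - l + 1
        return (c2[r + 1] - c2[l]) - s1 * s1 / m

    best = np.empty(n + 1)
    best[0] = -gamma                           # so the first segment is free
    last = np.zeros(n, dtype=int)
    for r in range(n):
        cands = [best[l] + gamma + seg_cost(l, r) for l in range(r + 1)]
        last[r] = int(np.argmin(cands))
        best[r + 1] = cands[last[r]]

    # Backtrack the jump positions; fill each segment with its mean.
    x, r = np.empty(n), n - 1
    while r >= 0:
        l = last[r]
        x[l:r + 1] = (c1[r + 1] - c1[l]) / (r - l + 1)
        r = l - 1
    return x
```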

    Deconvolution under Poisson noise using exact data fidelity and synthesis or analysis sparsity priors

    In this paper, we propose a Bayesian MAP estimator for solving deconvolution problems when the observations are corrupted by Poisson noise. Towards this goal, a proper data fidelity term (log-likelihood) is introduced to reflect the Poisson statistics of the noise. As a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms such as wavelets or curvelets. Both analysis- and synthesis-type sparsity priors are considered. Piecing together the data fidelity and prior terms, the deconvolution problem boils down to the minimization of a non-smooth convex functional (one for each prior). We establish the well-posedness of each optimization problem, characterize the corresponding minimizers, and solve them by means of proximal splitting algorithms originating from the realm of non-smooth convex optimization theory. Experiments on astronomical imaging datasets demonstrate the potential applicability of the proposed algorithms.
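    The building blocks such proximal splitting algorithms compose are the proximity operators of the exact Poisson data fidelity and of the sparsity prior; the former has a convenient closed form. A sketch of both operators follows (the full deconvolution solver would additionally split across the blur operator and the dictionary, which is omitted here).

```python
import numpy as np

def prox_poisson_nll(v, y, t):
    """Proximity operator of t*f with f(u) = sum(u - y*log(u)), the Poisson
    negative log-likelihood (up to constants) of counts y given intensity u.

    Setting the gradient of 0.5*(u - v)^2 + t*(u - y*log(u)) to zero gives
    u^2 + (t - v)*u - t*y = 0, whose positive root is the prox."""
    d = v - t
    return 0.5 * (d + np.sqrt(d * d + 4.0 * t * y))

def soft_threshold(a, tau):
    """Prox of tau*||.||_1 -- the synthesis-sparsity step on the
    wavelet/curvelet coefficients."""
    return np.sign(a) * np.maximum(np.abs(a) - tau, 0.0)
```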

    Controlled wavelet domain sparsity for x-ray tomography

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The minimizer of the variational regularization functional can then be computed iteratively by a primal-dual fixed point algorithm involving a soft-thresholding operation. Choosing the soft-thresholding parameter mu > 0 is analogous to the notoriously difficult problem of picking the optimal regularization parameter in Tikhonov regularization. Here, a novel automatic method is introduced for choosing mu, based on a control algorithm driving the sparsity of the reconstruction to an a priori known ratio of nonzero versus zero wavelet coefficients in the unknown.
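    The abstract does not spell out the control law, so the sketch below uses a simple proportional controller on log mu as a plausible stand-in: if the thresholded coefficients come out too dense, mu is raised; if too sparse, it is lowered. In a full reconstruction, this update would sit inside the primal-dual iteration rather than act on a fixed coefficient array.

```python
import numpy as np

def sparsity_controlled_threshold(coeffs, target_ratio, mu=1.0,
                                  gain=0.5, n_iter=50, tol=1e-3):
    """Drive the soft-thresholding level mu so that the fraction of nonzero
    wavelet coefficients matches an a priori target ratio."""
    for _ in range(n_iter):
        w = np.sign(coeffs) * np.maximum(np.abs(coeffs) - mu, 0.0)
        ratio = np.count_nonzero(w) / w.size
        if abs(ratio - target_ratio) < tol:
            break
        # Too dense -> threshold too low -> raise mu (and vice versa).
        mu *= np.exp(gain * (ratio - target_ratio))
    return w, mu
```

    For a single fixed coefficient array, mu could of course be read off directly as a quantile of the coefficient magnitudes; the point of a feedback controller is to keep the sparsity ratio on target while the reconstruction itself keeps changing across iterations.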

    Performance Comparison of Total Variation based Image Regularization Algorithms

    The calculus of variations is commonly used to find an unknown function that minimizes or maximizes a functional. Problems of retrieving the original image from a degraded observation are called inverse problems, and the most basic example is image denoising. Variational methods are formulated as optimization problems and provide good solutions to image denoising. Three such variational methods, the Tikhonov model, the ROF model, and the Total Variation-L1 model, are studied and implemented. The performance of these variational algorithms is analyzed for different values of the regularization parameter. It is found that a small value of the regularization parameter yields stronger noise removal, whereas a large value better preserves sharp edges. The Euler-Lagrange equation corresponding to the energy functional used in the variational methods is solved using the gradient descent method, and the resulting partial differential equation is solved using the forward Euler finite difference method. The quality metrics are computed and the results are compared in this paper.
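    To make the pipeline concrete, here is explicit gradient descent on the Euler-Lagrange equation of a smoothed ROF energy, discretized with forward differences; the smoothing parameter eps is a standard device to avoid division by a vanishing gradient, and the exact discretization choices in the paper may differ.

```python
import numpy as np

def rof_gradient_descent(y, lam=1.0, tau=0.01, eps=0.1, n_iter=1000):
    """Explicit time-stepping of the (smoothed) ROF Euler-Lagrange equation

        u_t = div( grad(u) / sqrt(|grad(u)|^2 + eps^2) ) - lam * (u - y).

    Stability of the explicit scheme needs roughly tau <= eps / 8."""
    u = y.astype(float).copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u            # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux * ux + uy * uy + eps * eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += tau * (div - lam * (u - y))
    return u
```

    Note how lam realizes the trade-off the paper analyzes: here lam weights the data fidelity, so a small value smooths more aggressively (stronger noise removal) while a large value keeps u close to y (better edge preservation).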

    Exact algorithms for L1-TV regularization of real-valued or circle-valued signals

    We consider L1-TV regularization of univariate signals with values on the real line or on the unit circle. While the real data space leads to a convex optimization problem, the problem is non-convex for circle-valued data. In this paper, we derive exact algorithms for both data spaces. A key ingredient is the reduction of the infinite search spaces to a finite set of configurations, which can be scanned by the Viterbi algorithm. To reduce the computational complexity of the involved tabulations, we extend the technique of distance transforms to non-uniform grids and to the circular data space. In total, the proposed algorithms have complexity O(KN), where N is the length of the signal and K is the number of different values in the data set. In particular, the complexity is O(N) for quantized data. Ours is the first exact algorithm for TV regularization with circle-valued data, and it is competitive with state-of-the-art methods for scalar data, assuming that the latter are quantized.
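    The reduction to a finite label set makes the Viterbi recursion easy to state. The sketch below is the plain O(N K^2) version for real-valued data; the paper's distance-transform technique for the |.|-shaped transition costs (and its extension to circle-valued data) is what yields the stated O(KN) and is not reproduced here.

```python
import numpy as np

def l1_tv_viterbi(y, lam):
    """Exact solver for  min_x sum_i |x_i - y_i| + lam * sum_i |x_{i+1} - x_i|
    by dynamic programming over the labels given by the distinct data values
    (for L1-TV a minimizer taking only data values exists)."""
    levels = np.unique(y)                               # K candidate values
    K, N = len(levels), len(y)
    data = np.abs(levels[:, None] - y[None, :])         # K x N unary costs
    trans = lam * np.abs(levels[:, None] - levels[None, :])  # K x K jump costs
    cost = data[:, 0].copy()
    back = np.zeros((N, K), dtype=int)
    for i in range(1, N):
        total = cost[None, :] + trans                   # total[new, old]
        back[i] = np.argmin(total, axis=1)
        cost = total[np.arange(K), back[i]] + data[:, i]
    # Backtrack the optimal label sequence.
    x, k = np.empty(N), int(np.argmin(cost))
    for i in range(N - 1, -1, -1):
        x[i] = levels[k]
        k = int(back[i, k])
    return x
```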

    Implicit Fixed-point Proximity Framework for Optimization Problems and Its Applications

    A variety of optimization problems, especially in the field of image processing, are not differentiable in nature. The non-differentiability of the objective functions, together with the large dimension of the underlying images, makes minimizing the objective function theoretically challenging and numerically difficult. The fixed-point proximity framework systematically studied in this dissertation provides a direct and unified methodology for finding solutions to those optimization problems. The framework approaches the models arising from applications directly, using various fixed point techniques as well as convex analysis tools such as the subdifferential and the proximity operator. With the notion of the proximity operator, these optimization problems can be converted into finding fixed points of nonlinear operators. Under the fixed-point proximity framework, such fixed point problems are often solved through iterative schemes in which each iteration can be computed in explicit form. We further explore this fixed point formulation and develop implicit iterative schemes for finding fixed points of the nonlinear operators associated with the underlying problems, with the goal of relaxing the restrictions that arise in deriving solvable fixed point equations. Theoretical convergence analysis is provided for the proposed implicit algorithms. Numerical experiments on image reconstruction models demonstrate that the proposed implicit fixed-point proximity algorithms compare well with existing explicit fixed-point proximity algorithms in terms of computational time and solution accuracy.
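    The explicit end of the framework is easy to illustrate: the proximal gradient method is the Picard iteration of a nonlinear operator built from a proximity operator, and the minimizers are exactly its fixed points. Below is a minimal sketch for the l1-regularized least squares model; the dissertation's implicit schemes, which involve the new iterate on both sides of the fixed point equation, are beyond this sketch.

```python
import numpy as np

def prox_l1(v, tau):
    """Proximity operator of tau*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fixed_point_proximal_gradient(A, b, lam, step, n_iter=500):
    """Iterate T(x) = prox_{step*lam*||.||_1}(x - step*A^T(Ax - b)).

    x* minimizes 0.5*||Ax - b||^2 + lam*||x||_1  iff  x* = T(x*), so the
    explicit scheme is plain Picard iteration of T."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox_l1(x - step * (A.T @ (A @ x - b)), step * lam)
    return x

# Toy usage; step < 2/||A||^2 guarantees convergence of the iteration.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = rng.standard_normal(100) * (rng.random(100) < 0.1)  # sparse signal
b = A @ x_true
x_hat = fixed_point_proximal_gradient(A, b, lam=0.1,
                                      step=1.0 / np.linalg.norm(A, 2) ** 2)
```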