
    Multiplicative Noise Removal Using L1 Fidelity on Frame Coefficients

    We address the denoising of images contaminated with multiplicative noise, e.g. speckle noise. Classical approaches to such problems include filtering, statistical (Bayesian) methods, variational methods, and methods that convert the multiplicative noise into additive noise (using a logarithmic function), shrink the coefficients of the log-image data in a wavelet basis or in a frame, and transform the result back using an exponential function. We propose a method composed of several stages: we take the log-image data and apply a reasonable, under-optimal hard-thresholding to its curvelet transform; we then apply a variational method in which we minimize a specialized criterion composed of an $\ell^1$ data-fitting term to the thresholded coefficients and a Total Variation (TV) regularization term in the image domain; the restored image is an exponential of the obtained minimizer, weighted so that the mean of the original image is preserved. Our restored images combine the advantages of shrinkage and variational methods and avoid their main drawbacks. For the minimization stage, we propose a suitably adapted fast minimization scheme based on Douglas-Rachford splitting. Having proven the existence of a minimizer of our specialized criterion, we demonstrate the convergence of the minimization scheme. The obtained numerical results outperform the main alternative methods.
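
    The criterion above is minimized with a Douglas-Rachford splitting scheme. The following is a minimal sketch of that splitting, not the paper's adapted scheme: the $\ell^1$ fidelity to thresholded curvelet coefficients and the TV term are replaced by a toy $\ell^1$ data-fitting term and a simple quadratic term with closed-form proximity operators, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def prox_l1_to(b, gamma):
    """Prox of gamma * ||x - b||_1: soft-thresholding centred at b."""
    return lambda x: b + np.sign(x - b) * np.maximum(np.abs(x - b) - gamma, 0.0)

def prox_quadratic(c, mu, gamma):
    """Prox of gamma * (mu/2) * ||x - c||_2^2 (a stand-in for the TV term)."""
    return lambda x: (x + gamma * mu * c) / (1.0 + gamma * mu)

def douglas_rachford(prox_f, prox_g, x0, n_iter=200, lam=1.0):
    """Generic Douglas-Rachford iteration for minimizing f + g."""
    x = x0.copy()
    for _ in range(n_iter):
        y = prox_g(x)              # proximal step on g
        z = prox_f(2.0 * y - x)    # reflected proximal step on f
        x = x + lam * (z - y)      # relaxed update of the auxiliary variable
    return prox_g(x)

# Toy usage: fit a signal to "thresholded coefficients" b while pulling it toward c.
b = np.array([0.0, 5.0, 1.0, -2.0])
c = np.array([0.5, 4.0, 0.5, -1.0])
x_hat = douglas_rachford(prox_l1_to(b, 0.5), prox_quadratic(c, 2.0, 0.5), x0=np.zeros(4))
print(x_hat)
```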

    Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed l1/l2 Regularization

    The l1/l2 ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works, in the context of blind deconvolution. Indeed, it benefits from a scale invariance property much desirable in the blind context. However, the l1/l2 function raises some difficulties when solving the nonconvex and nonsmooth minimization problems resulting from the use of such a penalty term in current restoration methods. In this paper, we propose a new penalty based on a smooth approximation to the l1/l2 function. In addition, we develop a proximal-based algorithm to solve variational problems involving this function, and we derive theoretical convergence results. We demonstrate the effectiveness of our method through a comparison with a recent alternating optimization strategy dealing with the exact l1/l2 term, on an application to seismic data blind deconvolution. (Comment: 5 pages)
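
    As a rough illustration of the penalty discussed above, the sketch below compares the exact l1/l2 ratio with one possible smooth surrogate. The particular smoothing used here (a hyperbolic smoothing of the numerator over a smoothed l2 norm in the denominator) is an assumption for illustration only and is not claimed to be the exact function proposed in the paper.

```python
import numpy as np

def l1_over_l2(x, eps=1e-12):
    """Exact l1/l2 ratio: nonsmooth and nonconvex, but scale invariant."""
    return np.sum(np.abs(x)) / (np.linalg.norm(x) + eps)

def smoothed_l1_over_l2(x, alpha=1e-2, eta=1e-2):
    """A smooth surrogate: smoothed absolute values over a smoothed l2 norm."""
    num = np.sum(np.sqrt(x ** 2 + alpha ** 2) - alpha)
    den = np.sqrt(np.sum(x ** 2) + eta ** 2)
    return num / den

x = np.array([1.0, 0.0, 0.0, -3.0])
print(l1_over_l2(x), l1_over_l2(10.0 * x))                    # identical: scale invariance
print(smoothed_l1_over_l2(x), smoothed_l1_over_l2(10.0 * x))  # approximately so
```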

    Recent Progress in Image Deblurring

    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must also estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness, a crucial issue in deblurring tasks, is handled, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, the success of image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel hard to estimate and often spatially variant. We provide a holistic understanding and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented. (Comment: 53 pages, 17 figures)
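
    To make the non-blind, spatially invariant setting concrete, here is a minimal sketch of Wiener deconvolution in the frequency domain, one classical way of regularizing the ill-posed inversion of a known blur kernel. The image, kernel, and regularization constant are illustrative assumptions; this is not any specific method from the review.

```python
import numpy as np

def wiener_deblur(blurry, kernel, k=1e-2):
    """Frequency-domain Wiener deconvolution with a constant noise-to-signal ratio k."""
    H = np.fft.fft2(kernel, s=blurry.shape)         # transfer function of the blur
    G = np.fft.fft2(blurry)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)   # regularized inverse filter
    return np.real(np.fft.ifft2(F_hat))

# Toy usage: blur a random "image" with a 5x5 box kernel (circular model), then restore it.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurry = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = wiener_deblur(blurry, kernel, k=1e-3)
print(np.mean((restored - img) ** 2))               # small residual error on the toy example
```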

    Evaluating Wetland Expansion In A Tallgrass Prairie-Wetland Restoration

    Remote sensing is an effective tool to inventory and monitor wetlands at large spatial scales. This study examined the effect of wetland restoration practices at Glacial Ridge National Wildlife Refuge (GRNWR) in northwest Minnesota on the distribution, location, size, and temporal changes of wetlands. A Geographic Object-Based Image Analysis (GEOBIA) land cover classification method was applied that integrated spectral data, LiDAR elevation, and LiDAR-derived ancillary data of slope, aspect, and topographic wetness index (TWI). The accuracy of remote wetland mapping was compared with onsite wetland delineation. The GEOBIA method produced land cover classifications with high overall accuracy (88–91 percent). Wetland area from a June 12, 2007 classified image was 20.09 km² out of a total area of 147.3 km². Classification of a July 22, 2014 image showed wetlands covering an area of 37.96 km². The results illustrate how wetland areas have changed spatially and temporally within the study landscape. These changes in hydrologic conditions encourage additional wetland development and expansion as plant communities colonize rewetted areas and soil conditions develop characteristics typical of hydric soils.
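
    As a small illustration of the LiDAR-derived ancillary layers mentioned above, the sketch below computes slope and a simplified aspect from a toy DEM grid with NumPy. The DEM values and cell size are illustrative; the aspect convention is simplified relative to standard GIS outputs, and the TWI layer would additionally require a flow-accumulation step that is omitted here.

```python
import numpy as np

def slope_aspect(dem, cell_size=1.0):
    """Slope (degrees) and a simplified aspect (gradient orientation, degrees) from a DEM."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)              # elevation change per map unit
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))   # steepest-slope angle
    aspect = np.degrees(np.arctan2(dz_dy, dz_dx)) % 360.0   # simplified orientation convention
    return slope, aspect

dem = np.array([[10.0, 10.5, 11.0],
                [10.0, 10.6, 11.2],
                [10.1, 10.7, 11.3]])
slope, aspect = slope_aspect(dem, cell_size=1.0)
print(slope.round(1))
print(aspect.round(1))
```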

    Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

    Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. For a long time, it has been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies that jointly bring into play the primal and the dual problems is, however, a more recent idea which has generated many important new contributions in recent years. These novel developments are grounded in recent advances in convex analysis, discrete optimization, parallel processing, and nonsmooth optimization with an emphasis on sparsity issues. In this paper, we aim at presenting the principles of primal-dual approaches while giving an overview of numerical methods which have been proposed in different contexts. We show the benefits that can be drawn from primal-dual algorithms both for solving large-scale convex optimization problems and for discrete ones, and we provide various application examples to illustrate their usefulness.
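
    As one concrete example of the kind of algorithm surveyed, the sketch below runs a Chambolle-Pock-style primal-dual iteration on a toy 1-D total-variation denoising problem, min_x 0.5*||x - b||^2 + lam*||Dx||_1. The problem and the step-size choices are illustrative assumptions, not an algorithm taken verbatim from the paper.

```python
import numpy as np

def tv_denoise_primal_dual(b, lam=0.4, n_iter=300):
    """Chambolle-Pock iteration for 1-D TV denoising: min_x 0.5||x - b||^2 + lam||Dx||_1."""
    n = b.size
    D = np.diff(np.eye(n), axis=0)             # forward-difference operator, shape (n-1, n)
    L = 2.0                                     # ||D|| <= 2, so tau * sigma * ||D||^2 < 1
    tau = sigma = 0.45 / L
    x = b.copy(); x_bar = b.copy(); y = np.zeros(n - 1)
    for _ in range(n_iter):
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)        # prox of the dual of lam||.||_1
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)  # prox of 0.5||. - b||^2
        x_bar = 2.0 * x_new - x                                # over-relaxation of the primal
        x = x_new
    return x

b = np.array([0.0, 0.1, 0.0, 1.1, 0.9, 1.0, 0.0, -0.1])       # noisy piecewise-constant signal
print(tv_denoise_primal_dual(b, lam=0.4).round(2))
```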

    A Total Fractional-Order Variation Model for Image Restoration with Non-homogeneous Boundary Conditions and its Numerical Solution

    To overcome the weakness of total variation based models for image restoration, various high order (typically second order) regularization models have been proposed and studied recently. In this paper we analyze and test a fractional-order derivative based total $\alpha$-order variation model, which can outperform the currently popular high order regularization models. Several previous works have used total $\alpha$-order variations for image restoration; however, no analysis has yet been carried out, and all tested formulations, which differ from one another, use zero Dirichlet boundary conditions that are not realistic (while non-zero boundary conditions violate the definitions of fractional-order derivatives). This paper first reviews some results on fractional-order derivatives and then rigorously analyzes the theoretical properties of the proposed total $\alpha$-order variational model. It then develops four algorithms for solving the variational problem, one based on the variational Split-Bregman idea and three based on direct solution of the discretized optimization problem. Numerical experiments show that, in terms of restoration quality and solution efficiency, the proposed model can produce highly competitive results, for smooth images, compared to two established high order models: the mean curvature and the total generalized variation models. (Comment: 26 pages)
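
    For readers unfamiliar with fractional-order derivatives, the sketch below computes the Grunwald-Letnikov weights commonly used to discretize an $\alpha$-order derivative of a 1-D signal. The signal, step size, and boundary handling are illustrative assumptions and need not match the discretization used in the paper.

```python
import numpy as np

def gl_coefficients(alpha, n):
    """First n Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), computed recursively."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def fractional_difference(f, alpha, h=1.0):
    """Approximate the alpha-order derivative of a 1-D signal by a GL finite difference."""
    n = f.size
    w = gl_coefficients(alpha, n)
    out = np.zeros(n)
    for i in range(n):
        out[i] = np.dot(w[: i + 1], f[i::-1]) / h ** alpha   # sum_k w_k * f[i - k]
    return out

x = np.linspace(0.0, 1.0, 11)
print(fractional_difference(x ** 2, alpha=1.5, h=0.1).round(3))
```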