13,058 research outputs found

    Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity

    A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models, estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework with structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that are equal to, often significantly better than, or worse by only a very small margin than the best published ones, at a lower computational cost. (Comment: 30 pages)
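The piecewise linear estimator described above can be sketched in a few lines: each Gaussian component of the mixture yields a linear (Wiener-type) estimate, and the component with the highest evidence for the observation is selected. This is a minimal numpy illustration under simplifying assumptions (known degradation operator `A`, known noise variance, no EM updates), not the paper's MAP-EM implementation; all names are illustrative.

```python
import numpy as np

def map_estimate(y, A, sigma2, mus, Sigmas):
    """Piecewise linear estimation: one Wiener filter per Gaussian component,
    then pick the component maximizing the (log) evidence p(y | k)."""
    best_ll, best_x = -np.inf, None
    for mu, Sig in zip(mus, Sigmas):
        # Covariance of the observation y = A x + n under component k.
        C = A @ Sig @ A.T + sigma2 * np.eye(len(y))
        Cinv = np.linalg.inv(C)
        r = y - A @ mu
        # Linear MMSE (Wiener) estimate of x given component k.
        x_k = mu + Sig @ A.T @ Cinv @ r
        # Log-evidence up to a constant; used to select the best component.
        ll = -0.5 * (r @ Cinv @ r + np.linalg.slogdet(C)[1])
        if ll > best_ll:
            best_ll, best_x = ll, x_k
    return best_x
```

Because a different linear filter is applied depending on which component wins, the overall estimator is piecewise linear in the observation.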

    Image Restoration using Total Variation Regularized Deep Image Prior

    In the past decade, sparsity-driven regularization has led to significant improvements in image reconstruction. Traditional regularizers, such as total variation (TV), rely on analytical models of sparsity. However, the field is increasingly moving towards trainable models inspired by deep learning. Deep image prior (DIP) is a recent regularization framework that uses a convolutional neural network (CNN) architecture without data-driven training. This paper extends the DIP framework by combining it with traditional TV regularization. We show that the inclusion of TV leads to considerable performance gains when tested on several traditional restoration tasks such as image denoising and deblurring.
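The combined objective behind this kind of approach is a data-fidelity term plus a TV penalty on the network output. A minimal numpy sketch of the anisotropic TV term and the combined loss follows; it is an illustration of the general TV+fidelity objective, not the paper's exact formulation, and the weight `lam` is an illustrative hyperparameter.

```python
import numpy as np

def tv_aniso(img):
    """Anisotropic total variation: sum of absolute horizontal
    and vertical finite differences of a 2-D image."""
    dh = np.abs(np.diff(img, axis=1)).sum()
    dv = np.abs(np.diff(img, axis=0)).sum()
    return dh + dv

def dip_tv_loss(output, noisy, lam=0.01):
    # Data fidelity to the degraded observation plus a TV penalty
    # on the (CNN-generated) restored image.
    fidelity = np.sum((output - noisy) ** 2)
    return fidelity + lam * tv_aniso(output)
```

In a DIP setting, `output` would be the CNN's image for a fixed random input, and this loss would be minimized over the network weights by backpropagation.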

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. (Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision)
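Sparse coding as described here, representing a signal as a linear combination of a few dictionary atoms, is classically solved via the l1-regularized (lasso) formulation. A minimal numpy sketch using ISTA (iterative shrinkage-thresholding), one standard solver for this problem, is shown below; it is a generic illustration, not tied to any specific method in the monograph.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm (elementwise shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.1, n_iter=500):
    """Sparse coding: minimize 0.5*||y - D a||^2 + lam*||a||_1 over codes a,
    for a fixed dictionary D, via iterative shrinkage-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)           # gradient of the quadratic data term
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

Dictionary *learning* then alternates such a sparse-coding step with an update of `D` itself, so the dictionary adapts to the data.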

    Advanced Denoising for X-ray Ptychography

    The success of ptychographic imaging experiments strongly depends on achieving high signal-to-noise ratio. This is particularly important in nanoscale imaging experiments when diffraction signals are very weak and the experiments are accompanied by significant parasitic scattering (background), outliers, or correlated noise sources. It is also critical when rare events such as cosmic rays, or bad frames caused by electronic glitches or shutter timing malfunction, take place. In this paper, we propose a novel iterative algorithm with rigorous analysis that exploits the direct forward model for parasitic noise and sample smoothness to achieve a thorough characterization and removal of structured and random noise. We present a formal description of the proposed algorithm and prove its convergence under mild conditions. Numerical experiments from simulations and real data (both soft and hard X-ray beamlines) demonstrate that the proposed algorithm produces better results when compared to state-of-the-art methods. (Comment: 24 pages, 9 figures)
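The core idea of separating a smooth parasitic background from the measured signal can be illustrated with a toy alternating scheme: estimate a smooth background, subtract it, and enforce non-negativity of the remaining signal (diffraction intensities cannot be negative). This is a simplified 1-D numpy sketch of background/signal separation under a crude moving-average smoothness prior, not the paper's algorithm; the window size `k` is an illustrative parameter.

```python
import numpy as np

def smooth(x, k=5):
    # Moving-average low-pass filter: a crude smoothness prior.
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def remove_background(y, n_iter=20, k=5):
    """Alternately estimate a smooth additive background b and a
    non-negative signal s such that y ~ s + b."""
    b = smooth(y, k)
    for _ in range(n_iter):
        s = np.maximum(y - b, 0.0)      # signal: non-negative residual
        b = smooth(y - s, k)            # background: smooth part of remainder
    return s, b
```

Sharp features (spikes) end up in `s` while the slowly varying component converges into `b`; the actual method additionally handles outlier frames and correlated noise via the forward model.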