
    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel image deblurring design in the form of one-shot convolution filtering that can be directly convolved with naturally blurred images for restoration. Optical blurring is a common drawback in imaging applications that suffer from optical imperfections. Although numerous deconvolution methods blindly estimate blurring in either inclusive or exclusive forms, they are practically challenging due to high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition and before images are stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid time delays, mitigate computational expense, and increase perceived image quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for two models, Gaussian and Laplacian, that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods. Comment: 15 pages, for publication in IEEE Transactions on Image Processing
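The core idea, approximating the inverse of a Gaussian PSF by a short FIR filter built from even-derivative stencils, can be sketched in one dimension. This is only a hedged illustration of the principle, not the paper's actual kernel-synthesis procedure: the truncation order, stencils, and names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def deblur_kernel(s2):
    """One-shot FIR deconvolution kernel for a Gaussian PSF of variance s2.

    Uses the Taylor expansion 1/G(w) = exp(s2*w^2/2)
    ~ 1 + (s2/2)*w^2 + (s2^2/8)*w^4, with the second-difference
    stencil d2 (symbol ~ -w^2) and d4 = d2 * d2 (symbol ~ w^4).
    """
    delta = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
    d2 = np.array([0.0, 1.0, -2.0, 1.0, 0.0])
    d4 = np.array([1.0, -4.0, 6.0, -4.0, 1.0])
    return delta - (s2 / 2.0) * d2 + (s2 ** 2 / 8.0) * d4

# Demo: blur a step signal, then restore it with a single convolution.
x = np.zeros(64)
x[20:44] = 1.0
y = gaussian_filter1d(x, sigma=0.8)                 # simulated optical blur
xhat = np.convolve(y, deblur_kernel(0.8 ** 2), mode="same")
```

Because the restoration is a single 5-tap convolution, it runs at the "on-the-fly" speed the abstract targets; the price is that the Taylor truncation only partially boosts the PSF fall-off.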

    Modular Regularization Algorithms

    Lagrangian methods for the regularization of discrete ill-posed problems

    In many science and engineering applications, the discretization of linear ill-posed problems gives rise to large ill-conditioned linear systems whose right-hand side is degraded by noise. The solution of such linear systems requires solving a minimization problem with one quadratic constraint depending on an estimate of the variance of the noise. This strategy is known as regularization. In this work, we propose to use Lagrangian methods for the solution of the noise-constrained regularization problem. Moreover, we introduce a new method based on Lagrangian methods and the discrepancy principle. We present numerical results on numerous test problems, including image restoration and medical image denoising. Our results indicate that the proposed strategies are effective and efficient in computing good regularized solutions of ill-conditioned linear systems, as well as the corresponding regularization parameters. The proposed methods are therefore a promising approach for dealing with ill-posed problems.
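The discrepancy-principle ingredient can be illustrated on plain Tikhonov regularization: choose the regularization parameter so that the residual norm matches the noise level. A minimal sketch, not the Lagrangian algorithms proposed in the work; the geometric bisection and all names are illustrative.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov solution: argmin ||A x - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def discrepancy_lambda(A, b, delta, lo=1e-12, hi=1e12, iters=200):
    """Bisect (on a log grid) for lam so that ||A x_lam - b|| ~ delta.

    The residual norm is monotonically increasing in lam, so the
    discrepancy-principle value is bracketed by bisection.
    """
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if np.linalg.norm(A @ tikhonov(A, b, mid) - b) < delta:
            lo = mid          # residual too small: increase lam
        else:
            hi = mid          # residual too large: decrease lam
    return np.sqrt(lo * hi)
```

On an ill-conditioned system with noisy right-hand side, the parameter found this way yields a far smaller reconstruction error than a naive direct solve.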

    Risk estimators for choosing regularization parameters in ill-posed problems - Properties and limitations

    This paper discusses the properties of certain risk estimators that recently regained popularity for choosing regularization parameters in ill-posed problems, in particular for sparsity regularization. They apply Stein’s unbiased risk estimator (SURE) to estimate the risk in either the space of the unknown variables or in the data space. We will call the latter PSURE in order to distinguish the two different risk functions. It seems intuitive that SURE is more appropriate for ill-posed problems, since the properties in the data space do not tell much about the quality of the reconstruction. We provide theoretical studies of both approaches for linear Tikhonov regularization in a finite-dimensional setting and estimate the quality of the risk estimators, which also leads to asymptotic convergence results as the dimension of the problem tends to infinity. Unlike previous works which studied single realizations of image processing problems with a very low degree of ill-posedness, we are interested in the statistical behaviour of the risk estimators for increasing ill-posedness. Interestingly, our theoretical results indicate that the quality of the SURE risk can deteriorate asymptotically for ill-posed problems, which is confirmed by an extensive numerical study. The latter shows that in many cases the SURE estimator leads to extremely small regularization parameters, which obviously cannot stabilize the reconstruction. Similar but less severe issues with respect to robustness also appear for the PSURE estimator, which in comparison to the rather conservative discrepancy principle leads to the conclusion that regularization parameter choice based on unbiased risk estimation is not a reliable procedure for ill-posed problems. A similar numerical study for sparsity regularization demonstrates that the same issue appears in non-linear variational regularization approaches.
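For linear Tikhonov regularization the data-space estimator (PSURE) has a compact closed form. A minimal sketch assuming Gaussian noise of known variance; `psure` is an illustrative name, and this reproduces only the textbook estimator, not the paper's asymptotic analysis.

```python
import numpy as np

def psure(A, y, lam, sigma):
    """Predictive SURE (PSURE) for Tikhonov regularization.

    For y = A x + e with e ~ N(0, sigma^2 I) and the influence matrix
    H = A (A^T A + lam I)^{-1} A^T,
        PSURE = ||y - H y||^2 + 2 sigma^2 tr(H) - n sigma^2
    is an unbiased estimate of the prediction risk E ||H y - A x||^2.
    """
    n = A.shape[0]
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
    r = y - H @ y
    return float(r @ r + 2.0 * sigma ** 2 * np.trace(H) - n * sigma ** 2)
```

Unbiasedness can be checked by Monte Carlo: averaging `psure` over many noise realizations converges to the analytic prediction risk.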

    Effective and Efficient Reconstruction Schemes for the Inverse Medium Problem in Scattering

    This thesis addresses the development of a computational framework facilitating the solution of the inverse medium problem in time-independent scattering in two- and three-dimensional settings. This includes three main application cases: the simulation of the scattered field for a given transmitter-receiver geometry; the generation of simulated data as well as the handling of real-world data; and the reconstruction of the refractive index of a penetrable medium from several measured, scattered fields. We focus on an effective and efficient reconstruction algorithm and therefore set up a variational reconstruction scheme. The underlying paradigm is to minimize the discrepancy between the predicted data, based on the reconstructed refractive index, and the given data, while taking into account various structural a priori information via suitable penalty terms, which are designed to promote information expected in real-world environments. Finally, the scheme relies on a primal-dual algorithm. In addition, information about the obstacle's shape and position obtained by the factorization method can be used as a priori information to increase the overall effectiveness of the scheme. An implementation is provided as the MATLAB toolbox IPscatt. It is tailored to the needs of practitioners; e.g., a heuristic algorithm for an automatic, data-driven choice of the regularization parameters is available. The effectiveness and efficiency of the proposed approach are demonstrated on simulated as well as real-world data by comparison with existing software packages.
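The underlying variational paradigm, minimizing a data discrepancy plus a structure-promoting penalty, can be sketched with a proximal-gradient (ISTA) iteration on a linear forward model. This is a deliberate simplification: IPscatt's actual scheme is primal-dual and the scattering operator is nonlinear, and all names here are illustrative.

```python
import numpy as np

def ista(F, d, alpha, step, iters=500):
    """Proximal-gradient (ISTA) iteration for
    min_m 0.5 * ||F m - d||^2 + alpha * ||m||_1
    (a sparsity penalty standing in for the thesis's structural priors)."""
    m = np.zeros(F.shape[1])
    for _ in range(iters):
        z = m - step * (F.T @ (F @ m - d))       # gradient step on the misfit
        m = np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)  # soft-threshold
    return m
```

With a step size below the inverse Lipschitz constant of the misfit gradient, the iteration recovers a sparse medium from underdetermined data.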

    Deep Structured Layers for Instance-Level Optimization in 2D and 3D Vision

    The approach we present in this thesis is that of integrating optimization problems as layers in deep neural networks. Optimization-based modeling provides an additional set of tools enabling the design of powerful neural networks for a wide battery of computer vision tasks. This thesis shows formulations and experiments for vision tasks ranging from image reconstruction to 3D reconstruction. We first propose an unrolled optimization method with implicit regularization properties for reconstructing images from noisy camera readings. The method resembles an unrolled majorization minimization framework with convolutional neural networks acting as regularizers. We report state-of-the-art performance in image reconstruction on both noisy and noise-free evaluation setups across many datasets. We further focus on the task of monocular 3D reconstruction of articulated objects using video self-supervision. The proposed method uses a structured layer for accurate object deformation that controls a 3D surface by displacing a small number of learnable handles. While relying on a small set of training data per category for self-supervision, the method obtains state-of-the-art reconstruction accuracy with diverse shapes and viewpoints for multiple articulated objects. We finally address the shortcomings of the previous method that revolve around regressing the camera pose using multiple hypotheses. We propose a method that recovers a 3D shape from a 2D image by relying solely on 3D-2D correspondences regressed from a convolutional neural network. These correspondences are used in conjunction with an optimization problem to estimate the camera pose and deformation for each sample. We quantitatively show the effectiveness of the proposed method on self-supervised 3D reconstruction on multiple categories without the need for multiple hypotheses.
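The unrolled-optimization idea can be sketched as a fixed number of layers, each taking a gradient step on the data fidelity and then applying a denoiser. Here a handcrafted smoother stands in for the learned CNN regularizer; the structure of the scheme, not the model, is the point, and all names are assumptions.

```python
import numpy as np

def smooth_denoiser(x, w=0.25):
    """Stand-in for a learned CNN regularizer: a fixed 3-tap smoother."""
    xp = np.pad(x, 1, mode="edge")
    return (1.0 - 2.0 * w) * x + w * (xp[:-2] + xp[2:])

def unrolled_reconstruction(A, y, steps=10, tau=None):
    """Unrolled iterations: a gradient step on 0.5 * ||A x - y||^2
    followed by a denoising step, mirroring unrolled
    majorization-minimization schemes."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the Lipschitz bound
    x = A.T @ y
    for _ in range(steps):
        x = x - tau * (A.T @ (A @ x - y))
        x = smooth_denoiser(x)                  # a trained network would act here
    return x
```

In a learned version the smoother's weights would be trained end-to-end through the unrolled iterations; here the fixed smoother already suppresses noise on a blurred, noisy signal.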

    Variational and learning models for image and time series inverse problems

    Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and we show they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both these model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The arising algorithms are applied to solve inverse problems involving images and time series. For each task, we show the proposed schemes improve the performance of many other existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation purposes and thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme considering a deep learning based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep learning based methods. We boost the performance of both supervised strategies, such as trained convolutional and recurrent networks, and unsupervised deep learning strategies, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
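Penalizing a training loss with a handcrafted regularization term can be sketched as a data term plus, for example, a total-variation penalty on the prediction. A minimal illustration, not the thesis's actual losses; the function name and the choice of TV are assumptions.

```python
import numpy as np

def regularized_loss(pred, target, lam=0.1):
    """Mean-squared data term plus a handcrafted total-variation penalty."""
    mse = np.mean((pred - target) ** 2)
    tv = np.sum(np.abs(np.diff(pred)))   # penalizes oscillatory predictions
    return mse + lam * tv
```

Two predictions with identical data misfit then score differently: the smoother one incurs a lower penalty, which is exactly how such terms steer both supervised training and Deep-Image-Prior-style fitting away from noisy solutions.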