
    Efficient Blind Deblurring under High Noise Levels

    The goal of blind image deblurring is to recover a sharp image from a motion-blurred one without knowing the camera motion. Current state-of-the-art methods perform remarkably well on images with no noise or very low noise levels. However, the noiseless assumption is not realistic, since low-light conditions, which require longer exposure times, are the main cause of motion blur. In practice, motion blur and moderate to high noise often appear together. Most works approach this problem by first estimating the blur kernel $k$ and then deconvolving the noisy blurred image. In this work, we first show that current state-of-the-art kernel estimation methods based on the $\ell_0$ gradient prior can be adapted to handle high noise levels while keeping their efficiency. Then, we show that a fast non-blind deconvolution method can be significantly improved by first denoising the blurry image. The proposed approach yields results equivalent to those obtained with much more computationally demanding methods.
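    The denoise-then-deconvolve idea can be illustrated with a toy pipeline: a generic denoiser followed by standard non-blind Wiener deconvolution with a known kernel. This is only a rough sketch under assumed parameters (`sigma`, `nsr`); it is not the paper's $\ell_0$-based kernel estimator or its fast non-blind solver, and it ignores boundary handling and the circular shift introduced by the kernel placement.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    # Non-blind Wiener deconvolution in the Fourier domain;
    # nsr is an assumed noise-to-signal ratio (illustrative tuning knob).
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    X = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

def denoise_then_deconvolve(noisy_blurred, kernel, sigma=2.0, nsr=1e-2):
    # Denoise first (placeholder Gaussian denoiser), then deconvolve.
    denoised = gaussian_filter(noisy_blurred, sigma)
    return wiener_deconvolve(denoised, kernel, nsr)
```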

    Deep-URL: A Model-Aware Approach To Blind Deconvolution Based On Deep Unfolded Richardson-Lucy Network

    The lack of interpretability in current deep learning models causes serious concerns, as they are extensively used in various life-critical applications. Hence, it is of paramount importance to develop interpretable deep learning models. In this paper, we consider the problem of blind deconvolution and propose a novel model-aware deep architecture that allows for the recovery of both the blur kernel and the sharp image from the blurred image. In particular, we propose the Deep Unfolded Richardson-Lucy (Deep-URL) framework, an interpretable deep-learning architecture that can be seen as an amalgamation of a classical estimation technique and a deep neural network, and consequently leads to improved performance. Our numerical investigations demonstrate significant improvement compared to state-of-the-art algorithms. Comment: Accepted at the 27th IEEE International Conference on Image Processing (ICIP), 2020.
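    Deep-URL unrolls the classical Richardson-Lucy iteration into a fixed number of layers. The sketch below shows only the classical iteration written as an unrolled loop; in an unfolded network each iteration would gain learnable components (kernel estimate, step parameters), which are not shown here. Function and parameter names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def unrolled_richardson_lucy(blurred, psf, n_layers=10, eps=1e-8):
    # Classical multiplicative Richardson-Lucy updates, run for a fixed
    # "depth" n_layers, the way an unfolded network would be.
    x = np.full(blurred.shape, float(blurred.mean()))
    psf_flip = psf[::-1, ::-1]          # adjoint of convolution with psf
    for _ in range(n_layers):
        est = fftconvolve(x, psf, mode="same")
        ratio = blurred / (est + eps)
        x = x * fftconvolve(ratio, psf_flip, mode="same")
    return x
```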

    Blind Deconvolution with Non-local Sparsity Reweighting

    Blind deconvolution has made significant progress in the past decade. Most successful algorithms are classified either as variational or as maximum a-posteriori (MAP). In spite of the superior theoretical justification of variational techniques, carefully constructed MAP algorithms have proven equally effective in practice. In this paper, we show that all successful MAP and variational algorithms share a common framework, relying on the following key principles: sparsity promotion in the gradient domain, $\ell_2$ regularization for kernel estimation, and the use of convex (often quadratic) cost functions. Our observations lead to a unified understanding of the principles required for successful blind deconvolution. We incorporate these principles into a novel algorithm that improves significantly upon the state of the art. Comment: 19 pages.
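    One shared ingredient named above, the $\ell_2$-regularized kernel update, has a closed-form solution in the Fourier domain. The sketch below is a generic version of that step, estimating $k$ from gradients of the current latent image and of the blurred image; variable names, the regularization weight, and the kernel size are assumptions, not the paper's algorithm.

```python
import numpy as np

def estimate_kernel_l2(latent_grad, blurred_grad, gamma=1e-2, ksize=31):
    # Closed-form minimizer of ||k * gx - gy||^2 + gamma * ||k||^2,
    # computed with FFTs (circular boundary conditions assumed).
    Gx = np.fft.fft2(latent_grad)
    Gy = np.fft.fft2(blurred_grad)
    K = np.conj(Gx) * Gy / (np.abs(Gx) ** 2 + gamma)
    k = np.fft.fftshift(np.real(np.fft.ifft2(K)))
    # Crop to the assumed kernel support, enforce non-negativity, normalize.
    cy, cx = k.shape[0] // 2, k.shape[1] // 2
    r = ksize // 2
    k = np.clip(k[cy - r:cy + r + 1, cx - r:cx + r + 1], 0, None)
    return k / (k.sum() + 1e-12)
```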

    Non-Uniform Blind Deblurring with a Spatially-Adaptive Sparse Prior

    Typical blur from camera shake often deviates from the standard uniform convolutional model, in part because of problematic rotations that create greater blurring away from some unknown center point. Consequently, successful blind deconvolution requires estimating a spatially varying or non-uniform blur operator. Using ideas from Bayesian inference and convex analysis, this paper derives a non-uniform blind deblurring algorithm with several desirable, yet previously unexplored attributes. The underlying objective function includes a spatially adaptive penalty that couples the latent sharp image, the non-uniform blur operator, and the noise level. This coupling allows the penalty to automatically adjust its shape based on the estimated degree of local blur and image structure, so that regions with large blur or few prominent edges are discounted. Remaining regions with modest blur and revealing edges therefore dominate the overall estimation process without explicitly incorporating structure-selection heuristics. The algorithm can be implemented using a majorization-minimization strategy that is virtually parameter free. Detailed theoretical analysis and experiments on real images validate the proposed method.
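    A common way to represent such a non-uniform blur operator is as a pixelwise-weighted mixture of a few basis blurs (for example, kernels induced by different camera rotations). The forward-model sketch below illustrates that representation only; it is not the paper's estimation algorithm, and the choice of basis kernels and weight maps is left to the caller.

```python
import numpy as np
from scipy.signal import fftconvolve

def nonuniform_blur(image, kernels, weights):
    # kernels: list of 2-D PSFs; weights: matching list of per-pixel maps
    # that should sum to one at every pixel.
    out = np.zeros(image.shape, dtype=float)
    for k, w in zip(kernels, weights):
        out += w * fftconvolve(image, k, mode="same")
    return out
```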

    Modeling Realistic Degradations in Non-blind Deconvolution

    Most image deblurring methods assume an over-simplistic image formation model and as a result are sensitive to more realistic image degradations. We propose a novel variational framework that explicitly handles pixel saturation, noise, quantization, as well as a non-linear camera response function due to, e.g., gamma correction. We show that accurately modeling a more realistic image acquisition pipeline leads to significant improvements, both in terms of image quality and PSNR. Furthermore, we show that incorporating the non-linear response in both the data and the regularization terms of the proposed energy leads to a more detailed restoration than a naive inversion of the non-linear curve. The minimization of the proposed energy is performed using stochastic optimization. A dataset consisting of realistically degraded images is created in order to evaluate the method. Comment: Accepted at the 2018 IEEE International Conference on Image Processing (ICIP 2018).
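    The degradation pipeline described above (blur, saturation, non-linear response, noise, quantization) can be written down directly as a forward model. The sketch below is one plausible ordering with illustrative parameter values (gamma, noise level, bit depth); it shows the kind of pipeline being modeled, not the paper's exact formation model.

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade(sharp, kernel, gamma=2.2, noise_std=0.01, levels=256, rng=None):
    # sharp is assumed to be a float image in [0, 1].
    rng = np.random.default_rng() if rng is None else rng
    blurred = fftconvolve(sharp, kernel, mode="same")
    saturated = np.clip(blurred, 0.0, 1.0)          # pixel saturation
    response = saturated ** (1.0 / gamma)           # simple CRF stand-in
    noisy = response + rng.normal(0.0, noise_std, sharp.shape)
    quantized = np.round(np.clip(noisy, 0.0, 1.0) * (levels - 1)) / (levels - 1)
    return quantized
```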

    Learn to Model Motion from Blurry Footages

    It is difficult to recover the motion field from real-world footage that contains a mixture of camera shake and other photometric effects. In this paper we propose a hybrid framework that interleaves a Convolutional Neural Network (CNN) and a traditional optical flow energy. We first construct a CNN architecture with a novel learnable directional filtering layer. This layer encodes the angle and distance similarity between blur and camera motion, which enhances the blur features of camera-shake footage. The proposed CNNs are then integrated into an iterative optical flow framework, which makes it possible to model and solve both the blind deconvolution and the optical flow estimation problems simultaneously. Our framework is trained end-to-end on a synthetic dataset and yields competitive precision and performance against state-of-the-art approaches. Comment: Preprint of our paper accepted by Pattern Recognition.
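    The "directional filtering" idea can be grounded with a fixed bank of linear motion-blur filters at several orientations, roughly the kind of response a learnable directional layer could start from. The sketch below only builds such a bank; the learnable layer, the angle/distance similarity matrix, and the flow energy of the paper are not reproduced, and all parameters are assumptions.

```python
import numpy as np

def directional_filter_bank(n_angles=8, length=9):
    # Linear motion-blur kernels at n_angles orientations in [0, pi).
    filters = []
    c = length // 2
    for angle in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        f = np.zeros((length, length))
        for t in np.linspace(-c, c, 4 * length):
            row = int(round(c + t * np.sin(angle)))
            col = int(round(c + t * np.cos(angle)))
            f[row, col] = 1.0
        filters.append(f / f.sum())
    return filters
```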

    Blind Deconvolution Microscopy Using Cycle Consistent CNN with Explicit PSF Layer

    Deconvolution microscopy has been extensively used to improve the resolution of widefield fluorescence microscopy. Conventional approaches, which usually require measurement or blind estimation of the point spread function (PSF), are however computationally expensive. Recently, CNN-based approaches have been explored as a fast and high-performance alternative. In this paper, we present a novel unsupervised deep neural network for blind deconvolution based on cycle consistency and PSF modeling layers. In contrast to recent CNN approaches to similar problems, the explicit PSF modeling layers improve the robustness of the algorithm. Experimental results confirm the efficacy of the algorithm.
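    The role of an explicit PSF layer in a cycle-consistent setup can be summarized as: deblur the measurement, re-blur the estimate with the modeled PSF, and compare with the input. The sketch below expresses that consistency term with a generic deblurring function standing in for the network; it is a conceptual illustration, not the paper's architecture or loss.

```python
import numpy as np
from scipy.signal import fftconvolve

def cycle_consistency_loss(blurred, deblur_fn, psf):
    # deblur_fn stands in for the (in practice, learned) deblurring network;
    # psf stands in for the (in practice, learned) PSF modeling layer.
    estimate = deblur_fn(blurred)
    reblurred = fftconvolve(estimate, psf, mode="same")
    return float(np.mean((reblurred - blurred) ** 2))
```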

    Iterative Residual Image Deconvolution

    Image deblurring, a.k.a. image deconvolution, recovers a clear image from pixel superposition caused by blur degradation. Few deep convolutional neural networks (CNNs) succeed in addressing this task. In this paper, we first demonstrate that the minimum-mean-square-error (MMSE) solution to image deblurring can be unfolded into a series of residual components. Based on this analysis, we propose a novel iterative residual deconvolution (IRD) algorithm. Further, IRD motivates us to take one step forward to design an explicable and effective CNN architecture for image deconvolution. Specifically, a sequence of residual CNN units is deployed, whose intermediate outputs are then concatenated and integrated, resulting in the concatenated residual convolutional network (CRCNet). The experimental results demonstrate that the proposed CRCNet not only achieves better quantitative metrics but also recovers more visually plausible texture details compared with state-of-the-art methods. Comment: rejected by AAAI 201
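    The series-expansion view of a regularized MMSE-style solution leads to simple residual updates of the current estimate. The sketch below is a generic Landweber-style iteration with a small $\ell_2$ term, written to echo that residual structure; it is not the paper's IRD algorithm or CRCNet, and the step size and regularization weight are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def iterative_residual_deconv(blurred, psf, n_iters=50, lam=1e-3, step=1.0):
    # Gradient-descent (Landweber-style) updates for
    # min_x ||psf * x - blurred||^2 + lam * ||x||^2, each step adding a
    # residual correction to the running estimate.
    psf_flip = psf[::-1, ::-1]
    hty = fftconvolve(blurred, psf_flip, mode="same")
    x = np.zeros(blurred.shape, dtype=float)
    for _ in range(n_iters):
        hthx = fftconvolve(fftconvolve(x, psf, mode="same"), psf_flip, mode="same")
        residual = hty - hthx - lam * x
        x = x + step * residual
    return x
```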

    A Robust Variational Model for Positive Image Deconvolution

    In this paper, an iterative method for robust deconvolution with positivity constraints is discussed. It is based on the known variational interpretation of Richardson-Lucy iterative deconvolution as a fixed-point iteration for the minimisation of an information divergence functional under a multiplicative perturbation model. The asymmetric penaliser function involved in this functional is then modified into a robust penaliser, and complemented with a regulariser. The resulting functional gives rise to a fixed-point iteration that we call robust and regularised Richardson-Lucy deconvolution. It achieves an image restoration quality comparable to state-of-the-art robust variational deconvolution with a computational efficiency similar to that of the original Richardson-Lucy method. Experiments on synthetic and real-world image data demonstrate the performance of the proposed method.
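    A simple way to see how a robust penaliser enters the multiplicative update is to reweight the data-fit term pixelwise, so that pixels with large residuals (outliers, impulse noise) contribute less. The sketch below adds such a weight to a plain Richardson-Lucy loop; it omits the regulariser and is only a loose illustration of the idea, not the paper's exact fixed-point scheme.

```python
import numpy as np
from scipy.signal import fftconvolve

def robust_richardson_lucy(blurred, psf, n_iters=30, delta=0.05, eps=1e-8):
    # Richardson-Lucy with per-pixel robustness weights w that shrink
    # where the data-fit residual is large (delta is an assumed scale).
    x = np.full(blurred.shape, float(blurred.mean()))
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iters):
        est = fftconvolve(x, psf, mode="same") + eps
        w = 1.0 / np.sqrt(1.0 + ((blurred - est) / delta) ** 2)
        num = fftconvolve(w * blurred / est, psf_flip, mode="same")
        den = fftconvolve(w, psf_flip, mode="same") + eps
        x = x * num / den
    return x
```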

    Blind image deblurring using class-adapted image priors

    Blind image deblurring (BID) is an ill-posed inverse problem, usually addressed by imposing prior knowledge on the (unknown) image and on the blurring filter. Most of the work on BID has focused on natural images, using image priors based on statistical properties of generic natural images. However, in many applications it is known that the image being recovered belongs to some specific class (e.g., text, face, fingerprints), and exploiting this knowledge allows obtaining more accurate priors. In this work, we propose a method where a Gaussian mixture model (GMM) is used to learn a class-adapted prior by training on a dataset of clean images of that class. Experiments show the competitiveness of the proposed method in terms of restoration quality when dealing with images containing text, faces, or fingerprints. Additionally, experiments show that the proposed method is able to handle text images at high noise levels, outperforming state-of-the-art methods specifically designed for BID of text images. Comment: 5 pages.
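    Learning a class-adapted GMM prior amounts to fitting a mixture model on patches extracted from clean images of the target class and then scoring candidate restorations under it. The sketch below does exactly that with scikit-learn; patch size, stride, and the number of components are assumed hyper-parameters, and how the prior is plugged into the deblurring energy is left out.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_patches(image, size=8, stride=4):
    # Flattened, mean-removed patches from a grayscale float image.
    patches = []
    for i in range(0, image.shape[0] - size + 1, stride):
        for j in range(0, image.shape[1] - size + 1, stride):
            p = image[i:i + size, j:j + size].ravel().astype(float)
            patches.append(p - p.mean())
    return np.asarray(patches)

def learn_class_prior(clean_images, n_components=20):
    # Fit the class-adapted GMM prior on patches from clean class images.
    data = np.vstack([extract_patches(im) for im in clean_images])
    return GaussianMixture(n_components=n_components,
                           covariance_type="full").fit(data)

def prior_score(gmm, image):
    # Average patch log-likelihood of an image under the learned prior.
    return float(gmm.score_samples(extract_patches(image)).mean())
```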