    Application of Blind Deblurring Reconstruction Technique to SPECT Imaging

    A SPECT image can be approximated as the convolution of the ground-truth spatial radioactivity distribution with the system point spread function (PSF). The PSF of a SPECT system is determined by the combined effect of several factors, including the gamma camera PSF, scattering, attenuation, and collimator response. The SPECT system PSF is hard to determine analytically, although it can be measured experimentally. We formulated a blind deblurring reconstruction algorithm that estimates both the spatial radioactivity distribution and the system PSF from the set of blurred projection images. The algorithm imposes certain spatial-frequency-domain constraints on the reconstruction volume and the PSF and does not otherwise assume knowledge of the PSF. It alternates between two iterative update sequences that correspond to the PSF and radioactivity estimations, respectively. In simulations and a small-animal study, the algorithm reduced image blurring and preserved edges without introducing extra artifacts. Localized measurements show that the reconstruction efficiency of SPECT images improved by more than 50% compared with conventional expectation-maximization (EM) reconstruction. In experimental studies, the contrast and quality of the reconstruction were substantially improved with the blind deblurring reconstruction algorithm.
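The alternating update scheme described above can be sketched with a plain 2-D blind Richardson-Lucy loop. This is an illustrative stand-in, not the paper's algorithm: it omits the spatial-frequency-domain constraints, operates on a single blurred image rather than the full projection set, and all function and parameter names are ours.

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_richardson_lucy(blurred, psf_size=7, n_outer=10, n_inner=5):
    """Alternate Richardson-Lucy updates for the image and the PSF."""
    image = np.full_like(blurred, blurred.mean())          # flat initial guess
    psf = np.full((psf_size, psf_size), 1.0 / psf_size**2)  # uniform initial PSF
    eps = 1e-12
    # Center crop that pulls a psf_size x psf_size window out of an
    # image-sized correlation result.
    crop = tuple(slice((s - psf_size) // 2, (s - psf_size) // 2 + psf_size)
                 for s in blurred.shape)
    for _ in range(n_outer):
        # PSF update sequence: image held fixed.
        for _ in range(n_inner):
            est = fftconvolve(image, psf, mode="same")
            ratio = blurred / (est + eps)
            psf *= fftconvolve(ratio, image[::-1, ::-1], mode="same")[crop]
            psf /= psf.sum()                               # keep PSF normalized
        # Radioactivity (image) update sequence: PSF held fixed.
        for _ in range(n_inner):
            est = fftconvolve(image, psf, mode="same")
            ratio = blurred / (est + eps)
            image *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
    return image, psf
```

Each inner loop is a standard Richardson-Lucy multiplicative update; renormalizing the PSF after every step plays the role of a simple constraint that keeps the two estimates from trading scale between them.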

    Blind Deblurring Reconstruction Technique with Applications in PET Imaging

    We developed an empirical PET model that takes system blurring into account, together with a blind iterative reconstruction scheme that estimates both the actual image and the point spread function (PSF) of the system. High-quality reconstructed images can be obtained with the proposed technique for both synthetic and experimental data. In the synthetic-data study, the algorithm reduces image blurring and preserves edges without introducing extra artifacts; localized measurements show that the quality of the reconstructed image improved by up to 100%. In the experimental-data studies, the contrast and quality of the reconstruction are substantially improved. The proposed method shows promise for tumor localization and quantification.
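The conventional iterative baseline that these emission-tomography abstracts compare against is the MLEM update, x ← (x / Aᵀ1) ⊙ Aᵀ(y / Ax). A minimal dense-matrix sketch (a real system matrix is large and sparse, and the names here are illustrative):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM reconstruction for emission tomography.

    A maps activity x to expected projections y; the multiplicative
    update preserves nonnegativity and total counts.
    """
    x = np.ones(A.shape[1])                 # flat nonnegative start
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
    eps = 1e-12
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x + eps))) / (sens + eps)
    return x
```

The blind schemes above replace the fixed forward model `A` with one containing an unknown PSF that is estimated in an alternating loop.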

    Learning Deep CNN Denoiser Prior for Image Restoration

    Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. The two kinds of methods have complementary merits and drawbacks: model-based optimization methods are flexible enough to handle different inverse problems but are usually time-consuming and require sophisticated priors for good performance, whereas discriminative learning methods are fast at test time but are restricted to the specialized task they were trained for. Recent works have revealed that, with the aid of variable-splitting techniques, a denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration offers a considerable advantage when the denoiser is obtained via discriminative learning. However, integration with a fast discriminative denoiser prior has been little studied. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into a model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers not only achieves promising Gaussian denoising results but can also be used as a prior to deliver good performance for various low-level vision applications.
    Comment: Accepted to CVPR 2017. Code: https://github.com/cszn/ircn
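The variable-splitting integration can be illustrated with a half-quadratic-splitting loop for non-blind deblurring, in which a denoiser replaces the prior's proximal step. Here a Gaussian filter stands in for the paper's learned CNN denoisers, and the function and parameter names are ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_hqs_deblur(y, kernel, mu=0.05, n_iter=30):
    """Plug-and-play HQS: FFT data step + denoiser prior step."""
    # Pad the kernel to image size and center it at the origin so that
    # FFT multiplication matches circular convolution.
    K = np.zeros_like(y)
    kh, kw = kernel.shape
    K[:kh, :kw] = kernel
    K = np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    Kf = np.fft.fft2(K)
    Yf = np.fft.fft2(y)
    z = y.copy()
    for _ in range(n_iter):
        # x-step: quadratic data subproblem, closed form in Fourier domain.
        x = np.real(np.fft.ifft2((np.conj(Kf) * Yf + mu * np.fft.fft2(z))
                                 / (np.abs(Kf) ** 2 + mu)))
        # z-step: the "denoiser prior" applied to the current estimate;
        # a learned CNN denoiser would be called here instead.
        z = gaussian_filter(x, sigma=1.0)
    return z
```

The point of the splitting is visible in the loop: the data term never needs to know what the prior is, so any denoiser (including a discriminatively trained CNN) can be swapped into the z-step.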

    Deep Model-Based Super-Resolution with Non-uniform Blur

    We propose a state-of-the-art method for super-resolution with non-uniform blur. Single-image super-resolution methods seek to restore a high-resolution image from blurred, subsampled, and noisy measurements. Despite their impressive performance, existing techniques usually assume a uniform blur kernel and hence do not generalize well to the more general case of non-uniform blur. In this paper, we instead address the more realistic and computationally challenging case of spatially-varying blur. To this end, we first propose a fast deep plug-and-play algorithm, based on linearized ADMM splitting techniques, that can solve the super-resolution problem with spatially-varying blur. Second, we unfold our iterative algorithm into a single network and train it end-to-end. In this way, we avoid manually tuning the parameters of the optimization scheme. Our algorithm delivers strong performance and, after a single training, generalizes well to a large family of spatially-varying blur kernels, noise levels, and scale factors.
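A common way to make spatially-varying blur tractable, and the kind of forward operator such methods typically assume, is to express the per-pixel kernel as a weighted combination of a few basis kernels, so the blurred image is Σₚ wₚ ⊙ (kₚ ∗ x). A minimal sketch under that assumption, with illustrative names:

```python
import numpy as np
from scipy.signal import fftconvolve

def space_varying_blur(image, kernels, weights):
    """Spatially-varying blur as a weighted sum of uniform convolutions.

    kernels: list of P small 2-D basis kernels.
    weights: list of P pixel-wise weight maps (same shape as image);
             at each pixel the weights interpolate between the kernels.
    """
    out = np.zeros_like(image, dtype=float)
    for k, w in zip(kernels, weights):
        out += w * fftconvolve(image, k, mode="same")
    return out
```

The benefit is cost: P FFT convolutions instead of one convolution per pixel, while the adjoint (needed inside ADMM-style solvers) has the same structure with flipped kernels.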

    Probabilistic modeling and inference for sequential space-varying blur identification

    The identification of the parameters of spatially variant blurs, given a clean image and its blurry, noisy version, is a challenging inverse problem of interest in many application fields, such as biological microscopy and astronomical imaging. In this paper, we consider a parametric model of the blur and introduce a 1D state-space model to describe the statistical dependence among neighboring kernels. We apply a Bayesian approach to estimate the posterior distribution of the kernel parameters given the available data. Since this posterior is intractable for most realistic models, we propose to approximate it through a sequential Monte Carlo approach that processes all data in a sequential and efficient manner. Additionally, we propose a new sampling method to alleviate the particle degeneracy problem, which arises in approximate Bayesian filtering, particularly for concentrated posterior distributions. The method allows us to process image patches sequentially at reasonable computational and memory cost. Moreover, the probabilistic approach we adopt provides uncertainty quantification, which is useful for image restoration. Experimental results illustrate the improved estimation performance of the approach and demonstrate the benefits of exploiting the spatial structure of the parametric blurs in the considered models.
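The sequential Monte Carlo idea can be sketched with a toy bootstrap particle filter tracking a single scalar kernel parameter across patches. The random-walk transition, the Gaussian likelihood, and all names are illustrative assumptions; the paper's improved sampling step for fighting degeneracy is not reproduced here.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              trans_std=0.05, obs_std=0.1, rng=None):
    """SMC estimate of a blur parameter drifting across image patches.

    State model: the parameter follows a random walk from patch to patch.
    Observation model: each patch yields a noisy measurement of it.
    """
    rng = np.random.default_rng() if rng is None else rng
    particles = rng.normal(observations[0], 0.5, n_particles)
    estimates = []
    for y in observations:
        # Propagate particles through the 1-D state-space model.
        particles = particles + rng.normal(0.0, trans_std, n_particles)
        # Weight particles by the Gaussian observation likelihood.
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2) + 1e-300
        w /= w.sum()
        estimates.append(np.sum(w * particles))        # posterior mean
        # Multinomial resampling to counter particle degeneracy.
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)
```

Because each patch is absorbed as it arrives and only the particle set is kept, memory cost stays constant in the number of patches, which is the sequential-processing advantage the abstract refers to.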

    Filter Design and Applications in Image Improvement

    This work presents a performance analysis of basic techniques used for image restoration. Restoration is the process by which an image suffering from degradation is recovered to its original form; the scope of this work is removing noise from images. Different techniques for image enhancement and noise removal were implemented, and the degraded images were restored using different mathematical filters. A new approach was designed in MATLAB to improve the image and suppress the noise; the code was executed to eliminate the image degradation while avoiding loss of information, and it enables easy extraction of data from the images.
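As one example of the basic restoration filters such comparisons typically include, a median filter suppresses impulse (salt-and-pepper) noise by replacing each pixel with the median of its neighborhood, removing isolated outliers while preserving edges better than a mean filter. A minimal sketch using SciPy (the study's MATLAB workflow is not reproduced here):

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_impulse(image, size=3):
    """Remove isolated impulse-noise pixels with a size x size median filter."""
    return median_filter(image, size=size)
```

For example, a single corrupted pixel in an otherwise flat region is replaced by the neighborhood median and disappears entirely, whereas a 3x3 mean filter would smear it across nine pixels.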