Deep residual learning in CT physics: scatter correction for spectral CT
Recently, spectral CT has been drawing a lot of attention in a variety of
clinical applications primarily due to its capability of providing quantitative
information about material properties. The quantitative integrity of the
reconstructed data depends on the accuracy of the data corrections applied to
the measurements. Scatter correction is a particularly sensitive correction in
spectral CT as it depends on system effects as well as the object being imaged
and any residual scatter is amplified during the non-linear material
decomposition. An accurate way of removing scatter is subtracting the scatter
estimated by Monte Carlo simulation. However, obtaining sufficiently accurate
scatter estimates requires simulating extremely large numbers of photons,
which leads to prohibitively high computational cost. Other approaches model scatter as a
convolution operation using kernels derived using empirical methods. These
techniques have been found to be insufficient in spectral CT due to their
inability to sufficiently capture object dependence. In this work, we develop a
deep residual learning framework that addresses both computational cost and
object dependence. A deep convolutional neural network is trained
to determine the scatter distribution from the projection content in training
sets. In test cases of a digital anthropomorphic phantom and a real water
phantom, we demonstrate that, at much lower computational cost, the proposed
network provides sufficiently accurate scatter estimates.
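The residual correction described above can be sketched in miniature. In this hypothetical 1-D sketch, the trained CNN is replaced by a fixed broad smoothing kernel standing in for the learned projection-to-scatter mapping; all names and values are illustrative, not from the paper:

```python
# Sketch of scatter correction by estimate-and-subtract (1-D, illustrative).
# In the paper a trained CNN maps projection content to a scatter estimate;
# here a fixed broad, low-amplitude smoothing kernel is a placeholder.

def convolve1d(signal, kernel):
    """Same-size 1-D convolution with zero padding at the borders."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

def estimate_scatter(projection, kernel):
    # Stand-in for the learned model: scatter as a smooth, low-frequency
    # function of the projection content.
    return convolve1d(projection, kernel)

def correct_projection(projection, kernel):
    # Residual correction: subtract the scatter estimate from the measurement.
    scatter = estimate_scatter(projection, kernel)
    return [t - s for t, s in zip(projection, scatter)]

# Toy projection: a primary signal plus a smooth scatter background.
primary = [0.0, 0.0, 1.0, 4.0, 1.0, 0.0, 0.0]
scatter_true = [0.2] * 7
measured = [p + s for p, s in zip(primary, scatter_true)]

smooth_kernel = [0.04] * 5  # broad, low-amplitude placeholder kernel
corrected = correct_projection(measured, smooth_kernel)
```

The corrected values land closer to the primary signal than the raw measurement does, which is the property the learned network is trained to deliver at scale.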
Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of
one-shot convolution filtering that can directly convolve with naturally
blurred images for restoration. The problem of optical blurring is a common
disadvantage to many imaging applications that suffer from optical
imperfections. Numerous deconvolution methods blindly estimate the blur in
either inclusive or exclusive forms, but they remain practically challenging
due to high computational cost and low image reconstruction quality. Both
high accuracy and high speed are prerequisites for
high-throughput imaging platforms in digital archiving. In such platforms,
deblurring is required after image acquisition before being stored, previewed,
or processed for high-level interpretation. Therefore, on-the-fly correction of
such images is important to avoid possible time delays, mitigate computational
expenses, and increase image perception quality. We bridge this gap by
synthesizing a deconvolution kernel as a linear combination of Finite Impulse
Response (FIR) even-derivative filters that can be directly convolved with
blurry input images to boost the frequency fall-off of the Point Spread
Function (PSF) associated with the optical blur. We employ a Gaussian low-pass
filter to decouple the image denoising problem for image edge deblurring.
Furthermore, we propose a blind approach to estimate the PSF statistics for
both Gaussian and Laplacian models, which are common in many imaging pipelines.
Thorough experiments are designed to test and validate the efficiency of the
proposed method using 2054 naturally blurred images across six imaging
applications and seven state-of-the-art deconvolution methods.

Comment: 15 pages, for publication in IEEE Transactions on Image Processing
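The kernel construction can be illustrated in 1-D. Truncating the series inverse of a Gaussian PSF, exp(σ²ω²/2) ≈ 1 + σ²ω²/2, and noting that ω² corresponds to a negated second derivative, gives a deblurring filter that is the identity minus a scaled second-derivative FIR filter, the lowest even-derivative term of the family the abstract describes. The σ value and step-edge signal below are illustrative assumptions, not the paper's experimental settings:

```python
import math

def convolve1d(signal, kernel):
    """Same-size 1-D convolution with zero padding at the borders."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

def gaussian_psf(sigma, radius=3):
    """Normalized FIR approximation of a Gaussian PSF."""
    w = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def deblur_kernel(sigma):
    """Lowest-order even-derivative deblurring filter:
    delta - (sigma^2 / 2) * [1, -2, 1]  (second-derivative FIR filter)."""
    c = 0.5 * sigma * sigma
    return [-c, 1.0 + 2.0 * c, -c]

# A step edge, blurred by the PSF, then restored by one-shot convolution.
sigma = 0.8
signal = [0.0] * 6 + [1.0] * 6
blurred = convolve1d(signal, gaussian_psf(sigma))
restored = convolve1d(blurred, deblur_kernel(sigma))
```

Restoration is a single convolution with the precomputed kernel, which is what makes the approach attractive for on-the-fly correction; adding higher-order even-derivative terms would boost the frequency fall-off further.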