Deep Burst Denoising
Noise is an inherent issue of low-light image capture, one which is
exacerbated on mobile devices due to their narrow apertures and small sensors.
One strategy for mitigating noise in a low-light situation is to increase the
shutter time of the camera, thus allowing each photosite to integrate more
light and decrease noise variance. However, there are two downsides of long
exposures: (a) bright regions can exceed the sensor range, and (b) camera and
scene motion will result in blurred images. Another way of gathering more light
is to capture multiple short (thus noisy) frames in a "burst" and intelligently
integrate the content, thus avoiding the above downsides. In this paper, we use
the burst-capture strategy and implement the intelligent integration via a
recurrent fully convolutional deep neural net (CNN). We build our novel,
multiframe architecture to be a simple addition to any single-frame denoising
model, and design it to handle an arbitrary number of noisy input frames. We
show that it achieves state-of-the-art denoising results on our burst dataset,
improving on the best published multi-frame techniques, such as VBM4D and
FlexISP. Finally, we explore other applications of image enhancement by
integrating content from multiple frames and demonstrate that our DNN
architecture generalizes well to image super-resolution.
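The variance-reduction intuition behind burst capture can be sketched with plain frame averaging (the paper's recurrent CNN learns a far more sophisticated integration); everything below is synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean "scene": a flat grey patch (illustrative only).
clean = np.full((64, 64), 0.5)

def capture_burst(n_frames, noise_std=0.1):
    """Simulate a burst of short, aligned, noisy exposures of the scene."""
    return clean + rng.normal(0.0, noise_std, size=(n_frames, 64, 64))

def integrate(burst):
    """Naive integration: average the frames.
    Averaging N i.i.d. noisy frames shrinks the noise std by sqrt(N)."""
    return burst.mean(axis=0)

single = capture_burst(1)[0]
merged = integrate(capture_burst(16))

print(np.std(single - clean))  # ~0.1   (one short exposure)
print(np.std(merged - clean))  # ~0.025 (16-frame burst: 0.1 / sqrt(16))
```

Each short exposure stays below the sensor's saturation limit and is short enough to freeze motion, which is why this beats a single long exposure in principle.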
A Comparison of Image Denoising Methods
Advances in imaging devices and the countless images generated every day
place increasingly high demands on image denoising, which remains a
challenging task in terms of both effectiveness and efficiency. To improve
denoising quality, numerous denoising techniques and approaches have been
proposed in the past decades, including different transforms, regularization
terms, algebraic representations and especially advanced deep neural network
(DNN) architectures. Despite their sophistication, many methods may fail to
achieve desirable results for simultaneous noise removal and fine detail
preservation. In this paper, to investigate the applicability of existing
denoising techniques, we compare a variety of denoising methods on both
synthetic and real-world datasets for different applications. We also introduce
a new dataset for benchmarking, and evaluations are performed from four
perspectives: quantitative metrics, visual quality, human
ratings, and computational cost. Our experiments demonstrate: (i) the
effectiveness and efficiency of representative traditional denoisers for
various denoising tasks, (ii) that a simple matrix-based algorithm may
produce results comparable to those of its tensor counterparts, and (iii) the
notable achievements of DNN models, which exhibit impressive generalization
ability and show state-of-the-art performance on various datasets. Despite
the progress in recent years, we also discuss shortcomings and possible
extensions of existing techniques. Datasets, code and results are made publicly available
and will be continuously updated at
https://github.com/ZhaomingKong/Denoising-Comparison.
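A minimal sketch of one of the quantitative metrics such comparisons typically rely on, PSNR, with synthetic data standing in for a real benchmark:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((32, 32))                       # stand-in "ground truth"
noisy = clean + rng.normal(0.0, 0.1, clean.shape)  # stand-in "denoiser output"
print(round(psnr(clean, noisy), 1))  # ~20 dB for Gaussian noise, sigma = 0.1
```

PSNR summarizes only the mean squared error, which is why benchmarks like this one complement it with visual inspection and human ratings.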
Denoising Low-Dose CT Images using Multi-frame techniques
This study examines potential methods of reducing the X-ray radiation dose of Computed Tomography (CT)
by using multi-frame low-dose CT images. Although a single-frame low-dose CT image is of limited diagnostic
use due to excessive noise, we have found that multi-frame low-dose CT images can be denoised quite
significantly at a lower radiation dose. We propose two approaches that leverage these multi-frame
low-dose CT denoising techniques.
In our first method, we propose a blind source separation (BSS) based CT image denoising method that uses
a multi-frame low-dose image sequence. Using the BSS technique, we estimate the independent image component
and the noise components from the image sequence. The extracted image component is then further denoised with
a nonlocal groupwise denoiser, BM3D, parameterized by the mean standard deviation of the noise components.
We also propose an extension of this method using a window-splitting technique.
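As a rough sketch of the image/noise split this method relies on, frame averaging can stand in for the BSS step (a real BSS step, e.g. ICA, estimates statistically independent components); the recovered mean noise standard deviation is what a denoiser such as BM3D would consume. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean CT slice (a smooth gradient) and 8 noisy acquisitions.
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
n_frames, sigma = 8, 0.08
frames = np.stack([clean + rng.normal(0.0, sigma, clean.shape)
                   for _ in range(n_frames)])

# Crude stand-in for the BSS step: treat the frame average as the shared
# image component and the per-frame residuals as the noise components.
image_component = frames.mean(axis=0)
noise_components = frames - image_component

# Mean noise standard deviation (with a correction factor, because the
# residuals are taken against the estimated, not true, image component).
sigma_hat = noise_components.std() * np.sqrt(n_frames / (n_frames - 1))
print(round(float(sigma_hat), 3))  # recovers ~0.08, the simulated noise level
```

The estimated sigma is then the key input parameter to BM3D, which is not implemented here.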
In our second method, we leverage deep learning to introduce a collaborative technique that trains multiple
Noise2Noise generators simultaneously and learns the image representation from LDCT images. We present three
models based on this Collaborative Network (CN) principle, employing two generators (CN2G), three generators
(CN3G), and a hybrid of three generators (HCN3G) that combines the BSS denoiser with one of the CN generators.
The CN3G model shows better denoised image quality than the CN2G model at the expense of an additional LDCT
image. The HCN3G model combines the advantages of both by managing to train three collaborative generators
using only two LDCT images, leveraging our first proposed method based on blind source separation (BSS) and
the block-matching 3-D (BM3D) filter.
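The Noise2Noise principle underlying the generators, training a denoiser to map one noisy realization onto another, can be sketched with a scalar "generator" and the L2 loss on synthetic data (the collaborative multi-generator setup itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

n, sigma = 100_000, 0.5
signal = rng.normal(0.0, 1.0, n)         # hypothetical clean pixel values
x1 = signal + rng.normal(0.0, sigma, n)  # first noisy realization
x2 = signal + rng.normal(0.0, sigma, n)  # second, independent realization

# Noise2Noise: fit the denoiser to map one noisy copy onto the other.
# With a scalar shrinkage "generator" f(x) = w * x, the L2 loss is
#   L(w) = mean((w * x1 - x2) ** 2)
w, lr = 0.0, 0.1
for _ in range(200):
    grad = 2.0 * np.mean((w * x1 - x2) * x1)  # dL/dw
    w -= lr * grad

# Optimum if the clean target were available (Wiener shrinkage); training
# against the noisy target converges to essentially the same coefficient.
wiener = 1.0 / (1.0 + sigma ** 2)
print(round(w, 3), round(wiener, 3))
```

Because the second realization's noise is independent and zero-mean, it cannot be predicted and only shifts the loss by a constant, so no clean targets are needed; this is what makes the approach attractive for LDCT, where clean references are costly to acquire.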
By using these multi-frame techniques, we can reduce the radiation dose quite significantly without losing
significant image detail, especially in low-contrast areas. Among all our methods, the HCN3G model performs
best in terms of PSNR, SSIM, and material noise characteristics, while CN2G and CN3G perform better in terms
of contrast difference. The HCN3G model combines two of our methods in a single technique. In addition, we
introduce a Collaborative Network (CN) and collaborative loss terms into the L2 loss calculation of our
second method, which is a significant contribution of this research study
- …