Implementation of the VBM3D Video Denoising Method and Some Variants
VBM3D is an extension to video of the well known image denoising algorithm
BM3D, which takes advantage of the sparse representation of stacks of similar
patches in a transform domain. The extension is rather straightforward: the
similar 2D patches are taken from a spatio-temporal neighborhood which includes
neighboring frames. In spite of its simplicity, the algorithm offers a good
trade-off between denoising performance and computational complexity. In this
work we revisit this method, providing an open-source C++ implementation
reproducing the results. A detailed description is given and the choice of
parameters is thoroughly discussed. Furthermore, we discuss several extensions
of the original algorithm: (1) a multi-scale implementation, (2) the use of 3D
patches, (3) the use of optical flow to guide the patch search. These
extensions make it possible to obtain results that are competitive with even
the most recent state of the art.

Comment: 18 pages, 7 figures, 5 tables
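The spatio-temporal patch search that distinguishes VBM3D from BM3D can be sketched as follows. This is an illustrative brute-force version in Python/NumPy; the patch size, window sizes, distance measure, and function name are placeholder choices for exposition, not the parameters or code of the reference implementation:

```python
import numpy as np

def find_similar_patches(video, t, y, x, psize=8, search=7, frames=2, k=8):
    """Collect the k patches most similar to the reference patch whose
    top-left corner is (y, x) in frame t, searching a spatial window of
    radius `search` in the current frame and in up to `frames` neighboring
    frames on each side (the spatio-temporal neighborhood)."""
    T, H, W = video.shape
    ref = video[t, y:y + psize, x:x + psize]
    candidates = []
    for dt in range(max(0, t - frames), min(T, t + frames + 1)):
        for yy in range(max(0, y - search), min(H - psize, y + search) + 1):
            for xx in range(max(0, x - search), min(W - psize, x + search) + 1):
                patch = video[dt, yy:yy + psize, xx:xx + psize]
                d = np.sum((patch - ref) ** 2)  # squared L2 patch distance
                candidates.append((d, dt, yy, xx))
    candidates.sort(key=lambda c: c[0])  # keep the k closest patches
    return candidates[:k]
```

In VBM3D the stack of matched patches would then be denoised jointly by collaborative filtering in a transform domain; the sketch above only covers the grouping step.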
Efficient Blind Deblurring under High Noise Levels
The goal of blind image deblurring is to recover a sharp image from a motion
blurred one without knowing the camera motion. Current state-of-the-art methods
have a remarkably good performance on images with no noise or very low noise
levels. However, the noiseless assumption is not realistic: low-light
conditions are the main cause of motion blur, since they force longer exposure
times, and the same conditions also produce significant noise. In fact, motion
blur and moderate to high noise often appear together. Most works approach
this problem by first
estimating the blur kernel and then deconvolving the noisy blurred image.
In this work, we first show that current state-of-the-art kernel estimation
methods based on the gradient prior can be adapted to handle high
noise levels while keeping their efficiency. Then, we show that a fast
non-blind deconvolution method can be significantly improved by first denoising
the blurry image. The proposed approach yields results that are equivalent to
those obtained with much more computationally demanding methods.
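The denoise-then-deconvolve pipeline advocated above can be sketched as follows. This is a minimal illustration, assuming a known blur kernel and using a classical Wiener filter for the fast non-blind step; the toy box-filter denoiser and all function names are placeholders, not the paper's method:

```python
import numpy as np

def box_denoise(img):
    """Toy 3x3 box-filter denoiser, standing in for the stronger
    denoiser that would be applied before deconvolution."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def wiener_deconvolve(blurred, kernel, balance=1e-3):
    """Fast non-blind deconvolution via a Wiener filter in the Fourier
    domain; `balance` limits noise amplification at frequencies the
    kernel attenuates."""
    h, w = blurred.shape
    K = np.fft.fft2(kernel, s=(h, w))
    B = np.fft.fft2(blurred)
    X = np.conj(K) * B / (np.abs(K) ** 2 + balance)
    return np.real(np.fft.ifft2(X))

def deblur_noisy(noisy_blurred, kernel):
    """Denoise first, then deconvolve, as the abstract advocates."""
    return wiener_deconvolve(box_denoise(noisy_blurred), kernel)
```

Deconvolving the raw noisy input would amplify the noise at frequencies where the kernel response is small; denoising first suppresses that noise before it reaches the inverse filter.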
Non-Local Video Denoising by CNN
Non-local patch-based methods were until recently the state of the art for
image denoising but are now outperformed by CNNs. Yet they are still the
state-of-the-art for video denoising, as video redundancy is a key factor to
attain high denoising performance. The problem is that CNN architectures are
hardly compatible with the search for self-similarities. In this work we
propose a new and efficient way to feed video self-similarities to a CNN. The
non-locality is incorporated into the network via a first non-trainable layer
which finds for each patch in the input image its most similar patches in a
search region. The central values of these patches are then gathered in a
feature vector which is assigned to each image pixel. This information is
presented to a CNN which is trained to predict the clean image. We apply the
proposed architecture to image and video denoising. For the latter, patches
are searched for in a 3D spatio-temporal volume. The proposed architecture
achieves
state-of-the-art results. To the best of our knowledge, this is the first
successful application of a CNN to video denoising.

Comment: A shorter version of this work has been accepted at ICIP 2019 (A
NON-LOCAL CNN FOR VIDEO DENOISING). The results of v2 were improved compared
to v1 and the code was updated accordingly. Code is available at:
https://github.com/axeldavy/vnlne
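The non-trainable non-local layer described in the abstract can be sketched as follows. This is an illustrative single-frame, brute-force version in Python/NumPy; patch size, search-window size, number of matches, and the function name are assumptions for exposition, not the authors' optimized implementation (for video, the candidate loop would extend over a 3D spatio-temporal volume):

```python
import numpy as np

def nonlocal_features(frame, psize=5, search=9, n=4):
    """Non-trainable non-local layer (sketch): for every pixel, find the n
    patches most similar to the patch centered on it within a search window,
    and gather the central values of those patches into an (H, W, n) feature
    map, which would then be fed to a trainable CNN."""
    H, W = frame.shape
    half = psize // 2
    r = search // 2
    pad = np.pad(frame, half, mode='reflect')  # so every pixel has a full patch
    feats = np.zeros((H, W, n))
    for y in range(H):
        for x in range(W):
            ref = pad[y:y + psize, x:x + psize]
            cands = []
            for yy in range(max(0, y - r), min(H, y + r + 1)):
                for xx in range(max(0, x - r), min(W, x + r + 1)):
                    patch = pad[yy:yy + psize, xx:xx + psize]
                    d = np.sum((patch - ref) ** 2)
                    cands.append((d, frame[yy, xx]))
            cands.sort(key=lambda c: c[0])
            feats[y, x] = [v for _, v in cands[:n]]  # central pixel values
    return feats
```

Because the best match for any patch is the patch itself, the first channel of the feature map reproduces the input frame; the remaining channels carry the self-similarity information the CNN exploits.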