Rethinking the Pipeline of Demosaicing, Denoising and Super-Resolution
Incomplete color sampling, noise degradation, and limited resolution are the
three key problems that are unavoidable in modern camera systems. Demosaicing
(DM), denoising (DN), and super-resolution (SR) are core components in a
digital image processing pipeline to overcome the three problems above,
respectively. Although each of these problems has been studied actively, the
mixture problem of DM, DN, and SR, which is of higher practical value, lacks
sufficient attention. Such a mixture problem is usually solved by a sequential
solution (applying each method independently in a fixed order: DM → DN → SR),
or is simply tackled by an end-to-end network without enough analysis of the
interactions among the tasks, resulting in an undesired drop in final image
quality. In this paper, we rethink the mixture problem from a holistic
perspective and propose a new image processing pipeline: DN → SR → DM.
Extensive experiments show that simply modifying the usual
sequential solution by leveraging our proposed pipeline could enhance the image
quality by a large margin. We further adopt the proposed pipeline into an
end-to-end network, and present Trinity Enhancement Network (TENet).
Quantitative and qualitative experiments demonstrate the superiority of our
TENet over the state of the art. In addition, we observe that the literature
lacks a fully color-sampled dataset. To this end, we contribute PixelShift200,
a new high-quality, fully color-sampled real-world dataset. Our experiments show the
benefit of the proposed PixelShift200 dataset for raw image processing.
Comment: Code is available at: https://github.com/guochengqian/TENe
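The core claim above, that reordering the classical sequential solution from DM → DN → SR to DN → SR → DM helps, can be illustrated with a toy pipeline. All three stages below are crude stand-ins (a box-filter denoiser, nearest-neighbour upsampling, and channel replication), not the paper's methods; only the composition order follows the abstract.

```python
import numpy as np

def denoise(raw):
    """Stand-in denoiser: a 3x3 box filter on the raw mosaic (placeholder)."""
    h, w = raw.shape
    pad = np.pad(raw, 1, mode="edge")
    out = np.zeros_like(raw)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += pad[dy:dy + h, dx:dx + w]
    return out / 9.0

def super_resolve(raw, scale=2):
    """Stand-in SR: nearest-neighbour upsampling of the (still mosaiced) signal."""
    return np.repeat(np.repeat(raw, scale, axis=0), scale, axis=1)

def demosaic(raw):
    """Stand-in DM: replicate the single mosaic channel into three channels."""
    return np.stack([raw] * 3, axis=-1)

def proposed_pipeline(raw):
    """DN -> SR -> DM, the ordering argued for in the abstract."""
    return demosaic(super_resolve(denoise(raw)))

raw = np.random.default_rng(0).random((8, 8)).astype(np.float32)
out = proposed_pipeline(raw)
print(out.shape)  # (16, 16, 3)
```

Real implementations would replace each stand-in with a learned model; the point here is only that the three stages compose, so their order is a free design choice the paper optimizes.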
Plug-and-Play Algorithms for Video Snapshot Compressive Imaging
We consider the reconstruction problem of video snapshot compressive imaging
(SCI), which captures high-speed videos using a low-speed 2D sensor (detector).
The underlying principle of SCI is to modulate sequential high-speed frames
with different masks and integrate the encoded frames into a single snapshot
on the sensor, so that the sensor itself can operate at low speed. On one hand,
video SCI enjoys the advantages of low-bandwidth, low-power and low-cost. On
the other hand, applying SCI to large-scale problems (HD or UHD videos) in our
daily life is still challenging and one of the bottlenecks lies in the
reconstruction algorithm. Existing algorithms are either too slow (iterative
optimization algorithms) or not flexible with respect to the encoding process
(deep-learning-based end-to-end networks). In this paper, we develop fast and flexible
algorithms for SCI based on the plug-and-play (PnP) framework. In addition to
the PnP-ADMM method, we further propose the PnP-GAP (generalized alternating
projection) algorithm with a lower computational workload. We first employ the
image deep denoising priors to show that PnP can recover a UHD color video with
30 frames from a snapshot measurement. Since videos have strong temporal
correlation, by employing the video deep denoising priors, we achieve a
significant improvement in the results. Furthermore, we extend the proposed PnP
algorithms to the color SCI system using mosaic sensors, where each pixel
captures only one of the red, green, or blue channels. A joint reconstruction and
demosaicing paradigm is developed for flexible and high quality reconstruction
of color video SCI systems. Extensive results on both simulation and real
datasets verify the superiority of our proposed algorithm.
Comment: 18 pages, 12 figures, and 4 tables. Journal extension of
arXiv:2003.13654. Code available at
https://github.com/liuyang12/PnP-SCI_pytho
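The GAP step described in the abstract alternates a Euclidean projection onto the measurement constraint with a plug-in denoising step. A minimal NumPy sketch of this loop follows; the identity denoiser and all names here are illustrative placeholders for the deep denoising priors the paper plugs in, not the authors' API.

```python
import numpy as np

def gap_denoise(y, masks, n_iter=30, denoiser=lambda v: v):
    """PnP-GAP sketch for video SCI with forward model y = sum_t masks[t] * x[t].

    `denoiser` is the plug-and-play prior; the identity default stands in
    for the deep image/video denoisers used in the paper.
    """
    phi_sum = (masks ** 2).sum(axis=0)            # diagonal of C C^T
    safe = np.where(phi_sum > 0, phi_sum, 1.0)    # avoid divide-by-zero
    x = masks * (y / safe)[None]                  # least-norm initialization
    for _ in range(n_iter):
        # Euclidean projection onto the constraint set {x : Cx = y}
        residual = y - (masks * x).sum(axis=0)
        v = x + masks * (residual / safe)[None]
        # Plug-and-play denoising step
        x = denoiser(v)
    return x

rng = np.random.default_rng(0)
masks = (rng.random((4, 6, 6)) > 0.5).astype(float)
truth = rng.random((4, 6, 6))
y = (masks * truth).sum(axis=0)                   # simulated snapshot
rec = gap_denoise(y, masks, n_iter=5)
```

With the identity denoiser the projection alone already satisfies the measurement exactly; a real denoiser trades that data fidelity against the prior, and PnP-ADMM follows the same pattern with an extra dual variable.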
Deep Mean-Shift Priors for Image Restoration
In this paper we introduce a natural image prior that directly represents a
Gaussian-smoothed version of the natural image distribution. We include our
prior in a formulation of image restoration as a Bayes estimator that also
allows us to solve noise-blind image restoration problems. We show that the
gradient of our prior corresponds to the mean-shift vector on the natural image
distribution. In addition, we learn the mean-shift vector field using denoising
autoencoders, and use it in a gradient descent approach to perform Bayes risk
minimization. We demonstrate competitive results for noise-blind deblurring,
super-resolution, and demosaicing.
Comment: NIPS 201
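The key relation in the abstract, that the gradient of the prior is the mean-shift vector and can be read off a denoising autoencoder's residual, can be sketched as a gradient-ascent loop. The shrink-toward-the-mean "DAE" and all hyperparameters below are toy placeholders, shown for the simplest case of denoising under an identity forward operator.

```python
import numpy as np

SIGMA = 1.0  # smoothing scale of the Gaussian-smoothed image distribution

def dae(x):
    """Placeholder denoising autoencoder: shrinks toward the global mean.

    In the paper this is a trained DAE; its residual DAE(x) - x equals
    sigma^2 times the mean-shift vector on the smoothed distribution.
    """
    return 0.9 * x + 0.1 * x.mean()

def prior_gradient(x):
    # grad log p_sigma(x) = (DAE(x) - x) / sigma^2   (mean-shift vector)
    return (dae(x) - x) / SIGMA ** 2

def denoise_map(y, sigma_n=1.0, step=0.1, n_iter=200):
    """Gradient ascent on Gaussian log-likelihood plus the mean-shift prior."""
    x = y.copy()
    for _ in range(n_iter):
        data_grad = (y - x) / sigma_n ** 2   # likelihood gradient
        x = x + step * (data_grad + prior_gradient(x))
    return x

y = np.random.default_rng(0).random((8, 8))
x_hat = denoise_map(y)
```

The fixed point balances the data term against the prior pull, which is the Bayes-risk-minimization picture the abstract describes; deblurring or demosaicing would swap in the corresponding forward operator inside the data gradient.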
Joint Demosaicking and Denoising in the Wild: The Case of Training Under Ground Truth Uncertainty
Image demosaicking and denoising are the two key fundamental steps in digital
camera pipelines, aiming to reconstruct clean color images from noisy luminance
readings. In this paper, we propose and study Wild-JDD, a novel learning
framework for joint demosaicking and denoising in the wild. In contrast to
previous works which generally assume the ground truth of training data is a
perfect reflection of the reality, we consider here the more common imperfect
case of ground truth uncertainty in the wild. We first illustrate its
manifestation as various kinds of artifacts, including the zipper effect, color
moiré, and residual noise. Then we formulate a two-stage data degradation
process to capture such ground truth uncertainty, where a conjugate prior
distribution is imposed upon a base distribution. After that, we derive an
evidence lower bound (ELBO) loss to train a neural network that approximates
the parameters of the conjugate prior distribution conditioned on the degraded
input. Finally, to further enhance the performance for out-of-distribution
input, we design a simple but effective fine-tuning strategy by taking the
input as a weakly informative prior. Taking into account ground truth
uncertainty, Wild-JDD enjoys good interpretability during optimization.
Extensive experiments validate that it outperforms state-of-the-art schemes on
joint demosaicking and denoising tasks on both synthetic and realistic raw
datasets.
Comment: Accepted by AAAI202
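Two ideas from this abstract can be sketched concretely: a two-stage degradation that injects ground-truth uncertainty before the usual mosaic-plus-noise corruption, and a loss in which the network predicts a distribution rather than a point estimate. The Gaussian negative log-likelihood below is a simplified stand-in for the paper's ELBO over conjugate-prior parameters; all names and constants are hypothetical.

```python
import numpy as np

def degrade(gt, gt_sigma=0.02, noise_sigma=0.05, rng=None):
    """Two-stage degradation sketch.

    Stage 1 perturbs the ideal image to model ground-truth uncertainty
    (the zipper/moire/residual-noise artifacts of real 'ground truth');
    stage 2 applies Bayer subsampling and read noise as usual.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noisy_gt = gt + rng.normal(0.0, gt_sigma, gt.shape)        # stage 1
    bayer = np.zeros(gt.shape[:2])
    bayer[0::2, 0::2] = noisy_gt[0::2, 0::2, 0]  # R
    bayer[0::2, 1::2] = noisy_gt[0::2, 1::2, 1]  # G
    bayer[1::2, 0::2] = noisy_gt[1::2, 0::2, 1]  # G
    bayer[1::2, 1::2] = noisy_gt[1::2, 1::2, 2]  # B
    return bayer + rng.normal(0.0, noise_sigma, bayer.shape)   # stage 2

def gaussian_nll(mean, log_var, target):
    """Per-pixel Gaussian NLL: predicting a variance alongside the mean lets
    the model absorb label uncertainty instead of overfitting artifacts.
    (A simplified stand-in for the ELBO over conjugate-prior parameters.)
    """
    return 0.5 * (log_var + (target - mean) ** 2 / np.exp(log_var)).mean()

gt = np.random.default_rng(1).random((8, 8, 3))
raw = degrade(gt)
```

Training a network on `(raw, gt)` pairs with such a distributional loss mirrors the abstract's setup: the degraded input conditions the predicted distribution, and uncertain labels cost less when the predicted variance rises.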