Rethinking the Pipeline of Demosaicing, Denoising and Super-Resolution
Incomplete color sampling, noise degradation, and limited resolution are the
three key problems that are unavoidable in modern camera systems. Demosaicing
(DM), denoising (DN), and super-resolution (SR) are core components in a
digital image processing pipeline to overcome the three problems above,
respectively. Although each of these problems has been studied actively, the
mixture problem of DM, DN, and SR, which has higher practical value, has
received far less attention. Such a mixture problem is usually solved by a sequential
solution (applying each method independently in a fixed order: DM → DN →
SR), or is simply tackled by an end-to-end network without enough
analysis into interactions among tasks, resulting in an undesired performance
drop in the final image quality. In this paper, we rethink the mixture problem
from a holistic perspective and propose a new image processing pipeline: DN →
SR → DM. Extensive experiments show that simply modifying the usual
sequential solution by leveraging our proposed pipeline could enhance the image
quality by a large margin. We further adopt the proposed pipeline into an
end-to-end network, and present Trinity Enhancement Network (TENet).
Quantitative and qualitative experiments demonstrate the superiority of our
TENet to the state-of-the-art. Besides, we notice the literature lacks a full
color sampled dataset. To this end, we contribute a new high-quality full color
sampled real-world dataset, namely PixelShift200. Our experiments show the
benefit of the proposed PixelShift200 dataset for raw image processing.
Comment: Code is available at: https://github.com/guochengqian/TENe
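The contrast between the two orderings can be sketched as function composition. The stage implementations below are deliberately toy placeholders (a real pipeline would use learned DM/DN/SR models or classical algorithms); only the ordering is the point, and all names are illustrative, not the paper's code:

```python
import numpy as np

def denoise(x):
    # toy DN: 3-tap moving average along the last axis
    return (np.roll(x, 1, -1) + x + np.roll(x, -1, -1)) / 3.0

def super_resolve(x, scale=2):
    # toy SR: nearest-neighbour upsampling
    return np.repeat(np.repeat(x, scale, axis=0), scale, axis=1)

def demosaic(x):
    # toy DM: identity placeholder for Bayer-to-RGB interpolation
    return x

def sequential_dm_dn_sr(raw):
    # conventional order: DM -> DN -> SR
    return super_resolve(denoise(demosaic(raw)))

def proposed_dn_sr_dm(raw):
    # proposed order: DN -> SR -> DM
    return demosaic(super_resolve(denoise(raw)))

raw = np.random.rand(8, 8)
print(proposed_dn_sr_dm(raw).shape)  # (16, 16)
```

Both compositions map an 8×8 raw mosaic to a 16×16 output; the paper's claim is that the reordering alone, with real components, already improves image quality.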
Joint Demosaicking and Denoising in the Wild: The Case of Training Under Ground Truth Uncertainty
Image demosaicking and denoising are the two key fundamental steps in digital
camera pipelines, aiming to reconstruct clean color images from noisy luminance
readings. In this paper, we propose and study Wild-JDD, a novel learning
framework for joint demosaicking and denoising in the wild. In contrast to
previous works which generally assume the ground truth of training data is a
perfect reflection of the reality, we consider here the more common imperfect
case of ground truth uncertainty in the wild. We first illustrate its
manifestation as various kinds of artifacts including zipper effect, color
moire and residual noise. Then we formulate a two-stage data degradation
process to capture such ground truth uncertainty, where a conjugate prior
distribution is imposed upon a base distribution. After that, we derive an
evidence lower bound (ELBO) loss to train a neural network that approximates
the parameters of the conjugate prior distribution conditioned on the degraded
input. Finally, to further enhance the performance for out-of-distribution
input, we design a simple but effective fine-tuning strategy by taking the
input as a weakly informative prior. Taking into account ground truth
uncertainty, Wild-JDD enjoys good interpretability during optimization.
Extensive experiments validate that it outperforms state-of-the-art schemes on
joint demosaicking and denoising tasks on both synthetic and realistic raw
datasets.
Comment: Accepted by AAAI202
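One common way to make "a conjugate prior imposed upon a base distribution" concrete is a Normal-Inverse-Gamma prior over a Gaussian's mean and variance, trained by minimising the negative log marginal likelihood. The sketch below uses that evidential-style parameterisation as an assumption; it is not necessarily the exact loss of the paper:

```python
import math

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log marginal likelihood of a Normal-Inverse-Gamma prior
    (gamma, nu, alpha, beta) over a Gaussian's mean and variance,
    evaluated at an observation y."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * math.log(math.pi / nu)
            - alpha * math.log(omega)
            + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
            + math.lgamma(alpha) - math.lgamma(alpha + 0.5))

# a prediction centred on the target should score a lower loss than one far away
cheap = nig_nll(y=0.5, gamma=0.5, nu=1.0, alpha=1.5, beta=0.1)
costly = nig_nll(y=0.5, gamma=2.0, nu=1.0, alpha=1.5, beta=0.1)
```

In such a setup a network would output (gamma, nu, alpha, beta) per pixel conditioned on the degraded input, so ground-truth uncertainty is absorbed by the prior's spread rather than forcing a single clean target.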
Fully Trainable and Interpretable Non-Local Sparse Models for Image Restoration
Non-local self-similarity and sparsity principles have proven to be powerful
priors for natural image modeling. We propose a novel differentiable relaxation
of joint sparsity that exploits both principles and leads to a general
framework for image restoration which is (1) trainable end to end, (2) fully
interpretable, and (3) much more compact than competing deep learning
architectures. We apply this approach to denoising, JPEG deblocking, and
demosaicking, and show that, with as few as 100K parameters, its performance on
several standard benchmarks is on par or better than state-of-the-art methods
that may have an order of magnitude or more parameters.
Comment: ECCV 202
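The joint-sparsity principle in such models is typically realised through the proximal operator of a group (ℓ2,1) norm, which shrinks the codes of a stack of similar patches jointly so they share a common support. A minimal NumPy sketch of that row-wise shrinkage (function name and shapes are illustrative, not the paper's API):

```python
import numpy as np

def group_soft_threshold(Z, lam, eps=1e-8):
    """Proximal operator of the group (l2,1) norm: each row of Z holds
    codes for one atom across a group of similar patches, and is scaled
    down jointly; rows with small norm are zeroed entirely."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / (norms + eps), 0.0)
    return Z * scale

Z = np.array([[3.0, 4.0],    # row norm 5 -> shrunk by factor 0.8
              [0.1, 0.1]])   # row norm ~0.14 -> zeroed
out = group_soft_threshold(Z, lam=1.0)
```

Because the operator is differentiable almost everywhere, unrolled iterations of it can be trained end to end, which is what keeps such architectures both compact and interpretable.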
Deep Structured Layers for Instance-Level Optimization in 2D and 3D Vision
The approach we present in this thesis is that of integrating optimization problems
as layers in deep neural networks. Optimization-based modeling provides an additional set of tools enabling the design of powerful neural networks for a wide
range of computer vision tasks. This thesis presents formulations and experiments
for vision tasks ranging from image reconstruction to 3D reconstruction.
We first propose an unrolled optimization method with implicit regularization
properties for reconstructing images from noisy camera readings. The method resembles an unrolled majorization-minimization framework with convolutional neural networks acting as regularizers. We report state-of-the-art performance in image
reconstruction on both noisy and noise-free evaluation setups across many datasets.
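An unrolled majorization-minimization scheme of this kind alternates a majorized data-fidelity step with a learned regularization step. The toy sketch below uses a fixed soft-shrinkage where the thesis uses a CNN, and an identity forward operator; all names are illustrative assumptions:

```python
import numpy as np

def unrolled_mm(y, A, regularizer, n_iters=5, step=0.5):
    """Unrolled MM-style reconstruction: alternate a gradient step on the
    data term ||A x - y||^2 with a regularizer (a CNN in the real method,
    a fixed shrinkage here)."""
    x = A.T @ y  # crude initialisation from the measurements
    for _ in range(n_iters):
        x = x - step * (A.T @ (A @ x - y))  # majorized data-fidelity step
        x = regularizer(x)                  # learned prior step
    return x

# soft-shrinkage stands in for the convolutional regularizer
shrink = lambda x: np.sign(x) * np.maximum(np.abs(x) - 0.01, 0.0)
A = np.eye(4)
y = np.array([1.0, 0.0, -1.0, 0.5])
x_hat = unrolled_mm(y, A, shrink)
```

With the identity operator the iterates settle near the measurements, offset only by the shrinkage; training replaces the hand-set regularizer and step size with learned ones.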
We further focus on the task of monocular 3D reconstruction of articulated objects using video self-supervision. The proposed method uses a structured layer for
accurate object deformation that controls a 3D surface by displacing a small number
of learnable handles. While relying on a small set of training data per category for
self-supervision, the method obtains state-of-the-art reconstruction accuracy with
diverse shapes and viewpoints for multiple articulated objects.
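A handle-based deformation layer of the kind described can be sketched as a distance-weighted blend of handle displacements applied to every surface vertex. The RBF weighting below is a simple stand-in for the learned structured layer, and all names are illustrative:

```python
import numpy as np

def handle_deform(verts, handles, displacements, sigma=0.5):
    """Deform a surface with a few control handles: each vertex moves by
    a normalised RBF-weighted blend of the handle displacements."""
    # pairwise vertex-to-handle distances, shape (n_verts, n_handles)
    d = np.linalg.norm(verts[:, None, :] - handles[None, :, :], axis=-1)
    w = np.exp(-(d / sigma) ** 2)
    w = w / w.sum(axis=1, keepdims=True)  # weights sum to 1 per vertex
    return verts + w @ displacements

rng = np.random.default_rng(0)
verts = rng.random((10, 3))
handles = rng.random((2, 3))
still = handle_deform(verts, handles, np.zeros((2, 3)))  # zero motion
moved = handle_deform(verts, handles, np.array([[0.2, 0.0, 0.0],
                                                [0.0, 0.1, 0.0]]))
```

Because the surface is controlled by only a handful of handle displacements, the deformation stays low-dimensional and well-behaved, which is what makes it learnable from video self-supervision.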
We finally address the shortcomings of the previous method, which revolve
around regressing the camera pose using multiple hypotheses. We propose a method
that recovers a 3D shape from a 2D image by relying solely on 3D-2D correspondences regressed from a convolutional neural network. These correspondences are
used in conjunction with an optimization problem to estimate per sample the camera pose and deformation. We quantitatively show the effectiveness of the proposed
method on self-supervised 3D reconstruction on multiple categories without the need for multiple hypotheses.
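The core of such a correspondence-driven approach is an optimization that recovers the camera from regressed 3D-2D matches. A simplified sketch under a weak-perspective camera assumption (a least-squares stand-in for the thesis's per-sample pose-and-deformation optimization; names are illustrative):

```python
import numpy as np

def weak_perspective_pose(X3d, x2d):
    """Fit a 2x3 weak-perspective camera M minimising
    ||(X3d - mean) @ M.T - (x2d - mean)||^2 by least squares;
    centring absorbs the 2D translation."""
    Xc = X3d - X3d.mean(axis=0)
    xc = x2d - x2d.mean(axis=0)
    M, *_ = np.linalg.lstsq(Xc, xc, rcond=None)
    return M.T

# synthetic check: project known 3D points with a known camera, recover it
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))
M_true = np.array([[0.8, 0.0, 0.0],
                   [0.0, 0.8, 0.0]])  # scaled orthographic camera
x = X @ M_true.T
M_est = weak_perspective_pose(X, x)
```

Solving pose in closed form per sample is what removes the need for multiple regressed hypotheses: the correspondences alone determine the camera.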