57 research outputs found
Color reproduction from noisy CFA data of single sensor digital cameras
2007-2008 > Academic research: refereed > Publication in refereed journal > Version of Record, Published
Color Filter Array Image Analysis for Joint Denoising and Demosaicking
Noise is among the worst artifacts that affect the perceptual quality of the output from a digital camera. While cost-effective and popular, single-sensor camera architectures are not adept at noise suppression. In such architectures, data are typically obtained via a spatial subsampling procedure implemented as a color filter array (CFA), a physical construction whereby each pixel location measures the intensity of light corresponding to only a single color. Aside from undersampling, observations made under noisy conditions typically deteriorate the estimates of the full-color image in the reconstruction process, commonly referred to in the literature as demosaicking or CFA interpolation. A typical CFA scheme involves the canonical color triple (red, green, blue), and the most prevalent arrangement is called the Bayer pattern.
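The CFA subsampling described above is easy to make concrete; a minimal NumPy sketch, assuming an RGGB Bayer tile layout (one common variant):

```python
import numpy as np

def bayer_mosaic(rgb):
    """Subsample an RGB image with an RGGB Bayer CFA.

    Each output pixel keeps only one color channel, mimicking a
    single-sensor camera's raw measurement. Illustrative sketch only;
    real sensors additionally involve gain, black level, etc.
    """
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G on the red rows
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G on the blue rows
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return cfa
```

Two thirds of the color information is discarded at capture time, which is exactly what demosaicking must later reconstruct.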
As the general trend of increased image resolution continues due to the prevalence of multimedia, the importance of interpolation is de-emphasized, while concerns for computational efficiency, noise, and color fidelity play an increasingly prominent role in the decisions of a digital camera architect. For instance, interpolation artifacts become less noticeable as the size of the pixel shrinks with respect to the image features, while the decreased dimensionality of the pixels on complementary metal oxide semiconductor (CMOS) and charge-coupled device (CCD) sensors makes them more susceptible to noise. Photon-limited influences are also evident in low-light photography, ranging from specialty cameras for precision measurement to indoor consumer photography.
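The photon-limited behavior mentioned here follows Poisson (shot-noise) statistics, so SNR grows only as the square root of the collected photon count; smaller pixels collect fewer photons and are therefore noisier. A small simulation (the photon counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_shot_noise(mean_photons, n=100_000):
    """Empirical SNR of Poisson-distributed photon counts."""
    counts = rng.poisson(mean_photons, size=n)
    return counts.mean() / counts.std()

# SNR roughly doubles when photon capacity quadruples:
snr_small = simulate_shot_noise(100)  # ~10 (small pixel)
snr_large = simulate_shot_noise(400)  # ~20 (4x the photon capacity)
```

This sqrt scaling is why shrinking pixel pitch trades resolution against noise.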
Sensor data, which can be interpreted as subsampled or incomplete image data, undergo a series of image processing procedures in order to produce a digital photograph. However, these same steps may amplify noise introduced during image acquisition. Specifically, the demosaicking step is a major source of conflict between the image processing pipeline and image sensor noise characterization because the interpolation methods give high priority to preserving the sharpness of edges and textures.
In the presence of noise, noise patterns may form false edge structures; the distortions at the output are therefore typically correlated with the signal in a complicated manner that makes noise modelling mathematically intractable. Thus, it is natural to conceive of a rigorous tradeoff between demosaicking and image denoising.
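To make the tension concrete, here is a minimal sketch of bilinear demosaicking for the green channel (a textbook baseline, not this paper's method): each missing green value is the average of its four green neighbors, so any noise in those samples is spread directly into the interpolated sites, and a noise spike can masquerade as an edge.

```python
import numpy as np

def bilinear_green(cfa):
    """Estimate the full green channel from an RGGB Bayer mosaic by
    averaging the four green neighbors at each red/blue site.
    Minimal bilinear sketch; reflect padding handles the borders.
    """
    h, w = cfa.shape
    green = cfa.astype(float).copy()
    pad = np.pad(green, 1, mode='reflect')  # snapshot of raw samples
    for i in range(h):
        for j in range(w):
            at_green = (i % 2) != (j % 2)  # G sites in an RGGB tile
            if not at_green:
                green[i, j] = (pad[i, j + 1] + pad[i + 2, j + 1] +
                               pad[i + 1, j] + pad[i + 1, j + 2]) / 4.0
    return green
```

Averaging suppresses some noise but blurs true edges, which is precisely why edge-adaptive interpolators exist, and why they in turn risk sharpening noise into false structure.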
Joint Demosaicking and Denoising in the Wild: The Case of Training Under Ground Truth Uncertainty
Image demosaicking and denoising are two key fundamental steps in digital camera pipelines, aiming to reconstruct clean color images from noisy luminance readings. In this paper, we propose and study Wild-JDD, a novel learning framework for joint demosaicking and denoising in the wild. In contrast to previous works, which generally assume that the ground truth of the training data is a perfect reflection of reality, we consider the more common imperfect case of ground truth uncertainty in the wild. We first illustrate its manifestation as various kinds of artifacts, including the zipper effect, color moiré, and residual noise. We then formulate a two-stage data degradation process to capture such ground truth uncertainty, in which a conjugate prior distribution is imposed upon a base distribution. From this we derive an evidence lower bound (ELBO) loss to train a neural network that approximates the parameters of the conjugate prior distribution conditioned on the degraded input. Finally, to further enhance performance on out-of-distribution input, we design a simple but effective fine-tuning strategy that takes the input as a weakly informative prior. By taking ground truth uncertainty into account, Wild-JDD enjoys good interpretability during optimization. Extensive experiments validate that it outperforms state-of-the-art schemes on joint demosaicking and denoising tasks on both synthetic and realistic raw datasets.

Comment: Accepted by AAAI202
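The two-stage degradation idea can be sketched as follows; the function names, noise model, and noise levels here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def degrade(clean, gt_sigma=0.01, sensor_sigma=0.05):
    """Two-stage degradation sketch in the spirit of Wild-JDD.

    Stage 1 perturbs the "ground truth" itself to model GT
    uncertainty (residual noise, demosaicking artifacts); stage 2
    Bayer-subsamples it and adds sensor noise to form the input.
    """
    # Stage 1: imperfect ground truth (ground truth uncertainty)
    gt = clean + rng.normal(0.0, gt_sigma, clean.shape)
    # Stage 2: RGGB subsampling plus sensor noise -> network input
    h, w, _ = clean.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = gt[0::2, 0::2, 0]
    mosaic[0::2, 1::2] = gt[0::2, 1::2, 1]
    mosaic[1::2, 0::2] = gt[1::2, 0::2, 1]
    mosaic[1::2, 1::2] = gt[1::2, 1::2, 2]
    noisy = mosaic + rng.normal(0.0, sensor_sigma, mosaic.shape)
    return gt, noisy  # training pair: imperfect target, degraded input
```

The point of the two stages is that the supervision target itself is treated as a noisy draw, rather than as a perfect reference.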
Inheriting Bayer's Legacy-Joint Remosaicing and Denoising for Quad Bayer Image Sensor
Pixel-binning-based Quad sensors have emerged as a promising solution to overcome the hardware limitations of compact cameras in low-light imaging. However, binning results in lower spatial resolution and non-Bayer CFA artifacts. To address these challenges, we propose a dual-head joint remosaicing and denoising network (DJRD), which enables the conversion of noisy Quad Bayer data into a standard noise-free Bayer pattern without any resolution loss. DJRD includes a newly designed Quad Bayer remosaicing (QB-Re) block and integrated denoising modules based on a Swin Transformer and a multi-scale wavelet transform. The QB-Re block constructs its convolution kernel based on the CFA pattern to achieve a periodic color distribution in the receptive field, which is used to extract exact spectral information and reduce color misalignment. The integrated Swin Transformer and multi-scale wavelet transform capture non-local dependencies as well as frequency and location information to effectively reduce practical noise. By identifying challenging patches using moiré and zipper detection metrics, we enable the model to concentrate on difficult patches during the post-training phase, which enhances its performance in hard cases. Our proposed model outperforms competing models by approximately 3 dB without additional hardware or software complexity.
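For intuition on what remosaicing must accomplish, here is a naive fixed pixel-swap baseline (an illustrative assumption, not DJRD's learned QB-Re block): within each 4x4 tile, a Quad Bayer mosaic (2x2 blocks of R, G, G, B) is rearranged into a standard RGGB Bayer mosaic by exchanging a fixed set of pixel pairs.

```python
import numpy as np

# Pixel pairs to exchange inside each 4x4 tile so that the Quad
# Bayer layout (RRGG / RRGG / GGBB / GGBB) becomes standard RGGB
# Bayer (RGRG / GBGB / RGRG / GBGB).
SWAPS = [((0, 1), (0, 2)), ((1, 0), (2, 0)), ((1, 1), (2, 2)),
         ((1, 3), (2, 3)), ((3, 1), (3, 2))]

def quad_to_bayer(quad):
    """Naive remosaic: per-tile pixel swaps, no interpolation."""
    h, w = quad.shape
    assert h % 4 == 0 and w % 4 == 0
    out = quad.copy()
    for i in range(0, h, 4):
        for j in range(0, w, 4):
            for (a, b), (c, d) in SWAPS:
                out[i + a, j + b], out[i + c, j + d] = \
                    out[i + c, j + d], out[i + a, j + b]
    return out
```

Because each swapped pixel lands up to two sites away from where it was measured, this baseline introduces exactly the spatial color misalignment that a learned remosaicing block such as QB-Re is designed to avoid.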
- …