15 research outputs found
Universal Demosaicking of Color Filter Arrays
A large number of color filter arrays (CFAs), periodic or aperiodic, have been proposed. To reconstruct images from all of these CFAs and compare their imaging quality, a universal demosaicking method is needed. This paper proposes a new universal demosaicking method based on inter-pixel chrominance capture and optimal demosaicking transformation. It skips the commonly used step of estimating the luminance component at each pixel and thus avoids the associated estimation error. Instead, we directly use the acquired CFA color intensity at each pixel as an input component. Two independent chrominance components are estimated at each pixel from the inter-pixel chrominance in a window, which is captured as the difference of CFA color values between the pixel of interest and its neighbors. Two mechanisms are employed for accurate estimation: distance-related and edge-sensing weighting to reflect the confidence levels of the inter-pixel chrominance components, and pseudoinverse-based estimation from the components in a window. Then, from the acquired CFA color component and the two estimated chrominance components, the three primary colors are reconstructed by a linear color transform optimized for the least transform error. Our experiments show that the proposed method significantly outperforms other published universal demosaicking methods. Funding: National Key Basic Research Project of China (973 Program) [2015CB352303, 2011CB302400]; National Natural Science Foundation (NSF) of China [61071156, 61671027].
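The window-based estimation described above can be sketched in a few lines. The function below is a hypothetical illustration, not the paper's exact scheme: it pools inter-pixel CFA differences in a window around the pixel of interest, combining a distance-related weight with an edge-sensing weight (the two mechanisms the abstract names); the pseudoinverse step and the optimal color transform are omitted.

```python
import numpy as np

def chrominance_estimate(cfa, mask_c, y, x, radius=2):
    """Estimate the chrominance (c - a) at pixel (y, x), where a is the color
    the CFA acquires there and c is a missing color sampled at the neighbors
    flagged by mask_c. Inter-pixel differences are pooled with two weights:
    a distance-related weight and an edge-sensing weight."""
    h, w = cfa.shape
    num = den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w) or not mask_c[ny, nx]:
                continue
            diff = cfa[ny, nx] - cfa[y, x]            # inter-pixel chrominance sample
            w_dist = 1.0 / (1.0 + np.hypot(dy, dx))   # distance-related weight
            w_edge = 1.0 / (1.0 + abs(diff))          # edge-sensing weight
            num += w_dist * w_edge * diff
            den += w_dist * w_edge
    return num / den if den else 0.0
```

On a flat (constant-color) scene the pooled differences are all equal, so the estimate is exact regardless of the weights; on edges, the edge-sensing weight suppresses samples taken across the discontinuity.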
Model-based demosaicking for acquisitions by a RGBW color filter array
Microsatellites and drones are often equipped with digital cameras whose
sensing system is based on color filter arrays (CFAs), which define a pattern
of color filters overlaid on the focal plane. Recent commercial cameras have
started implementing RGBW patterns, which include some filters with a wideband
spectral response together with the more classical RGB ones. This allows for
additional light energy to be captured by the relevant pixels and increases the
overall SNR of the acquisition. Demosaicking refers to reconstructing a
multi-spectral image from the raw image, recovering the full color
components at every pixel. However, this operation is often tailored to the
most widespread patterns, such as the Bayer pattern. Consequently, less common
patterns that are still employed in commercial cameras are often neglected. In
this work, we present a generalized framework to represent the image formation
model of such cameras. This model is then exploited by our proposed
demosaicking algorithm to reconstruct the datacube of interest with a Bayesian
approach, using a total variation regularizer as prior. Some preliminary
experimental results are also presented, which apply to the reconstruction of
acquisitions from various RGBW cameras.
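The Bayesian reconstruction idea above can be sketched as a small gradient-descent solver. This is a minimal illustration under stated assumptions, not the paper's algorithm: the sampling operator is encoded as per-pixel channel weights (one-hot rows for R/G/B pixels; a wideband W pixel would get, e.g., equal weights on all three channels), the total variation prior is smoothed (Charbonnier) so plain gradient descent applies, and all parameter values are illustrative.

```python
import numpy as np

def demosaick_tv(y, masks, lam=0.02, step=0.2, iters=2000, eps=0.05):
    """Minimise 0.5*||A(x) - y||^2 + lam * TV_eps(x) by gradient descent,
    where A is the per-pixel CFA sampling operator encoded by `masks`
    (shape: channels x H x W) and TV_eps is a smoothed total variation."""
    x = np.stack([y] * masks.shape[0])          # crude initialisation
    for _ in range(iters):
        resid = (masks * x).sum(axis=0) - y     # data-fidelity residual
        g = masks * resid[None]                 # adjoint of the sampling op
        for c in range(x.shape[0]):             # smoothed-TV gradient per channel
            dx = np.diff(x[c], axis=1, append=x[c][:, -1:])
            dy = np.diff(x[c], axis=0, append=x[c][-1:, :])
            mag = np.sqrt(dx**2 + dy**2 + eps**2)
            px, py = dx / mag, dy / mag
            gtv = -px - py                      # negative divergence of (px, py)
            gtv[:, 1:] += px[:, :-1]
            gtv[1:, :] += py[:-1, :]
            g[c] += lam * gtv
        x -= step * g
    return x
```

For a constant scene the unique minimiser is the exact constant image: the data term pins the sampled channel at each pixel and the TV prior propagates those values to the unsampled channels.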
Robust Joint Image Reconstruction from Color and Monochrome Cameras
Recent years have seen an explosion in the number of camera modules integrated into individual consumer mobile devices, including configurations that contain multiple different types of image sensors. One popular configuration combines an RGB camera for color imaging with a monochrome camera that has improved performance in low-light settings, as well as some sensitivity in the infrared. In this work we introduce a method to combine simultaneously captured images from such a two-camera stereo system to generate a high-quality, noise-reduced color image. To do so, pixel-to-pixel alignment has to be constructed between the two captured monochrome and color images, which, however, is prone to artifacts due to parallax. The joint image reconstruction is made robust by introducing a novel artifact-robust optimization formulation. We provide extensive experimental results based on the two-camera configuration of a commercially available cell phone.
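The robustness-to-parallax idea can be conveyed with a toy fusion rule. The sketch below is a crude stand-in for the paper's artifact-robust optimization, under the assumption that the monochrome frame has already been warped onto the color frame: pixels where the two disagree strongly (e.g., misaligned due to parallax) are down-weighted, so the color image passes through untouched there. The function name, luminance proxy, and `sigma` parameter are all hypothetical.

```python
import numpy as np

def fuse_mono_color(color, mono, sigma=0.1):
    """Blend a pre-aligned monochrome frame into the colour image's luminance,
    rejecting pixels where the two disagree (a toy robustness mechanism)."""
    lum = color.mean(axis=-1)                    # simple luminance proxy
    w = np.exp(-((mono - lum) / sigma) ** 2)     # agreement-based confidence
    fused_lum = w * mono + (1 - w) * lum         # robust luminance blend
    scale = fused_lum / np.maximum(lum, 1e-6)    # transfer chroma by scaling
    return color * scale[..., None]
```

Where mono and color agree, the cleaner monochrome luminance dominates; where they disagree grossly, the weight collapses to zero and no monochrome detail leaks in.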
Efficient Unified Demosaicing for Bayer and Non-Bayer Patterned Image Sensors
As the physical size of recent CMOS image sensors (CIS) gets smaller, the
latest mobile cameras are adopting unique non-Bayer color filter array (CFA)
patterns (e.g., Quad, Nona, QxQ), which consist of homogeneous color units with
adjacent pixels. These non-Bayer sensors are superior to conventional Bayer CFA
thanks to their changeable pixel-bin sizes for different light conditions but
may introduce visual artifacts during demosaicing due to their inherent pixel
pattern structures and sensor hardware characteristics. Previous demosaicing
methods have primarily focused on Bayer CFA, necessitating distinct
reconstruction methods for non-Bayer patterned CIS with various CFA modes under
different lighting conditions. In this work, we propose an efficient unified
demosaicing method that can be applied to both conventional Bayer RAW and
various non-Bayer CFAs' RAW data in different operation modes. Our Knowledge
Learning-based demosaicing model for Adaptive Patterns, namely KLAP, adapts
only 1% of its filters (CFA-adaptive key filters) to each CFA, yet
still manages to demosaic all the CFAs effectively, yielding comparable
performance to the large-scale models. Furthermore, by employing meta-learning
during inference (KLAP-M), our model is able to eliminate unknown
sensor-generic artifacts in real RAW data, effectively bridging the gap between
synthetic images and real sensor RAW. Our KLAP and KLAP-M methods achieved
state-of-the-art demosaicing performance in both synthetic and real RAW data of
Bayer and non-Bayer CFAs.
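The pattern geometry the abstract refers to, homogeneous color units of adjacent pixels, is easy to make concrete. The sketch below generates the channel-index map of a Bayer-derived CFA for illustration only; the unit sizes 1, 2, and 3 correspond to classic Bayer, Quad, and Nona respectively (QxQ generalizes further).

```python
import numpy as np

BAYER = np.array([[0, 1],
                  [1, 2]])   # channel indices: 0=R, 1=G, 2=B (RGGB unit)

def cfa_pattern(unit_size, h, w):
    """Build the channel-index map of a Bayer-derived CFA: each entry of the
    2x2 Bayer unit is expanded into a unit_size x unit_size same-colour block,
    then the result is tiled to cover an h x w sensor."""
    unit = np.kron(BAYER, np.ones((unit_size, unit_size), dtype=int))
    reps = (-(-h // unit.shape[0]), -(-w // unit.shape[1]))  # ceil division
    return np.tile(unit, reps)[:h, :w]
```

The same-color blocks are what enable pixel binning in low light, and also what makes these patterns harder to demosaic with Bayer-specific methods.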
Recent Advances in Image Restoration with Applications to Real World Problems
In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications, on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Issues such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.
Joint demosaicing and fusion of multiresolution coded acquisitions: A unified image formation and reconstruction method
Novel optical imaging devices allow for hybrid acquisition modalities such as
compressed acquisitions with locally different spatial and spectral resolutions
captured by a single focal plane array. In this work, we propose to model the
capturing system of a multiresolution coded acquisition (MRCA) in a unified
framework, which natively includes conventional systems such as those based on
spectral/color filter arrays, compressed coded apertures, and multiresolution
sensing. We also propose a model-based image reconstruction algorithm
performing a joint demosaicing and fusion (JoDeFu) of any acquisition modeled
in the MRCA framework. The JoDeFu reconstruction algorithm solves an inverse
problem with a proximal splitting technique and is able to reconstruct an
uncompressed image datacube at the highest available spatial and spectral
resolution. An implementation of the code is available at
https://github.com/danaroth83/jodefu.
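The proximal-splitting template the JoDeFu algorithm builds on can be shown on a toy problem. The sketch below is a generic proximal-gradient (ISTA-style) iteration for a plain l1-regularized least-squares objective, which is an assumption made for illustration; the paper's actual operator and regularizer are different, but the solve pattern (gradient step on the data term, proximal step on the prior) is the same.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad(A, y, lam=0.1, step=None, iters=500):
    """Proximal-gradient solver for min_x 0.5*||A x - y||^2 + lam*||x||_1:
    alternate a gradient step on the smooth data term with the prox of the
    nonsmooth prior."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x
```

In a demosaicing/fusion setting, A would be the MRCA sampling operator and the prox would belong to the chosen image prior rather than a plain l1 norm.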