95 research outputs found

    The effect of the color filter array layout choice on state-of-the-art demosaicing

    Interpolation from a Color Filter Array (CFA) is the most common method for obtaining full color image data. Its success relies on the smart combination of a CFA and a demosaicing algorithm. Demosaicing, on the one hand, has been extensively studied. Algorithmic development over the past 20 years ranges from simple linear interpolation to modern neural-network-based (NN) approaches that encode the prior knowledge of millions of training images to fill in missing data in an inconspicuous way. CFA design, on the other hand, is less well studied, although it is still recognized to strongly impact demosaicing performance. This is because demosaicing algorithms are typically limited to one particular CFA pattern, impeding straightforward CFA comparison. This is starting to change with newer classes of demosaicing algorithms that may be considered generic or CFA-agnostic. In this study, by comparing the performance of two state-of-the-art generic algorithms, we evaluate the potential of modern CFA-demosaicing. We test the hypothesis that, with the increasing power of NN-based demosaicing, the influence of optimal CFA design on system performance decreases. This hypothesis is supported by the experimental results. Such a finding would herald the possibility of relaxing CFA requirements, providing more freedom in the CFA design choice and producing high-quality cameras.
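
    The "simple linear interpolation" baseline mentioned in this abstract can be sketched compactly: each color plane keeps only its measured CFA samples and the gaps are filled by averaging neighbors. A minimal NumPy sketch, assuming an RGGB Bayer layout (the layout and function names are illustrative, not from the paper):

```python
import numpy as np

def conv3(img, k):
    # 3x3 "same" convolution with zero padding (avoids a SciPy dependency)
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def bilinear_demosaic(cfa):
    # Bilinear interpolation from a Bayer CFA; the RGGB layout is an assumption.
    h, w = cfa.shape
    masks = {c: np.zeros((h, w)) for c in "rgb"}
    masks["r"][0::2, 0::2] = 1
    masks["g"][0::2, 1::2] = 1
    masks["g"][1::2, 0::2] = 1
    masks["b"][1::2, 1::2] = 1
    k_full = np.array([[.25, .5, .25], [.5, 1., .5], [.25, .5, .25]])   # R, B
    k_cross = np.array([[0., .25, 0.], [.25, 1., .25], [0., .25, 0.]])  # G
    kernels = {"r": k_full, "g": k_cross, "b": k_full}
    # Dividing by the convolved mask averages only the measured samples,
    # which also handles image borders gracefully.
    planes = [conv3(cfa * masks[c], kernels[c]) / conv3(masks[c], kernels[c])
              for c in "rgb"]
    return np.dstack(planes)
```

    Note that this baseline is hard-wired to one CFA pattern, which illustrates exactly why CFA comparison has historically been difficult and why generic, CFA-agnostic demosaicing is needed.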

    Efficient Unified Demosaicing for Bayer and Non-Bayer Patterned Image Sensors

    As the physical size of recent CMOS image sensors (CIS) gets smaller, the latest mobile cameras are adopting unique non-Bayer color filter array (CFA) patterns (e.g., Quad, Nona, QxQ), which consist of homogeneous color units of adjacent pixels. These non-Bayer sensors are superior to conventional Bayer CFAs thanks to their changeable pixel-bin sizes for different light conditions, but they may introduce visual artifacts during demosaicing due to their inherent pixel pattern structures and sensor hardware characteristics. Previous demosaicing methods have primarily focused on Bayer CFAs, necessitating distinct reconstruction methods for non-Bayer patterned CIS with various CFA modes under different lighting conditions. In this work, we propose an efficient unified demosaicing method that can be applied to both conventional Bayer RAW data and the RAW data of various non-Bayer CFAs in different operation modes. Our Knowledge Learning-based demosaicing model for Adaptive Patterns, namely KLAP, utilizes CFA-adaptive filters for only 1% of the key filters in the network for each CFA, yet still manages to effectively demosaic all the CFAs, yielding performance comparable to large-scale models. Furthermore, by employing meta-learning during inference (KLAP-M), our model is able to eliminate unknown sensor-generic artifacts in real RAW data, effectively bridging the gap between synthetic images and real sensor RAW data. Our KLAP and KLAP-M methods achieve state-of-the-art demosaicing performance on both synthetic and real RAW data of Bayer and non-Bayer CFAs.
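
    The Quad, Nona, and QxQ layouts mentioned above share a simple structure: each color unit of a Bayer-like base pattern is replicated into a homogeneous s x s block of adjacent pixels. A minimal NumPy sketch of generating such masks (the RGGB base layout and the 0/1/2 color coding are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def bayer_mask(h, w):
    # 0 = R, 1 = G, 2 = B; RGGB layout is an assumption for illustration
    return np.tile(np.array([[0, 1], [1, 2]]),
                   ((h + 1) // 2, (w + 1) // 2))[:h, :w]

def binned_cfa_mask(h, w, s):
    # s = 1 -> Bayer, s = 2 -> Quad, s = 3 -> Nona, s = 4 -> QxQ:
    # each color unit becomes a homogeneous s x s block of adjacent pixels,
    # which is what enables pixel binning under low light.
    base = bayer_mask((h + s - 1) // s, (w + s - 1) // s)
    return np.kron(base, np.ones((s, s), dtype=int))[:h, :w]
```

    The fact that one scale parameter spans all these modes is what makes a single unified demosaicing model, rather than one model per pattern, attractive.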

    Multiresolution models in image restoration and reconstruction with medical and other applications


    Universal Demosaicking of Color Filter Arrays

    A large number of color filter arrays (CFAs), periodic or aperiodic, have been proposed. To reconstruct images from all of these different CFAs and compare their imaging quality, a universal demosaicking method is needed. This paper proposes a new universal demosaicking method based on inter-pixel chrominance capture and optimal demosaicking transformation. It skips the commonly used step of estimating the luminance component at each pixel and thus avoids the associated estimation error. Instead, we directly use the acquired CFA color intensity at each pixel as an input component. Two independent chrominance components are estimated at each pixel from the inter-pixel chrominance in a window, which is captured as the difference of CFA color values between the pixel of interest and its neighbors. Two mechanisms are employed for accurate estimation: distance-related and edge-sensing weighting to reflect the confidence levels of the inter-pixel chrominance components, and pseudoinverse-based estimation from the components in a window. Then, from the acquired CFA color component and the two estimated chrominance components, the three primary colors are reconstructed by a linear color transform that is optimized for the least transform error. Our experiments show that the proposed method is much better than other published universal demosaicking methods.
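
    The two weighting mechanisms described above can be illustrated with a toy sketch: each neighbor in the window contributes according to its spatial distance and to an edge indicator derived from the CFA value difference, so that neighbors across an edge receive low confidence. The specific formula below (inverse of distance times gradient) is an illustrative assumption, not the paper's exact estimator:

```python
import numpy as np

def chrominance_weights(cfa, y, x, radius=2, eps=1e-6):
    # Combine distance-related and edge-sensing weights for the
    # neighbors of (y, x) in a (2*radius+1) x (2*radius+1) window.
    h, w = cfa.shape
    weights = {}
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w):
                continue
            dist = np.hypot(dy, dx)                            # spatial distance
            grad = abs(float(cfa[ny, nx]) - float(cfa[y, x]))  # edge indicator
            weights[(ny, nx)] = 1.0 / (dist * (grad + eps) + eps)
    return weights
```

    Because the weights are computed from the raw CFA values alone, the scheme needs no knowledge of the CFA layout, which is what makes this style of estimation universal.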

    Snapshot Multispectral Imaging Using a Diffractive Optical Network

    Multispectral imaging has been used for numerous applications in, e.g., environmental monitoring, aerospace, defense, and biomedicine. Here, we present a diffractive optical network-based multispectral imaging system trained using deep learning to create a virtual spectral filter array at the output image field-of-view. This diffractive multispectral imager performs spatially-coherent imaging over a large spectrum and, at the same time, routes a pre-determined set of spectral channels onto an array of pixels at the output plane, converting a monochrome focal plane array or image sensor into a multispectral imaging device without any spectral filters or image recovery algorithms. Furthermore, the spectral responsivity of this diffractive multispectral imager is not sensitive to input polarization states. Through numerical simulations, we present different diffractive network designs that achieve snapshot multispectral imaging with 4, 9 and 16 unique spectral bands within the visible spectrum, based on passive spatially-structured diffractive surfaces, with a compact design that axially spans ~72 times the mean wavelength of the spectral band of interest. Moreover, we experimentally demonstrate a diffractive multispectral imager based on a 3D-printed diffractive network that creates at its output image plane a spatially-repeating virtual spectral filter array with 2x2=4 unique bands in the terahertz spectrum. Due to their compact form factor and computation-free, power-efficient and polarization-insensitive forward operation, diffractive multispectral imagers can be transformative for various imaging and sensing applications and can be used at different parts of the electromagnetic spectrum where high-density and wide-area multispectral pixel arrays are not widely available.
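
    Once the diffractive network has routed the spectral channels onto a spatially-repeating n x n virtual filter array, reading out the individual bands reduces to plain sub-sampling of the monochrome sensor image. A minimal sketch (the offset-to-band mapping is an assumption for illustration):

```python
import numpy as np

def demux_virtual_filter_array(sensor, n=2):
    # Split a monochrome sensor image whose repeating n x n virtual
    # spectral filter array encodes n*n bands into per-band sub-images.
    return {(i, j): sensor[i::n, j::n] for i in range(n) for j in range(n)}
```

    This is the "computation-free" aspect of the approach: unlike physical CFAs, no demosaicing or image recovery algorithm is required after readout.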

    Multiresolution image models and estimation techniques
