
    Joint Demosaicing and Denoising with Double Deep Image Priors

    Demosaicing and denoising of RAW images are crucial steps in the processing pipeline of modern digital cameras. As only a third of the color information required to produce a digital image is captured by the camera sensor, demosaicing is inherently ill-posed. The presence of noise further exacerbates this problem. Performing the two steps sequentially may distort the content of the captured RAW images and accumulate errors from one step to the next. Recent deep-neural-network-based approaches have shown the effectiveness of joint demosaicing and denoising in mitigating these challenges. However, such methods typically require a large number of training samples and do not generalize well to different noise types and intensities. In this paper, we propose a novel joint demosaicing and denoising method, dubbed JDD-DoubleDIP, which operates directly on a single RAW image without requiring any training data. We validate the effectiveness of our method on two popular datasets -- Kodak and McMaster -- with various noise types and intensities. The experimental results show that our method consistently outperforms the compared methods in terms of PSNR, SSIM, and qualitative visual quality.
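
    As a concrete illustration of this training-free idea, below is a minimal sketch of fitting a deep image prior to a single noisy mosaicked observation. It assumes PyTorch, an RGGB Bayer layout, and one small CNN in place of the paper's double-DIP architecture; the network shape, loss, and iteration count are illustrative assumptions, not the authors' exact model.

```python
# Sketch: joint demosaicing + denoising with a deep image prior (DIP).
# Assumptions (not from the paper): a single small CNN prior, an RGGB Bayer
# layout, and an L2 data term; JDD-DoubleDIP itself uses two coupled priors.
import torch
import torch.nn as nn

def rggb_mask(h, w, device="cpu"):
    """Binary 3-channel mask keeping only the pixels a Bayer sensor measures."""
    m = torch.zeros(1, 3, h, w, device=device)
    m[0, 0, 0::2, 0::2] = 1  # R
    m[0, 1, 0::2, 1::2] = 1  # G1
    m[0, 1, 1::2, 0::2] = 1  # G2
    m[0, 2, 1::2, 1::2] = 1  # B
    return m

def small_cnn(channels=64):
    """Tiny stand-in for the DIP network."""
    return nn.Sequential(
        nn.Conv2d(32, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
    )

def fit_dip(mosaic_rgb, mask, iters=2000, lr=1e-3):
    """mosaic_rgb: (1, 3, H, W) noisy RAW samples placed in their RGB positions."""
    net = small_cnn().to(mosaic_rgb.device)
    z = torch.randn(1, 32, *mosaic_rgb.shape[-2:], device=mosaic_rgb.device)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        out = net(z)
        # Compare only at measured (mosaicked) positions; early stopping is the
        # usual DIP safeguard against eventually fitting the noise itself.
        loss = ((mask * (out - mosaic_rgb)) ** 2).mean()
        loss.backward()
        opt.step()
    return net(z).detach()  # demosaiced + denoised estimate
```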

    Noise Power Spectrum Scene-Dependency in Simulated Image Capture Systems

    The Noise Power Spectrum (NPS) is a standard measure of image capture system noise. It is traditionally derived from captured uniform luminance patches that are unrepresentative of pictorial scene signals. Many contemporary capture systems apply non-linear, content-aware signal processing, which renders their noise scene-dependent. For scene-dependent systems, measuring the NPS with respect to uniform patch signals fails to accurately characterize: i) the system noise for a given input scene, and ii) the average system noise power in real-world applications. The scene-and-process-dependent NPS (SPD-NPS) framework addresses these limitations by measuring temporally varying system noise with respect to any given input signal. In this paper, we examine the scene-dependency of simulated camera pipelines in depth by deriving SPD-NPSs from fifty test scenes. The pipelines apply either linear or non-linear denoising and sharpening, tuned to optimize output image quality at various opacity levels and exposures. Further, we present the integrated area under the mean of the SPD-NPS curves over a representative scene set as an objective system noise metric, and their relative standard deviation area (RSDA) as a metric for system noise scene-dependency. We close by discussing how these metrics can also be computed using scene-and-process-dependent Modulation Transfer Functions (SPD-MTFs).
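
    For reference, the sketch below estimates a conventional NPS from repeated captures of a patch, the kind of measurement the SPD-NPS framework generalizes to arbitrary scene content. The Hann window, normalization, and radial averaging are common choices assumed here, not the paper's exact procedure.

```python
# Sketch: classical NPS estimate from N repeated captures of the same patch.
# Assumptions (not from the paper): Hann windowing, mean-frame subtraction to
# isolate temporally varying noise, and simple radial averaging of the 2D NPS.
import numpy as np

def nps_2d(frames, pixel_pitch=1.0):
    """frames: (N, H, W) repeated captures; returns the 2D noise power spectrum."""
    frames = np.asarray(frames, dtype=np.float64)
    n, h, w = frames.shape
    noise = frames - frames.mean(axis=0, keepdims=True)  # temporal noise only
    win = np.outer(np.hanning(h), np.hanning(w))
    scale = (pixel_pitch ** 2) / (h * w * (win ** 2).mean())
    spectra = np.abs(np.fft.fftshift(np.fft.fft2(noise * win), axes=(-2, -1))) ** 2
    return scale * spectra.mean(axis=0)

def radial_profile(nps2d):
    """Collapse the 2D NPS to a 1D curve by averaging over rings of equal radius."""
    h, w = nps2d.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=nps2d.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

# Scalar metrics like those in the paper would then be built on such curves,
# e.g. integrating the mean curve over a scene set, and using the spread of
# the per-scene areas as a scene-dependency measure.
```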

    Efficient Unified Demosaicing for Bayer and Non-Bayer Patterned Image Sensors

    As the physical size of recent CMOS image sensors (CIS) gets smaller, the latest mobile cameras are adopting unique non-Bayer color filter array (CFA) patterns (e.g., Quad, Nona, QxQ), which consist of homogeneous color units of adjacent pixels. These non-Bayer sensors are superior to the conventional Bayer CFA thanks to their changeable pixel-bin sizes for different light conditions, but they may introduce visual artifacts during demosaicing due to their inherent pixel pattern structures and sensor hardware characteristics. Previous demosaicing methods have primarily focused on the Bayer CFA, necessitating distinct reconstruction methods for non-Bayer patterned CIS with various CFA modes under different lighting conditions. In this work, we propose an efficient unified demosaicing method that can be applied to both conventional Bayer RAW data and the RAW data of various non-Bayer CFAs in different operation modes. Our Knowledge Learning-based demosaicing model for Adaptive Patterns, namely KLAP, utilizes CFA-adaptive filters for only 1% of the key filters in the network for each CFA, yet still effectively demosaics all the CFAs, yielding performance comparable to large-scale models. Furthermore, by employing meta-learning during inference (KLAP-M), our model is able to eliminate unknown sensor-generic artifacts in real RAW data, effectively bridging the gap between synthetic images and real sensor RAW data. Our KLAP and KLAP-M methods achieve state-of-the-art demosaicing performance on both synthetic and real RAW data from Bayer and non-Bayer CFAs.
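
    To make the difference between CFA modes concrete, the sketch below builds sampling patterns for the standard RGGB Bayer layout and its Quad/Nona/QxQ variants and applies them to an RGB image. The layouts are the commonly documented ones and the helper names are illustrative, unrelated to the KLAP implementation.

```python
# Sketch: CFA sampling for Bayer vs. binned non-Bayer patterns, i.e. the input
# structures a unified demosaicing model has to handle. Illustrative only.
import numpy as np

BAYER_RGGB = np.array([[0, 1],
                       [1, 2]])  # channel index per pixel: 0=R, 1=G, 2=B

def binned_pattern(base=BAYER_RGGB, k=2):
    """Repeat each CFA cell into a k x k block of the same color:
    k=2 gives Quad, k=3 Nona, k=4 QxQ."""
    return np.kron(base, np.ones((k, k), dtype=int))

def mosaic(rgb, pattern):
    """Apply a CFA pattern to an (H, W, 3) image, keeping one channel per pixel."""
    h, w, _ = rgb.shape
    reps = (h // pattern.shape[0] + 1, w // pattern.shape[1] + 1)
    tiles = np.tile(pattern, reps)[:h, :w]
    return np.take_along_axis(rgb, tiles[..., None], axis=2)[..., 0]

# Usage: mosaic(img, BAYER_RGGB) and mosaic(img, binned_pattern(k=2)) produce
# the Bayer and Quad RAW planes a single unified model would have to demosaic.
```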

    High-ISO long-exposure image denoising based on quantitative blob characterization

    Blob detection and image denoising are fundamental, sometimes related tasks in computer vision. In this paper, we present a computational method to quantitatively measure blob characteristics using normalized unilateral second-order Gaussian kernels. The method suppresses non-blob structures while yielding a quantitative measurement of the position, prominence, and scale of blobs, which can facilitate the tasks of blob reconstruction and blob reduction. Subsequently, we propose a denoising scheme to address high-ISO long-exposure noise, which sometimes exhibits a blob-like spatial appearance, employing blob reduction as a cheap preprocessing step for conventional denoising methods. We apply the proposed denoising methods to real-world noisy images as well as standard images corrupted by real noise. The experimental results demonstrate the superiority of the proposed methods over state-of-the-art denoising methods.
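
    As a rough illustration of quantitative blob measurement, the sketch below recovers blob positions, scales, and response strengths using the standard scale-normalized Laplacian-of-Gaussian from SciPy as a stand-in for the paper's normalized unilateral second-order Gaussian kernels; the scales and threshold are assumptions.

```python
# Sketch: blob position/scale/prominence via a scale-normalized LoG response.
# This substitutes the ordinary Laplacian-of-Gaussian for the paper's unilateral
# second-order Gaussian kernels; the scales and threshold are illustrative.
import numpy as np
from scipy import ndimage

def log_blobs(img, sigmas=(1, 2, 3, 4), thresh=0.02):
    """Return (row, col, sigma, response) for local maxima of -sigma^2 * LoG."""
    img = np.asarray(img, dtype=np.float64)
    stack = np.stack([-s**2 * ndimage.gaussian_laplace(img, s) for s in sigmas])
    # Local maxima across space and scale mark blob centers and their scale.
    maxima = (stack == ndimage.maximum_filter(stack, size=3)) & (stack > thresh)
    blobs = []
    for k, r, c in zip(*np.nonzero(maxima)):
        blobs.append((int(r), int(c), sigmas[k], float(stack[k, r, c])))
    return blobs

# A blob-reduction preprocessing step could then attenuate the image at these
# locations and scales before handing it to a conventional denoiser.
```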

    Multiresolution image models and estimation techniques
