
    Deep Image Demosaicking using a Cascade of Convolutional Residual Denoising Networks

    Full text link
    Demosaicking and denoising are among the most crucial steps of modern digital camera pipelines, and their joint treatment is a highly ill-posed inverse problem where at least two-thirds of the information is missing and the rest is corrupted by noise. This poses a great challenge in obtaining meaningful reconstructions, and special care is required for the efficient treatment of the problem. While several machine learning approaches have recently been introduced to deal with joint image demosaicking-denoising, in this work we propose a novel deep learning architecture which is inspired by powerful classical image regularization methods and large-scale convex optimization techniques. Consequently, our derived network is more transparent and has a clear interpretation compared to alternative competitive deep learning approaches. Our extensive experiments demonstrate that our network outperforms all previous approaches on both noisy and noise-free data. This improvement in reconstruction quality is attributed to the principled way we design our network architecture, which also requires fewer trainable parameters than the current state-of-the-art deep network solution. Finally, we show that our network has the ability to generalize well even when it is trained on small datasets, while keeping the overall number of trainable parameters low. Comment: Camera-ready paper to appear in the Proceedings of ECCV 2018.
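
    As a rough illustration of the kind of architecture this abstract describes, the sketch below unrolls a small cascade in which each stage applies a residual denoising CNN and then re-imposes the observed CFA samples. The module names, layer sizes, number of stages, and the toy green-only mask are illustrative assumptions, not the authors' actual network.

        import torch
        import torch.nn as nn

        class ResidualDenoiser(nn.Module):
            """Small CNN predicting a residual correction to its input (illustrative)."""
            def __init__(self, channels=3, features=32, depth=3):
                super().__init__()
                layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
                for _ in range(depth - 2):
                    layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
                layers += [nn.Conv2d(features, channels, 3, padding=1)]
                self.body = nn.Sequential(*layers)

            def forward(self, x):
                return x - self.body(x)                  # residual (denoising) connection

        class DemosaickCascade(nn.Module):
            """Unrolled cascade: denoise, then re-impose the observed mosaic samples."""
            def __init__(self, num_stages=4):
                super().__init__()
                self.stages = nn.ModuleList(ResidualDenoiser() for _ in range(num_stages))

            def forward(self, mosaic, mask):
                # mosaic: CFA samples embedded in a 3-channel image (zeros elsewhere)
                # mask:   binary CFA sampling pattern with the same shape
                x = mosaic.clone()
                for stage in self.stages:
                    x = stage(x)
                    x = mask * mosaic + (1 - mask) * x   # keep observed samples (noise-free case)
                return x

        # Toy usage with a hypothetical green-only mask, for shape checking only
        x = torch.rand(1, 3, 64, 64)
        mask = torch.zeros_like(x)
        mask[:, 1] = 1.0
        print(DemosaickCascade()(x * mask, mask).shape)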

    Iterative Joint Image Demosaicking and Denoising using a Residual Denoising Network

    Full text link
    Modern digital cameras rely on the sequential execution of separate image processing steps to produce realistic images. The first two steps are usually related to denoising and demosaicking, where the former aims to reduce noise from the sensor and the latter converts a series of light intensity readings to color images. Modern approaches try to solve these problems jointly, i.e. joint denoising-demosaicking, which is an inherently ill-posed problem given that two-thirds of the intensity information is missing and the rest is perturbed by noise. While several machine learning systems have recently been introduced to solve this problem, the majority of them rely on generic network architectures which do not explicitly take into account the physical image model. In this work we propose a novel algorithm which is inspired by powerful classical image regularization methods, large-scale optimization, and deep learning techniques. Consequently, our derived iterative optimization algorithm, which involves a trainable denoising network, has a transparent and clear interpretation compared to other black-box data-driven approaches. Our extensive experimentation demonstrates that our proposed method outperforms all previous approaches for both noisy and noise-free data across many different datasets. This improvement in reconstruction quality is attributed to the rigorous derivation of an iterative solution and the principled way we design our denoising network architecture, which as a result requires fewer trainable parameters than the current state-of-the-art solution and can be trained efficiently with significantly less training data than existing deep demosaicking networks. Code and results can be found at https://github.com/cig-skoltech/deep_demosaick. Comment: arXiv admin note: substantial text overlap with arXiv:1803.0521
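
    The iterative scheme sketched below alternates a gradient step on the data-fidelity term ||Mx - y||^2 (with M the CFA sampling operator) with a denoising step, which is the general structure the abstract describes; here a Gaussian filter stands in for the trainable denoising network, and the step size, iteration count, and Bayer mask layout are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def iterative_demosaick(y, mask, steps=20, alpha=1.0, sigma=0.7):
            """y, mask: H x W x 3 arrays; y holds the noisy zero-filled mosaic, mask the CFA pattern."""
            x = y.copy()                                 # initialize with the zero-filled mosaic
            for _ in range(steps):
                grad = mask * (mask * x - y)             # gradient of 0.5 * ||M x - y||^2
                x = x - alpha * grad                     # data-fidelity step
                x = np.stack([gaussian_filter(x[..., c], sigma) for c in range(3)], axis=-1)
            return np.clip(x, 0.0, 1.0)

        # Toy usage with a Bayer (RGGB) mask and synthetic noisy data
        H, W = 64, 64
        mask = np.zeros((H, W, 3))
        mask[0::2, 0::2, 0] = 1                          # R
        mask[0::2, 1::2, 1] = mask[1::2, 0::2, 1] = 1    # G
        mask[1::2, 1::2, 2] = 1                          # B
        clean = np.random.rand(H, W, 3)
        y = mask * np.clip(clean + 0.02 * np.random.randn(H, W, 3), 0, 1)
        print(iterative_demosaick(y, mask).shape)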

    Generating Training Data for Denoising Real RGB Images via Camera Pipeline Simulation

    Full text link
    Image reconstruction techniques such as denoising often need to be applied to the RGB output of cameras and cellphones. Unfortunately, the commonly used additive white Gaussian noise (AWGN) model does not accurately reproduce the noise and the degradation encountered on these inputs. This is particularly important for learning-based techniques, because the mismatch between training and real-world data will hurt their generalization. This paper aims to accurately simulate the degradation and noise transformation performed by camera pipelines. This allows us to generate realistic degradation in RGB images that can be used to train machine learning models. We use our simulation to study the importance of noise modeling for learning-based denoising. Our study shows that a realistic noise model is required for learning to denoise real JPEG images: a neural network trained on realistic noise outperforms one trained with AWGN by 3 dB. An ablation study of our pipeline shows that simulating denoising and demosaicking is important to this improvement and that realistic demosaicking algorithms, which have rarely been considered, are needed. We believe this simulation will also be useful for other image reconstruction tasks, and we will distribute our code publicly.
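
    A minimal sketch of the underlying idea follows: inject signal-dependent (shot plus read) noise in a simulated linear raw domain, mosaic, demosaick, and re-apply gamma, so that the noise reaching the RGB image has been realistically transformed by the pipeline. The noise parameters, RGGB layout, and the crude normalized-convolution demosaicking are stand-ins for the paper's full pipeline simulation.

        import numpy as np
        from scipy.ndimage import convolve

        def simulate_noisy_rgb(rgb, shot=0.01, read=0.002, gamma=2.2):
            raw = np.clip(rgb, 0, 1) ** gamma                      # undo display gamma -> linear "raw"
            noisy = raw + np.sqrt(shot * raw + read ** 2) * np.random.randn(*raw.shape)

            # Bayer (RGGB) sampling of the noisy linear image
            H, W, _ = noisy.shape
            mask = np.zeros_like(noisy)
            mask[0::2, 0::2, 0] = 1
            mask[0::2, 1::2, 1] = mask[1::2, 0::2, 1] = 1
            mask[1::2, 1::2, 2] = 1
            mosaic = mask * noisy

            # Crude bilinear demosaicking via normalized convolution, per channel
            k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
            out = np.empty_like(mosaic)
            for c in range(3):
                num = convolve(mosaic[..., c], k, mode="mirror")
                den = convolve(mask[..., c], k, mode="mirror")
                out[..., c] = num / np.maximum(den, 1e-8)

            return np.clip(out, 0, 1) ** (1 / gamma)               # back to display-referred RGB

        degraded = simulate_noisy_rgb(np.random.rand(64, 64, 3))   # would be paired with the clean input
        print(degraded.shape)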

    Joint optimization of multispectral filter arrays and demosaicking for pathological images

    Full text link
    A capturing system with multispectral filter array (MSFA) technology is proposed for shortening capture time and reducing costs. Therein, a mosaicked image captured using an MSFA is demosaicked to reconstruct multispectral images (MSIs). Joint optimization of the spectral sensitivity of the MSFAs and demosaicking is considered, and pathology-specific multispectral imaging is proposed. This approach optimizes the MSFA and the demosaicking matrix by minimizing, via a cost function, the reconstruction error between training data of hematoxylin-and-eosin-stained pathological tissue and the demosaicked MSI. Initially, the spectral sensitivity of the filter array is set randomly and the mosaicked image is obtained from the training data. Subsequently, a reconstructed image is obtained using Wiener estimation. To minimize the reconstruction error, the spectral sensitivity of the filter array and the Wiener estimation matrix are optimized iteratively through an interior-point approach. The effectiveness of the proposed MSFA and demosaicking is demonstrated by comparing the recovered spectrum and RGB image with those obtained using a conventional method.
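
    The Wiener-estimation step at the core of this scheme can be sketched as follows: for a linear sensing model y = Sx, where S encodes the MSFA spectral sensitivities and x is the spectrum to recover, the demosaicking matrix is W = Rxy Ryy^{-1}, estimated from training samples. The dimensions, random stand-in data, and regularization constant below are illustrative only.

        import numpy as np

        rng = np.random.default_rng(0)
        n_bands, n_filters, n_train = 31, 8, 1000        # spectral bands, MSFA filters, training spectra

        X = rng.random((n_bands, n_train))               # training spectra (columns)
        S = rng.random((n_filters, n_bands))             # candidate MSFA spectral sensitivities
        Y = S @ X + 0.01 * rng.standard_normal((n_filters, n_train))   # noisy measurements y = S x + n

        Rxy = X @ Y.T / n_train                          # cross-correlation  E[x y^T]
        Ryy = Y @ Y.T / n_train                          # measurement correlation  E[y y^T]
        W = Rxy @ np.linalg.inv(Ryy + 1e-8 * np.eye(n_filters))        # Wiener estimation matrix

        mse = np.mean((W @ Y - X) ** 2)                  # reconstruction error for this candidate S
        print(f"reconstruction MSE: {mse:.5f}")
        # In the paper, this reconstruction error is the cost that an interior-point
        # method minimizes by iteratively updating S (the filter sensitivities) and W.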

    Joint Demosaicking and Denoising by Fine-Tuning of Bursts of Raw Images

    Full text link
    Demosaicking and denoising are the first steps of any camera image processing pipeline and are key for obtaining high-quality RGB images. A promising current research trend aims at solving these two problems jointly using convolutional neural networks. Due to the unavailability of ground-truth data, these networks cannot currently be trained on real RAW images; instead, they resort to simulated data. In this paper we present a method to learn demosaicking directly from mosaicked images, without requiring ground-truth RGB data. We apply this to learn joint demosaicking and denoising only from RAW images, thus enabling the use of real data. In addition, we show that for this application fine-tuning a network to a specific burst improves the quality of restoration for both demosaicking and denoising. Comment: ICCV 2019.
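
    A rough sketch of the self-supervised objective this enables is given below: the network's RGB output for one raw frame of the burst is aligned with a second frame, re-mosaicked at that frame's CFA positions, and compared against the real raw samples, so no ground-truth RGB is required. The function names, the L1 penalty, and the assumption that the inter-frame alignment is already available are illustrative choices, not the paper's exact formulation.

        import torch
        import torch.nn.functional as F

        def burst_finetune_loss(net, raw_i, raw_j, mask_j, warp_i_to_j):
            """raw_i, raw_j: 1x3xHxW zero-filled mosaics; mask_j: CFA mask of frame j;
            warp_i_to_j: callable resampling an image from frame i's grid onto frame j's."""
            rgb_i = net(raw_i)                           # demosaicked estimate for frame i
            rgb_at_j = warp_i_to_j(rgb_i)                # align it with frame j
            remosaicked = mask_j * rgb_at_j              # keep only frame j's observed samples
            return F.l1_loss(remosaicked, raw_j)         # compare against the real raw frame

        # Toy usage with an identity "warp" and a single-layer stand-in network
        net = torch.nn.Conv2d(3, 3, 3, padding=1)
        x = torch.rand(1, 3, 32, 32)
        mask = torch.zeros_like(x)
        mask[:, 1, ::2, ::2] = 1.0                       # hypothetical green-only mask
        loss = burst_finetune_loss(net, x * mask, x * mask, mask, lambda im: im)
        loss.backward()
        print(float(loss))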

    Joint Defogging and Demosaicking

    Full text link
    Image defogging is a technique used extensively for enhancing the visual quality of images captured in bad weather conditions. Even though defogging algorithms have been well studied, defogging performance is degraded by demosaicking artifacts and sensor noise amplification in distant scenes. In order to improve the visual quality of restored images, we propose a novel approach that performs defogging and demosaicking simultaneously. We conclude that better defogging performance with fewer artifacts can be achieved when the defogging and demosaicking algorithms are combined and applied jointly. We also demonstrate that the proposed joint algorithm has the benefit of suppressing noise amplification in distant scenes. In addition, we validate our theoretical analysis and observations on both synthesized datasets with ground-truth fog-free images and natural-scene datasets captured in raw format.
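
    For reference, the defogging step such methods build on follows the usual atmospheric scattering model I = Jt + A(1 - t); given estimates of the transmission t and airlight A, the scene radiance J is recovered as in the sketch below, and the 1/t factor is precisely what amplifies noise in distant (low-transmission) regions. This sketch operates on an already-demosaicked RGB image, whereas the paper's contribution is to couple this recovery with demosaicking on the raw data.

        import numpy as np

        def recover_radiance(I, t, A, t_min=0.1):
            """I: HxWx3 foggy image in [0, 1]; t: HxW transmission map; A: length-3 airlight."""
            t = np.clip(t, t_min, 1.0)[..., None]        # the 1/t factor amplifies noise where t is small
            J = (I - A) / t + A                          # invert I = J * t + A * (1 - t)
            return np.clip(J, 0.0, 1.0)

        I = np.random.rand(48, 48, 3)
        t = np.full((48, 48), 0.6)                       # a constant toy transmission map
        print(recover_radiance(I, t, np.array([0.9, 0.9, 0.95])).shape)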

    Loss Functions for Neural Networks for Image Processing

    Full text link
    Neural networks are becoming central in several areas of computer vision and image processing, and different architectures have been proposed to solve specific problems. The impact of the loss layer of neural networks, however, has not received much attention in the context of image processing: the default and virtually only choice is L2. In this paper, we bring attention to alternative choices for image restoration. In particular, we show the importance of perceptually motivated losses when the resulting image is to be evaluated by a human observer. We compare the performance of several losses and propose a novel, differentiable error function. We show that the quality of the results improves significantly with better loss functions, even when the network architecture is left unchanged. Comment: This paper was published in IEEE Transactions on Computational Imaging on December 23, 2016.
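
    The kind of perceptually motivated loss the abstract refers to can be illustrated with a mix of a structural term and L1, as sketched below; this is a simplified single-scale, box-window SSIM with an assumed mixing weight, offered only as an illustration of the idea rather than the paper's proposed loss.

        import torch
        import torch.nn.functional as F

        def ssim(x, y, window=7, c1=0.01 ** 2, c2=0.03 ** 2):
            """Single-scale SSIM with a box window; images assumed in [0, 1]."""
            mu_x = F.avg_pool2d(x, window, stride=1)
            mu_y = F.avg_pool2d(y, window, stride=1)
            var_x = F.avg_pool2d(x * x, window, stride=1) - mu_x ** 2
            var_y = F.avg_pool2d(y * y, window, stride=1) - mu_y ** 2
            cov = F.avg_pool2d(x * y, window, stride=1) - mu_x * mu_y
            num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
            den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
            return (num / den).mean()

        def mixed_loss(pred, target, alpha=0.84):
            """Weighted mix of a structural term and L1 (alpha is an assumed weight)."""
            return alpha * (1 - ssim(pred, target)) + (1 - alpha) * F.l1_loss(pred, target)

        pred, target = torch.rand(2, 1, 3, 64, 64)       # unpacks into two 1x3x64x64 images
        print(float(mixed_loss(pred, target)))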

    Color Filter Arrays for Quanta Image Sensors

    Full text link
    The quanta image sensor (QIS) is envisioned as the next-generation image sensor after CCD and CMOS. In this paper, we discuss how to design color filter arrays for QIS and other small pixels. Designing color filter arrays for small pixels is challenging because maximizing light efficiency and suppressing aliasing and crosstalk are conflicting goals. We present an optimization-based framework which unifies several mainstream color filter array design methodologies. Our method offers greater generality and flexibility: compared to existing methods, the new framework can simultaneously handle luminance sensitivity, chrominance sensitivity, crosstalk, anti-aliasing, manufacturability, and orthogonality. Extensive experimental comparisons demonstrate the effectiveness of the framework.
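
    Two of the quantities such a design framework trades off can be checked directly on a candidate pattern, as in the sketch below: the average transmittance measures light efficiency, and the DFT of each channel's sampling function locates the carrier frequencies onto which that channel's spectrum is modulated, with overlapping carriers hinting at aliasing and crosstalk. The Bayer example is only an illustration, not the paper's optimization procedure.

        import numpy as np

        # Candidate 2x2 pattern given as per-pixel RGB transmittances (Bayer, for illustration)
        cfa = np.zeros((2, 2, 3))
        cfa[0, 0, 0] = 1                                 # R
        cfa[0, 1, 1] = cfa[1, 0, 1] = 1                  # G
        cfa[1, 1, 2] = 1                                 # B

        # Light efficiency: average fraction of incoming light the array transmits
        print("mean transmittance:", cfa.sum(axis=2).mean())

        # Each channel's sampling function modulates that channel's spectrum onto the
        # frequencies where its DFT is non-zero; carriers shared between channels
        # point to potential aliasing and crosstalk.
        for c, name in enumerate("RGB"):
            carriers = np.fft.fft2(cfa[..., c]) / cfa[..., c].size
            print(name, np.round(np.abs(carriers), 3))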

    Light Weight Color Image Warping with Inter-Channel Information

    Full text link
    Image warping is a necessary step in many multimedia applications such as texture mapping, image-based rendering, panorama stitching, image resizing, and optical flow computation. Traditionally, warping interpolation for color images is performed in each color channel independently. In this paper, we show that the warping quality can be significantly enhanced by exploiting the cross-channel correlation. We design a warping scheme that integrates intra-channel interpolation with cross-channel variation at very low computational cost, as required for interactive multimedia applications on mobile devices. The effectiveness and efficiency of our method are validated by extensive experiments.
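
    One simple way to exploit cross-channel correlation during warping, sketched below, is to warp the green channel directly, warp the much smoother R-G and B-G difference planes, and add the warped green back. This only illustrates the idea in the abstract; the function names and the bilinear resampling are assumptions, and the paper's actual scheme combines intra-channel interpolation with a cross-channel variation term rather than plain channel differences.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def warp_channel(ch, coords):
            return map_coordinates(ch, coords, order=1, mode="reflect")   # bilinear resampling

        def warp_color(img, coords):
            """img: HxWx3 in [0, 1]; coords: (2, H', W') sampling coordinates in the source image."""
            g = warp_channel(img[..., 1], coords)
            r = warp_channel(img[..., 0] - img[..., 1], coords) + g       # warp the R-G residual
            b = warp_channel(img[..., 2] - img[..., 1], coords) + g       # warp the B-G residual
            return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

        # Toy usage: a small sub-pixel translation
        img = np.random.rand(64, 64, 3)
        yy, xx = np.meshgrid(np.arange(64.0), np.arange(64.0), indexing="ij")
        coords = np.stack([yy + 0.3, xx - 0.7])
        print(warp_color(img, coords).shape)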

    Learning Deep Convolutional Networks for Demosaicing

    Full text link
    This paper presents a comprehensive study of applying convolutional neural networks (CNNs) to the demosaicing problem. The paper presents two CNN models that learn end-to-end mappings between the mosaic samples and the original image patches with full information. For the case of the Bayer color filter array (CFA), an evaluation against ten competitive methods on popular benchmarks confirms that the data-driven features automatically learned by the CNN models are very effective. Experiments show that the proposed CNN models perform equally well in both the sRGB space and the linear space. It is also demonstrated that the CNN model can perform joint denoising and demosaicing. The CNN model is very flexible and can easily be adapted to demosaicing with any CFA design. We train CNN models for demosaicing with three different CFAs and obtain better results than existing methods. Exploiting this flexibility, we present the first data-driven joint optimization of the CFA design and the demosaicing method using CNNs. Experiments show that the combination of the automatically discovered CFA pattern and the automatically devised demosaicing method significantly outperforms the current best demosaicing results. Visual comparisons confirm that the proposed methods reduce more visual artifacts than existing methods. Finally, we show that the CNN model is also effective for the more general demosaicing problem with spatially varying exposure and color, and can be used to capture higher-dynamic-range images with a single shot. The proposed models and the thorough experiments together demonstrate that the CNN is an effective and versatile tool for solving the demosaicing problem.
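
    The joint CFA/demosaicing optimization mentioned at the end can be illustrated as below: a learnable periodic color-mixing layer plays the role of the CFA (a softmax keeps the per-pixel mixtures non-negative and normalized), a small CNN reconstructs RGB from the resulting single-channel mosaic, and both are trained end-to-end. The tile size, layer widths, and softmax parameterization are illustrative assumptions rather than the paper's architecture.

        import torch
        import torch.nn as nn

        class LearnableCFA(nn.Module):
            """Periodic, learnable per-pixel color mixture standing in for the CFA."""
            def __init__(self, tile=4):
                super().__init__()
                self.logits = nn.Parameter(torch.randn(tile, tile, 3))
                self.tile = tile

            def forward(self, rgb):                      # rgb: N x 3 x H x W
                H, W = rgb.shape[-2:]
                w = torch.softmax(self.logits, dim=-1)   # non-negative, normalized RGB mixture
                w = w.repeat(H // self.tile, W // self.tile, 1).permute(2, 0, 1)
                return (rgb * w).sum(dim=1, keepdim=True)            # single-channel mosaic

        class Demosaicker(nn.Module):
            def __init__(self, feats=32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, feats, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(feats, 3, 3, padding=1))

            def forward(self, mosaic):
                return self.net(mosaic)

        cfa, demo = LearnableCFA(), Demosaicker()
        opt = torch.optim.Adam(list(cfa.parameters()) + list(demo.parameters()), lr=1e-3)
        rgb = torch.rand(2, 3, 32, 32)
        loss = nn.functional.mse_loss(demo(cfa(rgb)), rgb)   # CFA and CNN are trained jointly
        loss.backward()
        opt.step()
        print(float(loss))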