
    Fast Single Image Rain Removal via a Deep Decomposition-Composition Network

    Rain effects in images are typically detrimental to many multimedia and computer vision tasks. For removing rain from a single image, deep learning techniques have been attracting considerable attention. This paper designs a novel multi-task learning architecture in an end-to-end manner to reduce the mapping range from input to output and boost performance. Concretely, a decomposition net is built to split rain images into clean background and rain layers. Different from previous architectures, our model consists of, besides a component representing the desired clean image, an extra component for the rain layer. During the training phase, we further employ a composition structure to reproduce the input from the separated clean image and rain information, improving the quality of the decomposition. Experimental results on both synthetic and real images reveal the high-quality recovery achieved by our design and show its superiority over other state-of-the-art methods. Furthermore, our design is also applicable to other layer decomposition tasks such as dust removal. More importantly, our method requires only about 50ms to process a test image at VGA resolution on a GTX 1080 GPU, significantly faster than the competitors, making it attractive for practical use.
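
    The additive decomposition-composition idea can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's actual network: the layer widths, depths, and the plain O = B + R composition used below are assumptions made for clarity.

```python
import torch
import torch.nn as nn

class DecompositionNet(nn.Module):
    """Toy decomposition net: predicts a clean background and a rain layer."""
    def __init__(self, ch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_background = nn.Conv2d(ch, 3, 3, padding=1)
        self.to_rain = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, rainy):
        f = self.features(rainy)
        return self.to_background(f), self.to_rain(f)

net = DecompositionNet()
rainy = torch.rand(1, 3, 64, 64)
background, rain = net(rainy)
# Composition constraint: the two separated layers should re-compose the input.
loss = nn.functional.l1_loss(background + rain, rainy)
```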

    Deep Retinex Decomposition for Low-Light Enhancement

    The Retinex model is an effective tool for low-light image enhancement. It assumes that observed images can be decomposed into reflectance and illumination. Most existing Retinex-based methods rely on carefully designed hand-crafted constraints and parameters for this highly ill-posed decomposition, which may be limited by model capacity when applied to various scenes. In this paper, we collect a LOw-Light dataset (LOL) containing low/normal-light image pairs and propose a deep Retinex-Net learned on this dataset, including a Decom-Net for decomposition and an Enhance-Net for illumination adjustment. In the training process for Decom-Net, there is no ground truth for the decomposed reflectance and illumination. The network is learned with only key constraints, including the consistent reflectance shared by paired low/normal-light images and the smoothness of illumination. Based on the decomposition, subsequent lightness enhancement is conducted on the illumination by Enhance-Net, and for joint denoising a denoising operation is applied to the reflectance. Retinex-Net is end-to-end trainable, so the learned decomposition is by nature suited to lightness adjustment. Extensive experiments demonstrate that our method not only achieves visually pleasing quality for low-light enhancement but also provides a good representation of image decomposition. Comment: BMVC 2018 (Oral). Dataset and project page: https://daooshee.github.io/BMVC2018website
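
    A minimal sketch of the Decom-Net training objective described above, assuming an element-wise I = R * L image model. The loss weights and the plain total-variation smoothness term are placeholders; the paper uses a structure-aware smoothness loss.

```python
import torch
import torch.nn.functional as F

def retinex_losses(refl_low, illum_low, refl_normal, illum_normal, low, normal):
    # Reconstruction: each image should equal its reflectance * illumination.
    recon = F.l1_loss(refl_low * illum_low, low) + \
            F.l1_loss(refl_normal * illum_normal, normal)
    # Key constraint: paired low/normal-light images share the same reflectance.
    consistency = F.l1_loss(refl_low, refl_normal)
    # Crude illumination smoothness prior (plain total variation stands in
    # for the paper's structure-aware smoothness term).
    tv = (illum_low[..., :, 1:] - illum_low[..., :, :-1]).abs().mean() + \
         (illum_low[..., 1:, :] - illum_low[..., :-1, :]).abs().mean()
    return recon + consistency + 0.1 * tv
```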

    Rain O'er Me: Synthesizing real rain to derain with data distillation

    We present a supervised technique for learning to remove rain from images without using synthetic rain software. The method is based on a two-stage data distillation approach: 1) A rainy image is first paired with a coarsely derained version produced by a simple filtering technique ("rain-to-clean"). 2) A clean image is then randomly matched with the rainy soft-labeled pair. Through a shared deep neural network, the rain that is removed from the first image is added to the clean image to generate a second pair ("clean-to-rain"). The neural network simultaneously learns to map both images such that high-resolution structure in the clean images can inform the deraining of the rainy images. Demonstrations show that this approach can address visual characteristics of rain that are not easily synthesized by software in the usual way.
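
    A toy sketch of the two-stage pairing. The box blur below is a hypothetical stand-in for the paper's unspecified "simple filtering technique", and the direct residue transplant simplifies the shared-network distillation.

```python
import torch
import torch.nn.functional as F

def make_distilled_pairs(rainy, clean):
    """Build both training pairs from a real rainy image and a random clean one."""
    # Stage 1 ("rain-to-clean"): coarsely derain with a low-pass filter.
    k = torch.ones(3, 1, 5, 5) / 25.0
    coarse_clean = F.conv2d(rainy, k, padding=2, groups=3)
    rain_residue = rainy - coarse_clean
    # Stage 2 ("clean-to-rain"): transplant the removed rain onto a
    # randomly matched clean image to form the second pair.
    pseudo_rainy = (clean + rain_residue).clamp(0, 1)
    return (rainy, coarse_clean), (pseudo_rainy, clean)
```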

    Recurrent Squeeze-and-Excitation Context Aggregation Net for Single Image Deraining

    Rain streaks can severely degrade visibility, causing many current computer vision algorithms to fail. It is therefore necessary to remove rain from images. We propose a novel deep network architecture based on deep convolutional and recurrent neural networks for single image deraining. As contextual information is very important for rain removal, we first adopt a dilated convolutional neural network to acquire a large receptive field, and further modify the network to better fit the rain removal task. In heavy rain, rain streaks have various directions and shapes, and can be regarded as the accumulation of multiple rain streak layers. We assign different alpha values to the various rain streak layers according to their intensity and transparency by incorporating the squeeze-and-excitation block. Since rain streak layers overlap with each other, it is not easy to remove the rain in one stage, so we further decompose rain removal into multiple stages. A recurrent neural network is incorporated to preserve useful information from previous stages and benefit rain removal in later stages. We conduct extensive experiments on both synthetic and real-world datasets. Our proposed method outperforms the state-of-the-art approaches under all evaluation metrics. Code and supplementary material are available at our project webpage: https://xialipku.github.io/RESCAN . Comment: Accepted by ECCV
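
    A toy sketch of the three ingredients named above: a dilated convolution for context, a squeeze-and-excitation block for weighting rain streak layers, and a recurrent state carried across stages. The single-convolution state update stands in for the paper's recurrent unit; all sizes are illustrative.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: per-channel gates, used here to weight
    rain streak layers by intensity and transparency."""
    def __init__(self, ch, r=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
            nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))     # squeeze to (B, C), excite back
        return x * w[:, :, None, None]

class RecurrentDerainStage(nn.Module):
    def __init__(self, ch=24):
        super().__init__()
        self.dilated = nn.Conv2d(3, ch, 3, padding=2, dilation=2)
        self.se = SEBlock(ch)
        self.update = nn.Conv2d(2 * ch, ch, 3, padding=1)  # toy state update
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x, state):
        f = torch.relu(self.dilated(x))
        f = self.se(f)
        state = torch.tanh(self.update(torch.cat([f, state], dim=1)))
        return x - self.out(state), state   # strip part of the rain per stage

stage = RecurrentDerainStage()
x, state = torch.rand(1, 3, 64, 64), torch.zeros(1, 24, 64, 64)
for _ in range(3):                          # multi-stage removal
    x, state = stage(x, state)
```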

    Image-to-Image Translation with Multi-Path Consistency Regularization

    Image translation across different domains has attracted much attention in both the machine learning and computer vision communities. Taking the translation from a source domain $\mathcal{D}_s$ to a target domain $\mathcal{D}_t$ as an example, existing algorithms mainly rely on two kinds of loss for training: one is the discrimination loss, which is used to differentiate images generated by the models from natural images; the other is the reconstruction loss, which measures the difference between an original image and the reconstructed version after the $\mathcal{D}_s\to\mathcal{D}_t\to\mathcal{D}_s$ translation. In this work, we introduce a new kind of loss, the multi-path consistency loss, which evaluates the differences between the direct translation $\mathcal{D}_s\to\mathcal{D}_t$ and the indirect translation $\mathcal{D}_s\to\mathcal{D}_a\to\mathcal{D}_t$, with $\mathcal{D}_a$ as an auxiliary domain, to regularize training. For multi-domain translation (with at least three domains), which focuses on building translation models between any two domains, at each training iteration we randomly select three domains, set them respectively as the source, auxiliary and target domains, build the multi-path consistency loss, and optimize the network. For two-domain translation, we introduce an additional auxiliary domain and construct the multi-path consistency loss. We conduct various experiments to demonstrate the effectiveness of our proposed methods, including face-to-face translation, paint-to-photo translation, and de-raining/de-noising translation. Comment: 8 pages, 6 figures. Accepted by the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019)
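
    The new loss is straightforward to state in code. A minimal sketch, assuming the translators are stored in a dict keyed by (source, target) domain pairs and that L1 measures the discrepancy; the paper's exact distance may differ.

```python
import torch.nn.functional as F

def multi_path_consistency(g, xs, src, aux, tgt):
    """g[(a, b)] is the translator network from domain a to domain b.
    The direct and indirect translations of xs should agree."""
    direct = g[(src, tgt)](xs)
    indirect = g[(aux, tgt)](g[(src, aux)](xs))
    return F.l1_loss(direct, indirect)
```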

    Attentive Generative Adversarial Network for Raindrop Removal from a Single Image

    Raindrops adhering to a glass window or camera lens can severely hamper the visibility of a background scene and degrade an image considerably. In this paper, we address the problem by visually removing raindrops, thus transforming a raindrop-degraded image into a clean one. The problem is intractable: first, the regions occluded by raindrops are not given; second, the information about the background scene in the occluded regions is, for the most part, completely lost. To resolve the problem, we apply an attentive generative network using adversarial training. Our main idea is to inject visual attention into both the generative and discriminative networks. During training, our visual attention learns about raindrop regions and their surroundings. Hence, by injecting this information, the generative network pays more attention to the raindrop regions and the surrounding structures, and the discriminative network is able to assess the local consistency of the restored regions. This injection of visual attention into both the generative and discriminative networks is the main contribution of this paper. Our experiments show the effectiveness of our approach, which outperforms state-of-the-art methods quantitatively and qualitatively. Comment: CVPR 2018 Spotlight
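
    The attentive-recurrent idea can be sketched as a soft mask refined over a few steps and then injected into both networks. The convolutional step below is a toy stand-in for the paper's LSTM-based attention network.

```python
import torch
import torch.nn as nn

class AttentionStep(nn.Module):
    """One step of recurrent visual attention: refines a soft mask that
    should converge on raindrop regions."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

step = AttentionStep()
img = torch.rand(1, 3, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
for _ in range(4):              # recurrent refinement of the attention map
    mask = step(img, mask)
# The mask is then fed to the generator alongside the image, and used by the
# discriminator to assess local consistency of the restored regions.
```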

    Night Time Haze and Glow Removal using Deep Dilated Convolutional Network

    In this paper, we address the single image haze removal problem in nighttime scenes. Night haze removal is a severely ill-posed problem, especially due to the presence of various visible light sources with varying colors and non-uniform illumination. These light sources have different shapes and introduce noticeable glow in night scenes. To address these effects, we introduce a deep-learning-based DeGlow-DeHaze iterative architecture which accounts for varying color illumination and glow. First, our convolutional neural network (CNN) based DeGlow model removes the glow effect significantly; on top of it, a separate DeHaze network is included to remove the haze effect. For network training, the hazy images and the corresponding transmission maps are synthesized from the NYU depth datasets, from which a high-quality haze-free image is subsequently restored. The experimental results demonstrate that our hybrid CNN model outperforms other state-of-the-art methods in terms of computation speed and image quality. We also show the effectiveness of our model on a number of real images and compare our results with those of existing night haze heuristic models. Comment: 13 pages, 10 figures, 2 tables
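
    Once a transmission map has been predicted, dehazing inverts the standard haze imaging model I = J * t + A * (1 - t). A minimal sketch; the clamp floor t_min is a common heuristic, not a value from the paper.

```python
import torch

def dehaze(image, transmission, airlight, t_min=0.1):
    """Recover the scene radiance J from the haze model
    I = J * t + A * (1 - t), given a network-predicted transmission map
    and an estimated airlight."""
    t = transmission.clamp(min=t_min)   # avoid division blow-up where t ~ 0
    return (image - airlight) / t + airlight
```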

    An Effective Two-Branch Model-Based Deep Network for Single Image Deraining

    Removing rain effects from an image is important for various applications such as autonomous driving, drone piloting, and photo editing. Conventional methods rely on heuristics to handcraft various priors to remove or separate the rain effects from an image. Recent deep learning models have been proposed to learn this task end to end. However, they often fail to obtain satisfactory results in many realistic scenarios, especially when the observed images suffer from heavy rain. Heavy rain brings not only rain streaks but also a haze-like effect caused by the accumulation of tiny raindrops. Different from existing deep learning deraining methods that mainly focus on handling rain streaks, we design a deep neural network that incorporates a physical raining image model. Specifically, in the proposed model, two branches are designed to handle the rain streaks and the haze-like effects respectively. An additional submodule is jointly trained to refine the results, which gives the model flexibility to control the strength of mist removal. Extensive experiments on several datasets show that our method outperforms the state-of-the-art in both objective assessments and visual quality. Comment: 10 pages, 9 figures, 3 tables
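
    A toy sketch of the two-branch layout, assuming the widely used heavy-rain imaging model O = t * (B + R) + (1 - t) * A for the composition; the branch designs and the way the haze terms are predicted are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class TwoBranchDerainer(nn.Module):
    """One branch predicts the rain-streak layer R, the other predicts haze
    terms (transmission t and airlight a); the background is recovered by
    inverting O = t * (B + R) + (1 - t) * a."""
    def __init__(self, ch=32):
        super().__init__()
        def branch(out_ch):
            return nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, out_ch, 3, padding=1))
        self.rain_branch = branch(3)    # rain streak layer R
        self.haze_branch = branch(4)    # transmission t + airlight a

    def forward(self, observed):
        rain = self.rain_branch(observed)
        haze = self.haze_branch(observed)
        t = torch.sigmoid(haze[:, :1]).clamp(min=0.1)
        a = torch.sigmoid(haze[:, 1:])
        return (observed - (1 - t) * a) / t - rain
```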

    Spatial Attentive Single-Image Deraining with a High Quality Real Rain Dataset

    Removing rain streaks from a single image has been drawing considerable attention, as rain streaks can severely degrade image quality and affect the performance of existing outdoor vision tasks. While recent CNN-based derainers have reported promising performance, deraining remains an open problem for two reasons. First, existing synthesized rain datasets have only limited realism in terms of modeling real rain characteristics such as rain shape, direction and intensity. Second, there are no public benchmarks for quantitative comparisons on real rain images, which makes current evaluation less objective. The core challenge is that real-world rain/clean image pairs cannot be captured at the same time. In this paper, we address the single image rain removal problem in two ways. First, we propose a semi-automatic method that incorporates temporal priors and human supervision to generate a high-quality clean image from each input sequence of real rain images. Using this method, we construct a large-scale dataset of ~29.5K rain/rain-free image pairs that covers a wide range of natural rain scenes. Second, to better cover the stochastic distribution of real rain streaks, we propose a novel SPatial Attentive Network (SPANet) to remove rain streaks in a local-to-global manner. Extensive experiments demonstrate that our network performs favorably against state-of-the-art deraining methods. Comment: Accepted by CVPR'19. Project page: https://stevewongv.github.io/derain-project.htm
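
    Spatial attention here means predicting a per-pixel rain-location map that gates the features. A minimal sketch; SPANet builds its attention from four-directional recurrent (IRNN) sweeps, for which a plain convolution stack stands in below.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Toy spatial attention: a per-pixel map in [0, 1] marking likely rain
    locations, multiplied into the feature tensor."""
    def __init__(self, ch=32):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, feats):
        return feats * self.attn(feats)     # gate features by rain likelihood
```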

    Structural Residual Learning for Single Image Rain Removal

    To alleviate the adverse effect of rain streaks in image processing tasks, CNN-based single image rain removal methods have recently been proposed. However, the performance of these deep learning methods largely relies on the range of rain shapes covered by the pre-collected training rainy/clean image pairs. This makes them prone to overfitting the training samples and unable to generalize well to practical rainy images with complex and diverse rain streaks. Against this generalization issue, this study proposes a new network architecture that enforces the output residual of the network to possess intrinsic rain structures. Such a structural residual setting guarantees that the rain layer extracted by the network complies with the prior knowledge of general rain streaks, and thus regularizes the rain shapes that can be extracted from rainy images in both the training and prediction stages. This general regularization naturally leads to both better training accuracy and better testing generalization, even for unseen rain configurations. Such superiority is comprehensively substantiated, both visually and quantitatively, by experiments on synthetic and real datasets in comparison with current state-of-the-art methods.
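
    One way to read the structural residual constraint: the predicted rain layer is restricted to a sum of small learned streak kernels convolved with feature maps, so the residual can only express streak-like structure. The convolutional-dictionary form below is an assumed concretization for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuralRainResidual(nn.Module):
    """Toy structural residual: the rain layer is a sum of learned
    rain-streak kernels convolved with predicted coefficient maps."""
    def __init__(self, n_kernels=8, ksize=9, ch=32):
        super().__init__()
        self.coeffs = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, n_kernels, 3, padding=1), nn.ReLU(inplace=True))
        self.kernels = nn.Parameter(torch.randn(3, n_kernels, ksize, ksize) * 0.01)

    def forward(self, rainy):
        c = self.coeffs(rainy)                        # per-kernel coefficient maps
        rain = F.conv2d(c, self.kernels, padding=4)   # R = sum_i k_i * c_i
        return rainy - rain                           # structured residual output
```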