
    Removal of visual disruption caused by rain using cycle-consistent generative adversarial networks

    This paper addresses the problem of removing rain disruption from images without blurring scene content, thereby retaining the visual quality of the image. This is particularly important for maintaining the performance of outdoor vision systems, which deteriorates as rain increasingly degrades the visual quality of the image. In this paper, the Cycle-Consistent Generative Adversarial Network (CycleGAN) is proposed as a more promising rain removal algorithm than the state-of-the-art Image De-raining Conditional Generative Adversarial Network (ID-CGAN). One of the main advantages of the CycleGAN is its ability to learn the underlying relationship between the rain and rain-free domains without the need for paired examples, which is essential for rain removal because it is not possible to obtain the rain-free counterpart of an image under dynamic outdoor conditions. Based on the physical properties and the various types of rain phenomena [10], five broad categories of real rain distortion are proposed, which cover the majority of outdoor rain conditions. For a fair comparison, both the ID-CGAN and the CycleGAN were trained on the same set of 700 synthesized rain/ground-truth image pairs. Subsequently, both networks were tested on real rain images falling broadly under these five categories. A comparison of the performance of the CycleGAN and the ID-CGAN demonstrated that the CycleGAN is superior in removing real rain distortions.
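The unpaired training the abstract highlights rests on cycle consistency. Below is a minimal PyTorch sketch of that objective, assuming generators G (rainy to clean) and F (clean to rainy) are defined elsewhere; the names and the weight lam are illustrative, not taken from the paper.

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G, F, rainy, clean, lam=10.0):
    # Forward cycle: rainy -> de-rained -> re-rained should recover the input.
    recovered_rainy = F(G(rainy))
    # Backward cycle: clean -> synthetic rain -> de-rained should recover it too.
    recovered_clean = G(F(clean))
    # Each cycle compares an image with itself, so no paired examples are needed.
    return lam * (l1(recovered_rainy, rainy) + l1(recovered_clean, clean))
```

In a full CycleGAN, this term is added to the adversarial losses of both generators.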

    Gradual Network for Single Image De-raining

    Most advances in single-image de-raining face a key challenge: removing rain streaks of different scales and shapes while preserving image details. Existing approaches treat rain-streak removal directly as pixel-wise regression, but fail to strike a balance between over-de-raining (e.g. removing texture details in rain-free regions) and under-de-raining (e.g. leaving rain streaks behind). In this paper, we propose a coarse-to-fine network called Gradual Network (GraNet), consisting of a coarse stage and a fine stage, to tackle single-image de-raining at different granularities. Specifically, to reveal coarse-grained rain-streak characteristics (e.g. long and thick rain streaks/raindrops), the coarse stage exploits local-global spatial dependencies via a local-global sub-network composed of region-aware blocks. Taking as input the residual (the coarse de-rained result) between the rainy input image and the output of the coarse stage (the learnt rain mask), the fine stage continues to de-rain by removing fine-grained rain streaks (e.g. light rain streaks and water mist), producing a rain-free, well-reconstructed output image via a unified contextual merging sub-network with dense blocks and a merging block. Comprehensive experiments on synthetic and real data demonstrate that GraNet significantly outperforms state-of-the-art methods, removing rain streaks of various densities, scales and shapes while keeping the details of rain-free regions well preserved.
    Comment: In Proceedings of the 27th ACM International Conference on Multimedia (MM 2019).
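The coarse-to-fine data flow described above (predict a rain mask, subtract it, then refine the residual) can be summarized in a short PyTorch sketch. The plain convolutions here merely stand in for the paper's region-aware and dense-block sub-networks, so this illustrates the staging, not the authors' architecture.

```python
import torch.nn as nn

class GradualDerain(nn.Module):
    """Illustrative two-stage coarse-to-fine pipeline in the spirit of GraNet."""
    def __init__(self, ch=32):
        super().__init__()
        # Coarse stage: predicts a rain mask capturing long/thick streaks.
        self.coarse = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))
        # Fine stage: refines the coarse residual to remove light streaks/mist.
        self.fine = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, rainy):
        rain_mask = self.coarse(rainy)        # learnt rain layer
        coarse_derained = rainy - rain_mask   # residual = coarse de-rained result
        return self.fine(coarse_derained)     # fine-grained cleanup
```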

    Non-locally Enhanced Encoder-Decoder Network for Single Image De-raining

    Single-image rain streak removal has recently witnessed substantial progress due to the development of deep convolutional neural networks. However, existing deep learning based methods either focus on the entrance and exit of the network, decomposing the input image into high- and low-frequency information and employing residual learning to reduce the mapping range, or introduce a cascaded learning scheme that decomposes rain streak removal into multiple stages. These methods treat the convolutional neural network as an encapsulated end-to-end mapping module without examining the rationality of the network design itself. In this paper, we investigate an effective end-to-end neural network structure for stronger feature expression and spatial correlation learning. Specifically, we propose a non-locally enhanced encoder-decoder network framework: a pooling-indices embedded encoder-decoder network that efficiently learns increasingly abstract feature representations for more accurate rain streak modeling while preserving image detail. The framework is composed of a series of non-locally enhanced dense blocks designed not only to fully exploit hierarchical features from all convolutional layers but also to capture long-distance dependencies and structural information. Extensive experiments on synthetic and real datasets demonstrate that the proposed method can effectively remove rain streaks from rainy images of various densities while preserving image details, achieving significant improvements over recent state-of-the-art methods.
    Comment: Accepted to ACM Multimedia 2018.
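The long-distance dependencies mentioned above are typically captured with a non-local (self-attention) block. The sketch below is a generic embedded-Gaussian non-local block in PyTorch, shown only to illustrate the mechanism; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Generic embedded-Gaussian non-local block: every spatial position
    attends to every other, capturing long-distance dependencies."""
    def __init__(self, c):
        super().__init__()
        self.theta = nn.Conv2d(c, c // 2, 1)  # query embedding
        self.phi = nn.Conv2d(c, c // 2, 1)    # key embedding
        self.g = nn.Conv2d(c, c // 2, 1)      # value embedding
        self.out = nn.Conv2d(c // 2, c, 1)    # restore channel count

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).view(b, -1, h * w)             # (b, c/2, hw)
        k = self.phi(x).view(b, -1, h * w)               # (b, c/2, hw)
        v = self.g(x).view(b, -1, h * w)                 # (b, c/2, hw)
        attn = torch.softmax(q.transpose(1, 2) @ k, -1)  # (b, hw, hw) affinities
        y = (v @ attn.transpose(1, 2)).view(b, -1, h, w) # weighted aggregation
        return x + self.out(y)                           # residual connection
```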

    Rain Removal in Traffic Surveillance: Does it Matter?

    Varying weather conditions, including rainfall and snowfall, are generally regarded as a challenge for computer vision algorithms. One proposed solution to the challenges induced by rain and snowfall is to artificially remove the rain from images or video using rain removal algorithms. The promise of these algorithms is that de-rained frames will improve the performance of subsequent segmentation and tracking algorithms. However, rain removal algorithms are typically evaluated only on their ability to remove synthetic rain from a small set of images, and their behavior on real-world videos, integrated into a typical computer vision pipeline, is unknown. In this paper, we review existing rain removal algorithms and propose a new dataset of 22 traffic surveillance sequences covering a broad variety of weather conditions that all include either rain or snowfall. We propose a new evaluation protocol that assesses rain removal algorithms on their ability to improve the performance of subsequent segmentation, instance segmentation, and feature tracking algorithms under rain and snow. If successful, the de-rained frames of a rain removal algorithm should improve segmentation performance and increase the number of accurately tracked features. The results show that a recent single-frame-based rain removal algorithm increases segmentation performance by 19.7% on our proposed dataset, but decreases feature tracking performance and shows mixed results with recent instance segmentation methods. The best video-based rain removal algorithm, however, improves feature tracking accuracy by 7.72%.
    Comment: Published in IEEE Transactions on Intelligent Transportation Systems.
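The evaluation protocol boils down to comparing a downstream metric on raw frames against the same metric on de-rained frames. A minimal Python sketch of that idea follows; derain, segment, and iou are hypothetical stand-ins for a rain removal model, a segmentation algorithm, and an intersection-over-union score, none of which come from the paper's code.

```python
def downstream_gain(frames, ground_truth, derain, segment, iou):
    """Relative segmentation improvement from de-raining, in percent.

    All callables are hypothetical placeholders: `derain` maps a frame to a
    de-rained frame, `segment` produces a mask, `iou` scores it against truth.
    """
    # Baseline: run segmentation directly on the rainy frames.
    raw = sum(iou(segment(f), gt) for f, gt in zip(frames, ground_truth))
    # Treatment: de-rain each frame first, then segment.
    derained = sum(iou(segment(derain(f)), gt)
                   for f, gt in zip(frames, ground_truth))
    return 100.0 * (derained - raw) / raw
```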

    A Lightweight Network for Real-Time Rain Streaks and Rain Accumulation Removal from Single Images Captured by AVs

    In autonomous driving, object detection is considered a base step for many subsequent processes. However, object detection is challenged by the loss of visibility caused by rain. Rain degradation occurs in two main forms: rain streaks and rain accumulation. Each type affects the captured video differently; therefore, they cannot be mitigated in the same way. We propose a lightweight network which mitigates both types of rain degradation in real time without negatively affecting the object-detection task. The proposed network consists of two different modules applied progressively: the first is a progressive ResNet for rain streak removal, while the second is a transmission-guided lightweight network for rain accumulation removal. The network was tested on synthetic and real rainy datasets and compared with state-of-the-art (SOTA) networks, and its runtime was evaluated to verify real-time performance. Finally, the effect of the de-raining network was tested on the YOLO object-detection network. The proposed network exceeded SOTA by 1.12 dB in PSNR on average across multiple synthetic datasets, with a 2.29× speedup. The inclusion of different lightweight stages works favorably for real-time applications and could be extended to mitigate other degradation factors such as snow and sun glare.
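The 1.12 dB gain above is reported in PSNR. For reference, the standard PSNR computation is sketched below in Python; this is the common formula, not code from the paper.

```python
import numpy as np

def psnr(clean, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between a clean reference image
    and a restored (de-rained) image, both as arrays with values in [0, peak]."""
    mse = np.mean((clean.astype(np.float64) - restored.astype(np.float64)) ** 2)
    # Identical images have zero error, hence infinite PSNR.
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```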