    Removal of visual disruption caused by rain using cycle-consistent generative adversarial networks

    This paper addresses the problem of removing rain disruption from images without blurring scene content, thereby retaining the visual quality of the image. This is particularly important for maintaining the performance of outdoor vision systems, which deteriorates as rain disruption degrades the visual quality of the image. In this paper, the Cycle-Consistent Generative Adversarial Network (CycleGAN) is proposed as a more promising rain removal algorithm than the state-of-the-art Image De-raining Conditional Generative Adversarial Network (ID-CGAN). One of the main advantages of the CycleGAN is its ability to learn the underlying relationship between the rain and rain-free domains without the need for paired domain examples, which is essential for rain removal because it is not possible to obtain the rain-free counterpart of an image under dynamic outdoor conditions. Based on the physical properties and the various types of rain phenomena [10], five broad categories of real rain distortions are proposed, which cover the majority of outdoor rain conditions. For a fair comparison, both the ID-CGAN and CycleGAN were trained on the same set of 700 synthesized rain-and-ground-truth image pairs. Subsequently, both networks were tested on real rain images falling broadly under these five categories. A comparison of the performance of the CycleGAN and the ID-CGAN demonstrated that the CycleGAN is superior at removing real rain distortions.
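The unpaired training described above rests on CycleGAN's cycle-consistency loss: mapping a rainy image to the clean domain and back should reconstruct the original, and vice versa, so no paired examples are needed. The following is a minimal sketch of that loss in PyTorch; the single-convolution generators `G` (rain → clean) and `F` (clean → rain) are stand-ins for illustration only, not the ResNet generators used in the actual CycleGAN.

```python
import torch
import torch.nn as nn

# Stand-in generators for the two mappings (real CycleGAN uses deep ResNet
# generators; a single conv layer keeps this sketch self-contained).
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # rain  -> clean
F = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # clean -> rain

def cycle_consistency_loss(rain, clean, lam=10.0):
    """L1 cycle loss: rain -> G -> F should reconstruct rain, and
    clean -> F -> G should reconstruct clean. `lam` weights the term
    against the adversarial losses (10.0 is the common default)."""
    l1 = nn.L1Loss()
    forward_cycle = l1(F(G(rain)), rain)    # rain reconstructed from fake clean
    backward_cycle = l1(G(F(clean)), clean) # clean reconstructed from fake rain
    return lam * (forward_cycle + backward_cycle)

# Toy batch of one 32x32 RGB image per domain.
rain = torch.rand(1, 3, 32, 32)
clean = torch.rand(1, 3, 32, 32)
loss = cycle_consistency_loss(rain, clean)
```

In a full training loop this term is added to the adversarial losses of both discriminators; it is the cycle term that ties the two unpaired domains together.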

    Single-image snow removal based on an attention mechanism and a generative adversarial network


    Rain Removal in Traffic Surveillance: Does it Matter?

    Varying weather conditions, including rainfall and snowfall, are generally regarded as a challenge for computer vision algorithms. One proposed solution to the challenges induced by rain and snowfall is to artificially remove the rain from images or video using rain removal algorithms. The promise of these algorithms is that the de-rained image frames will improve the performance of subsequent segmentation and tracking algorithms. However, rain removal algorithms are typically evaluated only on their ability to remove synthetic rain from a small subset of images; their behavior on real-world videos, when integrated with a typical computer vision pipeline, is currently unknown. In this paper, we review the existing rain removal algorithms and propose a new dataset consisting of 22 traffic surveillance sequences under a broad variety of weather conditions, all of which include either rain or snowfall. We propose a new evaluation protocol that evaluates rain removal algorithms on their ability to improve the performance of subsequent segmentation, instance segmentation, and feature tracking algorithms under rain and snow. If successful, the de-rained frames of a rain removal algorithm should improve segmentation performance and increase the number of accurately tracked features. The results show that a recent single-frame-based rain removal algorithm increases segmentation performance by 19.7% on our proposed dataset, but it decreases feature tracking performance and shows mixed results with recent instance segmentation methods. However, the best video-based rain removal algorithm improves feature tracking accuracy by 7.72%. (Comment: Published in IEEE Transactions on Intelligent Transportation Systems.)
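The evaluation protocol above scores a rain removal algorithm by the relative change it induces in a downstream metric (segmentation performance, tracked-feature accuracy) rather than by pixel similarity to a synthetic ground truth. A minimal sketch of that comparison, with hypothetical metric values chosen purely for illustration:

```python
def relative_improvement(baseline, derained):
    """Relative change (%) in a downstream metric, e.g. segmentation
    performance, measured on raw frames (baseline) versus de-rained
    frames (derained). Positive means de-raining helped."""
    return 100.0 * (derained - baseline) / baseline

# Hypothetical scores for one sequence (not values from the paper):
seg_raw, seg_derained = 0.50, 0.60
track_raw, track_derained = 0.80, 0.75

seg_gain = relative_improvement(seg_raw, seg_derained)      # +20.0%
track_gain = relative_improvement(track_raw, track_derained)  # -6.25%
```

This framing is what allows the mixed results reported above: the same algorithm can help one downstream task (positive gain) while hurting another (negative gain).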

    Model-based occlusion disentanglement for image-to-image translation

    Image-to-image translation is affected by entanglement phenomena, which may occur when the target data contain occlusions such as raindrops, dirt, etc. Our unsupervised model-based learning disentangles the scene from the occlusions, while benefiting from an adversarial pipeline to regress the physical parameters of the occlusion model. The experiments demonstrate that our method is able to handle varying types of occlusions and generate highly realistic translations, qualitatively and quantitatively outperforming the state of the art on multiple datasets. (Comment: ECCV 2020.)