
    Rain Removal in Traffic Surveillance: Does it Matter?

    Varying weather conditions, including rainfall and snowfall, are generally regarded as a challenge for computer vision algorithms. One proposed solution to the challenges induced by rain and snowfall is to artificially remove the rain from images or video using rain removal algorithms. The promise of these algorithms is that the rain-removed image frames will improve the performance of subsequent segmentation and tracking algorithms. However, rain removal algorithms are typically evaluated only on their ability to remove synthetic rain from a small subset of images; their behavior on real-world videos, when integrated with a typical computer vision pipeline, is currently unknown. In this paper, we review the existing rain removal algorithms and propose a new dataset of 22 traffic surveillance sequences covering a broad variety of weather conditions, all including either rain or snowfall. We propose a new evaluation protocol that evaluates rain removal algorithms on their ability to improve the performance of subsequent segmentation, instance segmentation, and feature tracking algorithms under rain and snow. If successful, the de-rained frames of a rain removal algorithm should improve segmentation performance and increase the number of accurately tracked features. The results show that a recent single-frame-based rain removal algorithm increases segmentation performance by 19.7% on our proposed dataset, but decreases feature tracking performance and shows mixed results with recent instance segmentation methods. The best video-based rain removal algorithm, however, improves feature tracking accuracy by 7.72%.
    Comment: Published in IEEE Transactions on Intelligent Transportation Systems
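The protocol described above scores a deraining algorithm by its downstream effect on segmentation rather than by pixel fidelity. A minimal sketch of that idea, using toy binary masks and a simple IoU score (all names and the toy data are illustrative, not the paper's code):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two binary segmentation masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

# Toy ground-truth vehicle mask and two segmentation outputs:
# one from the rainy frame, one from the derained frame.
gt = np.zeros((8, 8), bool)
gt[2:6, 2:6] = True
pred_rainy = np.zeros_like(gt)
pred_rainy[2:6, 3:6] = True       # rain streaks obscure one column of the vehicle
pred_derained = np.zeros_like(gt)
pred_derained[2:6, 2:6] = True    # deraining recovers the full mask

score_rainy = iou(pred_rainy, gt)
score_derained = iou(pred_derained, gt)
relative_gain = (score_derained - score_rainy) / score_rainy * 100
print(f"IoU rainy={score_rainy:.2f}, derained={score_derained:.2f}, "
      f"gain={relative_gain:.1f}%")
```

Reporting the relative change in a downstream metric, as in this sketch, is what lets the paper conclude that deraining helps segmentation (+19.7%) while hurting feature tracking.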

    Video Deraining Mechanism by Preserving the Temporal Consistency and Intrinsic Properties of Rain Streaks

    Video deraining is an active research area in which several techniques have been introduced to achieve higher visual quality. Video deraining aims to eliminate rain streaks from videos so that overall video quality is enhanced. Existing frameworks attempt to accurately eliminate rain streaks, but there is still room for improvement in preserving the temporal consistency and the intrinsic properties of rain streaks. This work addresses that gap with a combination of handcrafted and deep priors that extract and characterize the nature of rain streaks across different dimensions. The proposed approach comprises three main steps: prior extraction, derain modelling, and optimization. Four major priors are extracted from the frames: the gradient prior (GP) and sparse prior (SP) are extracted from the rain streaks, while the smooth temporal prior (STP) and deep prior (DP) are extracted from the clean video. Unidirectional total variation (UTV) is applied to extract the GP, and the L1 norm is used to extract the SP and STP. The DP is extracted from the clean frames using a deep-learning-based residual gated recurrent deraining network (Res-GRRN). Derain modelling is carried out based on the extracted priors, and the stochastic alternating direction multiplier method (SADMM) is used to solve the optimization problem. The proposed approach is implemented in Python and evaluated on a real-world dataset. The overall PSNR achieved by the proposed approach is 39.193 dB, higher than that of existing methods.
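The gradient and sparse priors above exploit two properties of rain streaks: they are roughly unidirectional (so a streak layer varies little along the streak direction) and sparse (so its L1 norm is small). A toy sketch of both measures, under the assumption that streaks run vertically (the function names and example layer are illustrative, not the paper's implementation):

```python
import numpy as np

def unidirectional_tv(layer: np.ndarray) -> float:
    """Total variation along the vertical axis: a near-vertical rain
    streak layer should change very little from row to row."""
    return float(np.abs(np.diff(layer, axis=0)).sum())

def l1_sparsity(layer: np.ndarray) -> float:
    """L1 norm used as a sparsity prior on the streak layer."""
    return float(np.abs(layer).sum())

# Toy 4x4 "frame": flat background plus one vertical streak in column 1.
background = np.full((4, 4), 0.5)
streaks = np.zeros((4, 4))
streaks[:, 1] = 0.3
frame = background + streaks

utv = unidirectional_tv(streaks)   # 0 for a perfectly vertical streak
l1 = l1_sparsity(streaks)          # small: only 4 of 16 entries are nonzero
print(f"UTV={utv:.3f}, L1={l1:.3f}")
```

In the paper's formulation these quantities would appear as penalty terms in the derain model that SADMM then minimizes; the sketch only shows why they discriminate streaks from background.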

    From Sky to the Ground: A Large-scale Benchmark and Simple Baseline Towards Real Rain Removal

    Learning-based image deraining methods have made great progress. However, the lack of large-scale, high-quality paired training samples is the main bottleneck hampering real image deraining (RID). To address this dilemma and advance RID, we construct a Large-scale High-quality Paired real rain benchmark (LHP-Rain), including 3000 video sequences with 1 million high-resolution (1920×1080) frame pairs. The advantages of the proposed dataset over existing ones are three-fold: higher-diversity and larger-scale rain, higher-resolution images, and higher-quality ground truth. Specifically, the real rains in LHP-Rain contain not only the classical rain streak/veiling/occlusion in the sky, but also the splashing on the ground overlooked by the deraining community. Moreover, we propose a novel robust low-rank tensor recovery model to generate the GT, better separating the static background from the dynamic rain. In addition, we design a simple transformer-based single-image deraining baseline, which simultaneously utilizes self-attention and cross-layer attention within the image and rain layers with discriminative feature representation. Extensive experiments verify the superiority of the proposed dataset and deraining method over the state of the art.
    Comment: Accepted by ICCV 202
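Paired frame data like LHP-Rain's makes full-reference metrics possible: a derained frame can be scored in dB against its recovered static-background ground truth. A minimal PSNR sketch on toy data (the arrays are illustrative; the formula is the standard one, not code from the paper):

```python
import numpy as np

def psnr(img: np.ndarray, ref: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a derained frame and its GT."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak**2 / mse))

# Toy pair: GT background and a "derained" frame with a small residual error.
gt = np.full((16, 16), 128.0)
derained = gt + 2.0              # uniform residual of 2 gray levels
print(f"{psnr(derained, gt):.2f} dB")
```

Higher PSNR means less residual rain and less background distortion; on a benchmark with real paired frames, this metric measures actual deraining quality rather than fit to synthetic streaks.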