
    Quantum image rain removal: second-order photon number fluctuation correlations in the time domain

    Falling raindrops are usually considered a purely negative factor for traditional optical imaging because they generate not only rain streaks but also rain fog, degrading the visual quality of images. However, this work demonstrates that the image degradation caused by falling raindrops can be eliminated by the raindrops themselves. The temporal second-order correlation properties of the photon number fluctuations introduced by falling raindrops have a remarkable attribute: rain streak photons and rain fog photons exhibit no stable second-order photon number correlation, whereas photons that do not interact with raindrops do. This fundamental difference means that the noise caused by falling raindrops can be removed by measuring the second-order photon number fluctuation correlation in the time domain. Simulation and experimental results demonstrate that the rain removal effect of this method even surpasses that of deep learning methods when the integration time of each measurement event is short. This highly efficient quantum rain removal method can be used independently or integrated into deep learning algorithms to provide front-end processing and high-quality material for deep learning. Comment: 5 pages, 7 figures
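    The measurement the abstract describes can be illustrated with a short sketch: for each pixel one records a photon-count time series and computes the normalized second-order fluctuation correlation g²(τ); pixels whose correlation is stable are kept as "clean" light. The function name and normalization below are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def second_order_correlation(counts, max_lag):
    """Normalized second-order photon-number fluctuation correlation
    g2(tau) = <dN(t) dN(t+tau)> / <N>^2 for a photon-count time series."""
    counts = np.asarray(counts, dtype=float)
    dn = counts - counts.mean()              # photon-number fluctuation dN(t)
    g2 = np.empty(max_lag + 1)
    for tau in range(max_lag + 1):
        # time-averaged product of fluctuations at delay tau
        g2[tau] = np.mean(dn[: len(dn) - tau] * dn[tau:]) / counts.mean() ** 2
    return g2
```

    At zero delay this reduces to the count variance over the squared mean, so a stationary (non-raindrop) signal yields a stable value across delays, while raindrop-scattered photons do not.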

    Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery

    When images are taken against strong light sources, they often contain heterogeneous flare artifacts. These artifacts can significantly degrade image visual quality and downstream computer vision tasks. Because collecting real pairs of flare-corrupted/flare-free images for training flare removal models is challenging, current methods synthesize data with a direct-add approach. However, these methods do not account for automatic exposure and tone mapping in the image signal processing (ISP) pipeline, which limits the generalization capability of deep models trained on such data. Moreover, existing methods struggle to handle multiple light sources because of the different sizes, shapes, and illuminance of the various sources. In this paper, we improve lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light source recovery strategy. The new pipeline approaches realistic imaging by discriminating local and global illumination through convex combination, avoiding global illumination shifts and local over-saturation. Our strategy for recovering multiple light sources convexly averages the input and output of the neural network based on illuminance levels, avoiding the need for a hard threshold to identify light sources. We also contribute a new flare removal testing dataset containing flare-corrupted images captured by ten types of consumer electronics, which facilitates verification of the generalization capability of flare removal methods. Extensive experiments show that our solution effectively improves lens flare removal and pushes the frontier toward more general situations. Comment: ICCV 202
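    The light-source recovery idea, convexly averaging the network's input and output with luminance-driven weights instead of a hard threshold, can be sketched as follows. The luminance proxy, the ramp thresholds, and the function name are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def blend_by_illuminance(inp, out, thresh_low=0.85, thresh_high=0.99):
    """Convexly average the flare-corrupted input and the network output
    per pixel, with weights driven by luminance, so very bright light-source
    regions are copied from the input without a hard cut-off."""
    # mean over channels as a luminance proxy (assumption: RGB in [0, 1])
    lum = inp.mean(axis=-1, keepdims=True)
    # smooth ramp from 0 (trust the network) to 1 (keep the input)
    w = np.clip((lum - thresh_low) / (thresh_high - thresh_low), 0.0, 1.0)
    return w * inp + (1.0 - w) * out
```

    Because the weight varies continuously with brightness, pixels near the light-source boundary receive a mixture of both images instead of flipping abruptly between them.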

    From Sky to the Ground: A Large-scale Benchmark and Simple Baseline Towards Real Rain Removal

    Learning-based image deraining methods have made great progress. However, the lack of large-scale, high-quality paired training samples remains the main bottleneck hampering real image deraining (RID). To address this dilemma and advance RID, we construct a Large-scale High-quality Paired real rain benchmark (LHP-Rain), including 3000 video sequences with 1 million high-resolution (1920*1080) frame pairs. The advantages of the proposed dataset over existing ones are three-fold: rain of higher diversity and larger scale, higher-resolution images, and higher-quality ground truth. Specifically, the real rains in LHP-Rain contain not only the classical rain streak/veiling/occlusion in the sky, but also the splashing on the ground overlooked by the deraining community. Moreover, we propose a novel robust low-rank tensor recovery model to generate the ground truth (GT) by better separating the static background from the dynamic rain. In addition, we design a simple transformer-based single image deraining baseline, which simultaneously utilizes self-attention and cross-layer attention between the image and rain layers for discriminative feature representation. Extensive experiments verify the superiority of the proposed dataset and deraining method over the state-of-the-art. Comment: Accepted by ICCV 202
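    The intuition behind low-rank background recovery is that, across a rainy sequence, the static background repeats frame to frame (low rank) while rain is sparse and transient (the residual). The sketch below is a crude rank-r SVD surrogate for that idea, assumed here for illustration; it is not the paper's robust tensor model.

```python
import numpy as np

def separate_background(frames, rank=1):
    """Approximate the static background of a rainy sequence (T x H x W)
    as a low-rank component of the flattened frame stack; the residual
    holds the dynamic rain."""
    T, H, W = frames.shape
    M = frames.reshape(T, H * W)               # one row per frame
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    low = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r reconstruction
    background = low.reshape(T, H, W)
    rain = frames - background
    return background, rain
```

    A robust variant would replace the plain SVD with a sparsity-penalized decomposition so that heavy rain does not leak into the background estimate.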

    MARA-Net: Single Image Deraining Network with Multi-level connections and Adaptive Regional Attentions

    Removing rain streaks from single images is an important problem in various computer vision tasks because rain streaks can degrade outdoor images and reduce their visibility. While recent convolutional neural network-based deraining models have succeeded in capturing rain streaks effectively, recovering the details of rain-free images remains difficult. In this paper, we present a multi-level connection and adaptive regional attention network (MARA-Net) to properly restore the original background textures in rainy images. The first main idea is a multi-level connection design that repeatedly connects multi-level features of the encoder network to the decoder network. Multi-level connections encourage the decoding process to use the feature information of all levels. Channel attention is incorporated into the multi-level connections to learn which level of features is important in the decoding process of the current level. The second main idea is a wide regional non-local block (WRNL). As rain streaks primarily exhibit a vertical distribution, we divide the grid of the image into horizontally-wide patches and apply a non-local operation to each region to explore the rich rain-free background information. Experimental results on both synthetic and real-world rainy datasets demonstrate that the proposed model significantly outperforms existing state-of-the-art models. Furthermore, the results of the joint deraining and segmentation experiment show that our model contributes effectively to other vision tasks.
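    The wide regional non-local idea can be sketched as follows: split the feature map into horizontal strips spanning the full width (matching the mostly vertical extent of rain streaks), then run a softmax non-local operation among the tokens inside each strip. This is a minimal NumPy illustration of the mechanism, not the paper's exact block (which would also include learned projections).

```python
import numpy as np

def wide_regional_nonlocal(feat, n_rows=4):
    """Apply a softmax non-local (self-attention-style) operation inside
    each of n_rows full-width horizontal strips of feat (H x W x C)."""
    H, W, C = feat.shape
    out = np.empty_like(feat)
    for r in range(n_rows):
        a, b = r * H // n_rows, (r + 1) * H // n_rows
        x = feat[a:b].reshape(-1, C)               # tokens in this strip
        attn = x @ x.T / np.sqrt(C)                # pairwise similarity
        attn = np.exp(attn - attn.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)    # row-wise softmax
        out[a:b] = (attn @ x).reshape(b - a, W, C)
    return out
```

    Restricting attention to wide strips keeps the quadratic cost of the non-local operation bounded while still letting each position aggregate background evidence from across the image width.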

    Counting Crowds in Bad Weather

    Crowd counting has recently attracted significant attention in computer vision due to its wide applications in image understanding. Numerous methods have been proposed and have achieved state-of-the-art performance on real-world tasks. However, existing approaches do not perform well under adverse weather such as haze, rain, and snow, since the visual appearance of crowds in such scenes differs drastically from that in the clear-weather images of typical datasets. In this paper, we propose a method for robust crowd counting in adverse weather scenarios. Instead of using a two-stage approach with separate image restoration and crowd counting modules, our model learns effective features and adaptive queries to account for large appearance variations. With these weather queries, the proposed model can learn the weather information according to the degradation of the input image and be optimized together with the crowd counting module. Experimental results show that the proposed algorithm is effective in counting crowds under different weather types on benchmark datasets. The source code and trained models will be made available to the public. Comment: including supplemental material

    Video Deraining Mechanism by Preserving the Temporal Consistency and Intrinsic Properties of Rain Streaks

    Video deraining is one of the most studied research areas, with several techniques introduced to achieve higher visual quality in the results. Video deraining aims to eliminate rain streaks from videos so that overall video quality is enhanced. Existing frameworks try to eliminate rain streaks accurately, but there is still room for improvement in preserving the temporal consistency and intrinsic properties of rain streaks. This work does so with a combination of handcrafted and deep priors that extract and identify the nature of rain streaks in different dimensions. The proposed work includes three main steps: prior extraction, derain modelling, and optimization. Four major priors are extracted from the frames: the gradient prior (GP) and sparse prior (SP) are extracted from the rain streaks, and the smooth temporal prior (STP) and deep prior (DP) are extracted from the clean video. Unidirectional total variation (UTV) is applied to extract the GP, and L1 normalization is used to extract the SP and STP. The DP is then extracted from the clean frames using a residual gated recurrent deraining network (Res-GRRN) model based on deep learning. Derain modelling is carried out based on the extracted priors, and the stochastic alternating direction method of multipliers (SADMM) algorithm is used to solve the optimization problem. The proposed approach is implemented in Python and evaluated on a real-world dataset. The overall PSNR achieved by the proposed approach is 39.193 dB, higher than that of existing methods.
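    Two of the handcrafted priors have simple, standard forms that can be sketched directly: a unidirectional total variation as the gradient prior, and L1 soft-thresholding, the shrinkage step an ADMM-style solver applies for an L1-regularized term, for the sparse prior. The exact weighting and coupling of the priors in the paper's model is not reproduced here; this is a minimal sketch under those assumptions.

```python
import numpy as np

def utv_gradient_prior(frame):
    """Unidirectional total variation: L1 norm of vertical finite
    differences only, exploiting the near-vertical direction of
    rain streaks (sharp across rows, smooth across columns)."""
    return np.abs(np.diff(frame, axis=0)).sum()

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1: the element-wise shrinkage
    step used for the sparse prior inside an ADMM iteration."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

    In each ADMM iteration the rain-layer estimate is shrunk by `soft_threshold`, while the UTV term penalizes vertical gradients so the recovered background stays smooth along the streak direction.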