
    Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding

    This work addresses the problem of semantic scene understanding under dense fog. Although considerable progress has been made in semantic scene understanding, it is mainly related to clear-weather scenes. Extending recognition methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both synthetic and real foggy data. In addition, we present three other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) the Foggy Zurich dataset comprising 3808 real foggy images, with pixel-level semantic annotations for 16 images with dense fog. Our experiments show that 1) our fog simulation slightly outperforms a state-of-the-art competing simulation with respect to the task of semantic foggy scene understanding (SFSU); 2) CMAda improves the performance of state-of-the-art models for SFSU significantly by leveraging unlabeled real foggy data. The datasets and code are publicly available. Comment: final version, ECCV 2018
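    The abstract describes a staged adaptation from light synthetic fog towards dense real fog. Below is a minimal sketch of that curriculum idea, assuming a PyTorch segmentation model; the staging schedule, loader formats, and the use of pseudo-labels on real foggy images are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch of a fog-density curriculum: adapt a segmentation model in stages
# ordered from light to dense fog, using ground-truth labels on synthetic fog and
# model-generated pseudo-labels on real fog at each stage (illustrative only).
import torch
import torch.nn as nn

def adapt_through_fog_curriculum(model, stages, optimizer, epochs_per_stage=1):
    """stages: list of (synthetic_loader, real_loader) ordered from light to dense fog.
    synthetic_loader yields (images, labels); real_loader yields images only."""
    ce = nn.CrossEntropyLoss(ignore_index=255)
    for synthetic_loader, real_loader in stages:
        # 1) Pseudo-label the real foggy images of this stage with the current model.
        pseudo = []
        model.eval()
        with torch.no_grad():
            for images in real_loader:
                pseudo.append((images, model(images).argmax(dim=1)))
        # 2) Fine-tune on synthetic fog (ground truth) plus real fog (pseudo-labels).
        model.train()
        for _ in range(epochs_per_stage):
            for images, labels in list(synthetic_loader) + pseudo:
                optimizer.zero_grad()
                ce(model(images), labels).backward()
                optimizer.step()
    return model
```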

    Haze Removal in Color Images Using Hybrid Dark Channel Prior and Bilateral Filter

    Haze formation is the combination of airlight and attenuation: attenuation decreases the contrast and airlight increases the whiteness in the scene. Atmospheric conditions created by floating particles such as fog and haze severely degrade image quality. Removing haze from a single image of a weather-degraded scene is a difficult task because the haze depends on unknown depth information, yet haze removal algorithms are beneficial for many vision applications. Most existing methods neglect important issues such as noise and uneven illumination, which remain visible in their output images, and no single technique is accurate under all circumstances. This dissertation proposes a new haze removal technique, HDCP, which integrates the dark channel prior with CLAHE to remove haze from color images and uses a bilateral filter to reduce noise. Poor visibility not only degrades the perceptual image quality but also affects the performance of computer vision algorithms such as surveillance, object detection, tracking and segmentation. The proposed algorithm is designed and implemented in MATLAB. A comparison between the dark channel prior and the proposed algorithm, based on standard quality metrics, shows that the proposed algorithm produces quite effective results.
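    The following is a minimal sketch of the pipeline the abstract names: dark-channel-prior dehazing, followed by CLAHE for contrast and a bilateral filter for noise. It is written in Python with OpenCV rather than MATLAB, and patch sizes, thresholds and filter constants are illustrative assumptions, not the dissertation's tuned values.

```python
# Hedged sketch: DCP dehazing + CLAHE + bilateral filtering (illustrative constants).
import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Minimum over color channels, then a min-filter (erosion) over a local patch.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(img.min(axis=2), kernel)

def dehaze_hdcp_like(bgr, omega=0.95, t0=0.1):
    img = bgr.astype(np.float32) / 255.0
    dark = dark_channel(img)
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate and scene radiance recovery (haze imaging model).
    t = 1.0 - omega * dark_channel(img / A)
    t = np.clip(t, t0, 1.0)[..., None]
    J = np.clip((img - A) / t + A, 0.0, 1.0)
    J = (J * 255).astype(np.uint8)
    # CLAHE on the luminance channel to restore local contrast.
    lab = cv2.cvtColor(J, cv2.COLOR_BGR2LAB)
    lab[..., 0] = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(lab[..., 0])
    J = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    # Bilateral filter: edge-preserving noise reduction on the dehazed result.
    return cv2.bilateralFilter(J, d=9, sigmaColor=50, sigmaSpace=50)
```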

    Real-time Defogging of Single Image of IoTs-based Surveillance Video Based on MAP

    Due to atmospheric scattering in foggy weather, current defogging methods for surveillance video images cannot estimate the fog density of the image. This paper proposes a real-time defogging algorithm for single images of IoTs surveillance video based on maximum a posteriori (MAP) estimation. Given a single-image sequence, the posterior probability of the high-resolution image is maximized, improving MAP-based super-resolution image reconstruction. Fuzzy classification is introduced to estimate the atmospheric light intensity, and the defogged surveillance image is recovered through the atmospheric dissipation function. After defogging, the improved algorithm achieves the highest signal-to-noise ratio, up to 40.99 dB. The average defogging time over 7 experimental surveillance video images is only 2.22 s, indicating good real-time performance. It can be concluded that the proposed algorithm has excellent defogging performance and strong applicability.
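    For reference, the recovery step mentioned above rests on the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)). The sketch below only shows how the fog-free radiance J is recovered once the atmospheric light A and the transmission ("dissipation") map t are known; the paper's fuzzy classification and MAP reconstruction steps are not reproduced, and the lower bound t_min is an illustrative assumption.

```python
# Hedged illustration: invert the atmospheric scattering model given A and t.
import numpy as np

def invert_scattering_model(I, A, t, t_min=0.1):
    """I: HxWx3 foggy image in [0,1]; A: length-3 atmospheric light; t: HxW transmission."""
    t = np.clip(t, t_min, 1.0)[..., None]   # avoid division by near-zero transmission
    J = (I - A) / t + A                     # invert I = J*t + A*(1 - t)
    return np.clip(J, 0.0, 1.0)
```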

    Fast single image defogging with robust sky detection

    Haze is a source of unreliability for computer vision applications in outdoor scenarios, and it is usually caused by atmospheric conditions. The Dark Channel Prior (DCP) has shown remarkable results in image defogging, with three main limitations: 1) high time consumption, 2) artifact generation, and 3) sky-region over-saturation. Therefore, current work has focused on improving processing time without losing restoration quality and avoiding image artifacts during image defogging. Hence, in this research, a novel methodology based on depth approximations through DCP, local Shannon entropy, and the Fast Guided Filter is proposed for reducing artifacts and improving image recovery on sky regions with low computation time. The proposed method's performance is assessed using more than 500 images from three datasets: the Hybrid Subjective Testing Set from Realistic Single Image Dehazing (HSTS-RESIDE), the Synthetic Objective Testing Set from RESIDE (SOTS-RESIDE) and HazeRD. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods in the reviewed literature, which is validated qualitatively and quantitatively through Peak Signal-to-Noise Ratio (PSNR), Naturalness Image Quality Evaluator (NIQE) and Structural SIMilarity (SSIM) index on retrieved images, considering different visual ranges, under distinct illumination and contrast conditions. Analyzing images with various resolutions, the method proposed in this work shows the lowest processing time under similar software and hardware conditions. This work was supported in part by the Centro en Investigaciones en Óptica (CIO) and the Consejo Nacional de Ciencia y Tecnología (CONACYT), and in part by the Barcelona Supercomputing Center.
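    One ingredient named in the abstract is a robust sky detection step. The sketch below illustrates one plausible reading, assuming the sky is detected as bright pixels with low local Shannon entropy so that the DCP transmission can be handled differently there and over-saturation avoided; the thresholds and the largest-component heuristic are assumptions, not the paper's exact method.

```python
# Hedged sketch: sky detection via low local entropy + high brightness (illustrative).
import cv2
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

def detect_sky(bgr, entropy_patch=9, entropy_thresh=4.0, brightness_thresh=170):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    local_entropy = entropy(gray, disk(entropy_patch))   # low in smooth regions such as sky
    sky = (local_entropy < entropy_thresh) & (gray > brightness_thresh)
    # Keep only the largest connected component as the sky mask.
    n, labels = cv2.connectedComponents(sky.astype(np.uint8))
    if n > 1:
        largest = 1 + np.argmax([(labels == i).sum() for i in range(1, n)])
        sky = labels == largest
    return sky
```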

    Markov Random Field model for single image defogging

    Fog reduces contrast and thus the visibility of vehicles and obstacles for drivers. Each year, this causes traffic accidents. Fog is caused by a high concentration of very fine water droplets in the air. When light hits these droplets, it is scattered, resulting in a dense white background called the atmospheric veil. As pointed out in [1], Advanced Driver Assistance Systems (ADAS) based on the display of defogged images from a camera may help the driver by improving object visibility in the image and thus may lead to a decrease in fatality and injury rates. In the last few years, the problem of single image defogging has attracted attention in the image processing community. The problem is ill-posed, and several methods have been proposed; however, few of these methods are dedicated to processing road images. One of the first exceptions is the method of [2], [1], where a planar constraint is introduced to improve the restoration of the road area, assuming an approximately flat road. Since the single image defogging problem is ill-posed, a Bayesian approach is well suited to cast it as an inference problem. A first Markov Random Field (MRF) approach to the problem has been proposed recently in [3]; however, this method is not dedicated to road images. In this paper, we propose a novel MRF model of the single image defogging problem which applies to all kinds of images but can also easily be refined to obtain better results on road images using the planar constraint. A comparative study and quantitative evaluation with several state-of-the-art algorithms is presented. This evaluation demonstrates that the proposed MRF model allows deriving a new algorithm which produces better-quality results, in particular in the case of a noisy input image.
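    The planar constraint mentioned above can be made concrete with a small sketch: assuming a flat road seen by a forward-looking camera, scene depth grows roughly like 1/(v - v_h) below the horizon row v_h, which gives a per-row bound on the atmospheric veil via Koschmieder's law. The camera constant, extinction coefficient and airlight value below are illustrative assumptions, not the paper's calibration.

```python
# Hedged sketch: per-pixel atmospheric-veil bound under a flat-road assumption.
import numpy as np

def planar_veil_bound(height, width, v_horizon, A=0.95, beta=0.05, lam=200.0):
    """Veil bound V(v) = A * (1 - exp(-beta * d(v))) with d(v) = lam / (v - v_horizon)."""
    rows = np.arange(height, dtype=np.float32)
    depth = np.full(height, np.inf, dtype=np.float32)
    below = rows > v_horizon
    depth[below] = lam / (rows[below] - v_horizon)   # depth from row index on a planar road
    veil = A * (1.0 - np.exp(-beta * depth))         # Koschmieder's law
    veil[~below] = A                                 # at or above the horizon: full veil
    return np.tile(veil[:, None], (1, width))
```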