18 research outputs found

    Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding

    This work addresses the problem of semantic scene understanding under dense fog. Although considerable progress has been made in semantic scene understanding, it is mainly related to clear-weather scenes. Extending recognition methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both synthetic and real foggy data. In addition, we present three other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) the Foggy Zurich dataset comprising 3808 real foggy images, with pixel-level semantic annotations for 16 images with dense fog. Our experiments show that 1) our fog simulation slightly outperforms a state-of-the-art competing simulation with respect to the task of semantic foggy scene understanding (SFSU); 2) CMAda improves the performance of state-of-the-art models for SFSU significantly by leveraging unlabeled real foggy data. The datasets and code are publicly available. Comment: final version, ECCV 201
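The fog simulation referred to above rests on the standard optical model of fog. As a minimal sketch (homogeneous fog, no semantic input; parameter values are illustrative, not the paper's):

```python
import numpy as np

def add_synthetic_fog(image, depth, beta=0.02, airlight=0.9):
    """Homogeneous fog via the standard optical model
    I(x) = R(x) * t(x) + L * (1 - t(x)),  t(x) = exp(-beta * d(x)).
    `image` is float RGB in [0, 1]; `depth` is metric depth in metres."""
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmittance
    return image * t + airlight * (1.0 - t)   # attenuated scene + veil

# A mid-grey scene at 10 m vs. 200 m: the distant half is washed out
img = np.full((2, 2, 3), 0.5)
dep = np.array([[10.0, 10.0], [200.0, 200.0]])
foggy = add_synthetic_fog(img, dep, beta=0.02)
```

Denser fog corresponds to a larger extinction coefficient `beta`; a curriculum from light to dense fog can be generated by sweeping it upward.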

    Semantic Understanding of Foggy Scenes with Purely Synthetic Data

    This work addresses the problem of semantic scene understanding under foggy road conditions. Although marked progress has been made in semantic scene understanding over recent years, it is mainly concentrated on clear-weather outdoor scenes. Extending semantic segmentation methods to adverse weather conditions like fog is crucially important for outdoor applications such as self-driving cars. In this paper, we propose a novel method which uses purely synthetic data to improve the performance on unseen real-world foggy scenes captured in the streets of Zurich and its surroundings. Our results highlight the potential and power of photo-realistic synthetic images for training and especially fine-tuning deep neural nets. Our contributions are threefold: 1) we created a purely synthetic, high-quality foggy dataset of 25,000 unique outdoor scenes, which we call Foggy Synscapes and plan to release publicly; 2) we show that with this data we outperform previous approaches on real-world foggy test data; 3) we show that a combination of our data and previously used data can further improve the performance on real-world foggy data. Comment: independent class IoU scores corrected for the BiSiNet architecture
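Since the correction mentioned above concerns independent class IoU scores, a minimal sketch of how per-class intersection-over-union is computed for semantic segmentation (array contents are illustrative):

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Independent per-class intersection-over-union from flattened
    label maps; classes absent from both prediction and ground truth
    are left as NaN so they can be skipped when averaging."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious[c] = inter / union
    return ious

pred = np.array([0, 0, 1, 1, 2])
gt   = np.array([0, 1, 1, 1, 2])
iou = per_class_iou(pred, gt, num_classes=3)
```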

    Visibility Estimation of Traffic Signals under Rainy Weather Conditions for Smart Driving Support

    The aim of this work is to support the driver by notifying them of traffic-signal information in accordance with its visibility. To avoid traffic accidents, the driver should detect and recognize surrounding objects, especially traffic signals. However, under rainy weather conditions it is more difficult for drivers to detect or recognize objects in the road environment than under fine weather conditions. Therefore, this paper proposes an image-processing method for estimating the visibility of traffic signals for drivers under rainy weather conditions. The proposed method is based on the concept of visual noise known in the field of cognitive science, and extracts two types of visual noise features which are considered to affect the visibility of traffic signals. We expect to improve the accuracy of visibility estimation by combining the visual noise features with the texture feature introduced in a previous work. Experimental results showed that the proposed method could estimate the visibility of traffic signals more accurately under rainy weather conditions.
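The abstract does not specify the two visual-noise features; as a hedged illustration, two common stand-ins for visual clutter around a target are edge density and luminance variance:

```python
import numpy as np

def visual_noise_features(gray):
    """Two illustrative clutter measures for the region around a
    traffic signal: edge density (mean gradient magnitude) and
    luminance variance.  These are common stand-ins only; the
    paper's exact visual-noise features are not reproduced here."""
    gy, gx = np.gradient(gray.astype(float))
    edge_density = float(np.mean(np.hypot(gx, gy)))
    luminance_var = float(np.var(gray))
    return edge_density, luminance_var

flat = np.zeros((8, 8))                                     # clean background
board = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # cluttered background
```

A cluttered background scores higher on both measures than a uniform one, matching the intuition that visual noise lowers signal visibility.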

    WeatherNet: Recognising weather and visual conditions from street-level images using deep residual learning

    Extracting information related to weather and visual conditions at a given time and place is indispensable for scene awareness, which strongly impacts our behaviours, from simply walking in a city to riding a bike, driving a car, or using autonomous driver assistance. Despite the significance of this subject, it has still not been fully addressed by machine intelligence relying on deep learning and computer vision to detect weather and visual conditions as multiple labels with a unified method that can easily be put into practice. What has been achieved to date is rather a set of sectorial models that address a limited number of labels and do not cover the wide spectrum of weather and visual conditions; moreover, weather and visual conditions are often addressed individually. In this paper, we introduce a novel framework to automatically extract this information from street-level images relying on deep learning and computer vision, using a unified method without any pre-defined constraints on the processed images. A pipeline of four deep Convolutional Neural Network (CNN) models, called WeatherNet, is trained with residual learning using the ResNet50 architecture to extract various weather and visual conditions: dawn/dusk, day, and night for time detection; glare for lighting conditions; and clear, rainy, snowy, and foggy for weather conditions. WeatherNet shows strong performance in extracting this information from user-defined images or video streams, and can be used in applications including, but not limited to, autonomous vehicles and driver-assistance systems, behaviour tracking, safety-related research, and helping policy-makers better understand cities through images. Comment: 11 pages, 8 figures
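The four-model pipeline can be abstracted as independent task heads whose outputs are merged into one multi-label scene description. A hypothetical sketch with stub classifiers standing in for the trained ResNet50 models (all labels below are illustrative):

```python
def combine_predictions(heads, image):
    """Run each task-specific classifier ('head') on the same image and
    merge the results into one multi-label scene description.  In the
    paper each head is a ResNet50-based CNN; here each is abstracted
    as a callable image -> label."""
    return {task: head(image) for task, head in heads.items()}

# Stub heads standing in for trained models
heads = {
    "time":    lambda img: "day",
    "glare":   lambda img: "no-glare",
    "weather": lambda img: "foggy",
}
labels = combine_predictions(heads, image=None)
```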

    Markov Random Field model for single image defogging

    Fog reduces contrast and thus the visibility of vehicles and obstacles for drivers; each year, this causes traffic accidents. Fog is caused by a high concentration of very fine water droplets in the air. When light hits these droplets, it is scattered, resulting in a dense white background called the atmospheric veil. As pointed out in [1], Advanced Driver Assistance Systems (ADAS) based on the display of defogged images from a camera may help the driver by improving object visibility in the image, and may thus lead to a decrease in fatality and injury rates. In the last few years, the problem of single image defogging has attracted attention in the image processing community. Because the problem is ill-posed, several methods have been proposed; however, few of these methods are dedicated to the processing of road images. One of the first exceptions is the method in [2], [1], where a planar constraint is introduced to improve the restoration of the road area, assuming an approximately flat road. Since the single image defogging problem is ill-posed, a Bayesian approach is well suited to casting it as an inference problem. A first Markov Random Field (MRF) approach to the problem was proposed recently in [3]; however, that method is not dedicated to road images. In this paper, we propose a novel MRF model of the single image defogging problem which applies to all kinds of images but can also easily be refined to obtain better results on road images using the planar constraint. A comparative study and quantitative evaluation with several state-of-the-art algorithms is presented. This evaluation demonstrates that the proposed MRF model leads to a new algorithm which produces better-quality results, in particular on noisy input images.
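Whatever model infers the atmospheric veil (an MRF in this paper), the final restoration step inverts the fog model. A minimal sketch assuming the veil has already been estimated (values and the clipping bounds are illustrative):

```python
import numpy as np

def restore_from_veil(image, veil, airlight=1.0):
    """Final contrast-restoration step: invert the fog model
    I = R * t + A * (1 - t) given an estimated atmospheric veil
    V = A * (1 - t), so that R = (I - V) / (1 - V / A).  The veil
    is assumed already inferred; this sketch only performs the
    inversion."""
    t = np.clip(1.0 - veil / airlight, 0.1, 1.0)   # avoid division by zero
    return np.clip((image - veil) / t, 0.0, 1.0)

img  = np.array([0.6, 0.7])
veil = np.array([0.4, 0.4])
restored = restore_from_veil(img, veil)
```

Subtracting the veil and renormalising by the transmittance is what restores the contrast lost to scattering.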

    Daytime visibility range monitoring through use of a roadside camera

    Based on a road meteorology standard, we present a roadside camera-based system able to detect daytime fog and to estimate the visibility range. Two detection algorithms, both based on a daytime fog model, are presented along with a process to combine their outputs. Unlike previous methods, the system takes into account the 3-D scene structure, filters moving objects from the region of interest through a background-modelling approach, and detects the cause of the visibility reduction. A study of the system's accuracy with respect to the camera characteristics leads to a specification of the camera required for the system. Some results obtained using a reduced-scale prototype of the system are presented. Finally, an outlook on future work is given.
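Once the atmospheric extinction coefficient is estimated, the meteorological visibility range follows from Koschmieder's law; a minimal sketch using the common 5% contrast threshold (the specific threshold used by the paper's standard is not stated here):

```python
import math

def visibility_range(beta):
    """Meteorological visibility from the atmospheric extinction
    coefficient beta (in m^-1), via Koschmieder's law with a 5%
    contrast threshold: V_met = -ln(0.05) / beta, roughly 3 / beta."""
    return -math.log(0.05) / beta

v = visibility_range(0.06)   # dense fog: roughly 50 m of visibility
```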

    Advisory speed for Intelligent Speed Adaptation in adverse conditions

    In this paper, a novel approach to computing advisory speeds for an adaptive Intelligent Speed Adaptation (ISA) system is proposed. The method is designed to be embedded in vehicles. It estimates an appropriate speed by fusing, in real time, the outputs of ego-vehicle sensors which detect adverse conditions with roadway characteristics transmitted by distant servers. The method presents two major novelties. First, the 85th percentile of observed speeds (V85) is estimated along a road; this speed profile is considered a reference speed, practised and practicable in ideal conditions by a lone vehicle. In adverse conditions, this reference speed is modulated to account for lowered friction and lowered visibility distance (a top-down approach). Second, the method allows us to take into account the potential seriousness of crashes using a generic accident scenario. Within this scenario, the difference in speed that should be applied in adverse conditions is estimated so that the global injury risk is the same as in ideal conditions.
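One way to picture the top-down modulation is to cap V85 by the highest speed whose stopping distance still fits within the visibility distance under lowered friction. A hedged sketch (the capping rule, reaction time, and friction values are illustrative, not the paper's exact model):

```python
import math

def advisory_speed(v85, visibility, friction, reaction_time=2.0, g=9.81):
    """Cap the reference speed V85 (m/s) so that the stopping distance
    v * t_r + v**2 / (2 * mu * g) fits inside the visibility distance
    under the lowered friction mu."""
    a = 1.0 / (2.0 * friction * g)
    # Solve a*v^2 + t_r*v - visibility = 0 for the positive root
    v_cap = (-reaction_time
             + math.sqrt(reaction_time ** 2 + 4.0 * a * visibility)) / (2.0 * a)
    return min(v85, v_cap)

v = advisory_speed(v85=25.0, visibility=60.0, friction=0.5)  # wet road in fog
```

In ideal conditions the cap exceeds V85 and the reference speed is returned unchanged; in fog, the visibility term dominates and lowers the advisory speed.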

    Visibility And Confidence Estimation Of An Onboard-Camera Image For An Intelligent Vehicle

    More and more drivers nowadays enjoy the convenience brought by advanced driver assistance systems (ADAS), including collision detection, lane keeping, and adaptive cruise control (ACC). However, many assistance functions are still constrained by weather and terrain. On the way towards automated driving, an automatic condition detector is inevitable, since many solutions only work under certain conditions. For the camera, the sensor most commonly used in lane detection and obstacle detection, visibility is one of the important parameters we need to analyze. Although many papers have proposed their own ways to estimate the visibility range, there is little research on how to estimate the confidence of an image. In this thesis, we introduce a new way to estimate visual distance based on a monocular camera, and thereby calculate an overall image confidence. Much progress has been achieved in the past ten years, from restoration of foggy images and real-time fog detection to weather classification. However, each method has its own drawbacks in terms of complexity, cost, or inaccuracy. With these considerations in mind, the proposed method estimates the visibility range from a single vision system. In addition, this method maintains a relatively robust estimation and produces a more accurate result.