
    Estimating meteorological visibility range under foggy weather conditions: A deep learning approach

    © 2018 The Authors. Published by Elsevier Ltd. Systems capable of estimating visibility distances under foggy weather conditions are extremely useful for next-generation cooperative situational awareness and collision avoidance systems. In this paper, we present a brief review of notable approaches for determining visibility distance under foggy weather conditions. We then propose a novel approach based on the combination of a deep learning method for feature extraction and an SVM classifier. We present a quantitative evaluation of the proposed solution and show that our approach provides better performance than an earlier approach based on the combination of an ANN model and a set of global feature descriptors. Our experimental results show that the proposed solution is very promising in support of next-generation situational awareness and cooperative collision avoidance systems. Hence, it can potentially contribute towards safer driving conditions in the presence of fog.
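
    The paper's pipeline pairs deep features with an SVM; the minimal sketch below is an illustration only, with random vectors standing in for CNN features and a plain hinge-loss subgradient loop standing in for a tuned SVM solver (the class layout, dimensions, and learning rates are all made up).

```python
import numpy as np

# Hypothetical two-class setup: 128-D "deep features" for two visibility
# ranges (e.g. below vs. above 50 m), drawn from separated Gaussians.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (100, 128)),
               rng.normal(+1.0, 1.0, (100, 128))])
y = np.hstack([-np.ones(100), np.ones(100)])

# Linear SVM trained with a hinge-loss subgradient step.
w, b, lam, lr = np.zeros(128), 0.0, 1e-3, 0.01
for epoch in range(200):
    margins = y * (X @ w + b)
    mask = margins < 1                    # samples violating the margin
    if mask.any():
        grad_w = lam * w - (y[mask, None] * X[mask]).mean(axis=0)
        grad_b = -y[mask].mean()
    else:
        grad_w, grad_b = lam * w, 0.0
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean(np.sign(X @ w + b) == y)
```

    In practice the features would come from a pretrained network rather than a random generator, and a library SVM with cross-validated hyperparameters would replace the hand-rolled loop.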

    Visibility And Confidence Estimation Of An Onboard-Camera Image For An Intelligent Vehicle

    More and more drivers nowadays enjoy the convenience brought by advanced driver assistance systems (ADAS), including collision detection, lane keeping, and adaptive cruise control (ACC). However, many assistance functions are still constrained by weather and terrain. On the way towards automated driving, an automatic condition detector is indispensable, since many solutions only work under certain conditions. For the camera, the tool most commonly used in lane detection and obstacle detection, visibility is one of the important parameters we need to analyze. Although many papers have proposed their own ways to estimate visibility range, there is little research on how to estimate the confidence of an image. In this thesis, we introduce a new way to estimate visual distance based on a monocular camera, and from it we calculate the overall image confidence. Much progress has been achieved in the past ten years, from restoration of foggy images and real-time fog detection to weather classification. However, each method has its own drawbacks, ranging from complexity and cost to inaccuracy. With these considerations in mind, the new way we propose to estimate visibility range is based on a single vision system. In addition, this method maintains a relatively robust estimation and produces a more accurate result.
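
    The thesis's actual estimator is camera-based and is not reproduced here; the snippet below only shows Koschmieder's law, the standard relation behind most visibility-range work, linking the atmospheric extinction coefficient to the meteorological visibility distance (the range at which contrast drops to 5%).

```python
import math

def meteorological_visibility(beta: float) -> float:
    """Visibility range in metres for extinction coefficient beta (1/m).

    Koschmieder's law with the 5% contrast threshold:
    V = -ln(0.05) / beta  (approximately 2.996 / beta).
    """
    return -math.log(0.05) / beta

# e.g. a moderate fog with beta = 0.03 1/m gives roughly 100 m visibility
v = meteorological_visibility(0.03)
```

    A camera-based method estimates beta (or contrast decay with distance) from the image and then reads off the range with this relation.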

    Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding

    This work addresses the problem of semantic scene understanding under dense fog. Although considerable progress has been made in semantic scene understanding, it is mainly related to clear-weather scenes. Extending recognition methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both synthetic and real foggy data. In addition, we present three other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) the Foggy Zurich dataset comprising 3808 real foggy images, with pixel-level semantic annotations for 16 images with dense fog. Our experiments show that 1) our fog simulation slightly outperforms a state-of-the-art competing simulation with respect to the task of semantic foggy scene understanding (SFSU); 2) CMAda significantly improves the performance of state-of-the-art models for SFSU by leveraging unlabeled real foggy data. The datasets and code are publicly available. Comment: final version, ECCV 2018
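
    The paper's fog simulation additionally exploits semantic input and learned transmittance; as a minimal sketch, the snippet below implements only the standard optical model that underlies such simulation, I = J·t + A·(1 − t) with transmittance t = exp(−β·d), using made-up parameter values.

```python
import numpy as np

def add_synthetic_fog(clear, depth, beta=0.02, airlight=0.9):
    """Apply the standard atmospheric scattering model.

    clear:  HxWx3 image with values in [0, 1]
    depth:  HxW scene depth in metres
    beta:   extinction coefficient (1/m); larger = denser fog
    """
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmittance
    return clear * t + airlight * (1.0 - t)   # attenuated scene + scattered light

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, (4, 4, 3))
depth = np.full((4, 4), 50.0)                 # everything 50 m away
foggy = add_synthetic_fog(img, depth)
```

    Distant pixels converge to the airlight colour, which is why far-away structure washes out first in simulated (and real) fog.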

    Marker based Thermal-Inertial Localization for Aerial Robots in Obscurant Filled Environments

    For robotic inspection tasks in known environments, fiducial markers provide a reliable and low-cost solution for robot localization. However, detection of such markers relies on the quality of RGB camera data, which degrades significantly in the presence of visual obscurants such as fog and smoke. The ability to navigate known environments in the presence of obscurants can be critical for inspection tasks, especially in the aftermath of a disaster. Addressing such a scenario, this work proposes a method for the design of fiducial markers to be used with thermal cameras for the pose estimation of aerial robots. Our low-cost markers are designed to work in the long-wave infrared spectrum, which is not affected by the presence of obscurants, and can be affixed to any object that has a measurable temperature difference with respect to its surroundings. Furthermore, the estimated pose from the fiducial markers is fused with inertial measurements in an extended Kalman filter to remove high-frequency noise and error present in the fiducial pose estimates. The proposed markers and the pose estimation method are experimentally evaluated in an obscurant-filled environment using an aerial robot carrying a thermal camera. Comment: 10 pages, 5 figures, published in the International Symposium on Visual Computing 2018
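
    The paper fuses 6-DoF marker pose with inertial data in a full extended Kalman filter; the toy sketch below shows the same idea on a 1-D position/velocity state with a linear Kalman filter, where noisy marker position fixes correct the drift of noisy inertial prediction (all noise levels and matrices here are invented for illustration).

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
B = np.array([0.5 * dt**2, dt])          # IMU acceleration input mapping
H = np.array([[1.0, 0.0]])               # marker observes position only
Q = np.eye(2) * 1e-4                     # process noise covariance
R = np.array([[0.05]])                   # marker measurement noise covariance

x = np.zeros(2)                          # state: [position, velocity]
P = np.eye(2)
rng = np.random.default_rng(2)

true_pos = 0.0
for k in range(500):
    true_pos += 1.0 * dt                 # ground truth: constant 1 m/s
    # Predict with a noisy inertial input (true acceleration is zero).
    a = rng.normal(0.0, 0.2)
    x = F @ x + B * a
    P = F @ P @ F.T + Q
    # Update with a noisy marker position fix.
    z = true_pos + rng.normal(0.0, 0.1)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

err = abs(x[0] - true_pos)
```

    The EKF version replaces F and H with Jacobians of the nonlinear motion and camera models, but the predict/update structure is the same.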

    Investigation of advanced navigation and guidance system concepts for all-weather rotorcraft operations

    Results are presented of a survey of active helicopter operators, conducted to determine the extent to which they wish to operate in IMC conditions, the visibility limits under which they would operate, the revenue benefits to be gained, and the percentage of aircraft cost they would pay for such increased capability. Candidate systems were examined for their capability to meet the requirements of a mission model constructed to represent the modes of flight normally encountered in low-visibility conditions. Recommendations are made for the development of high-resolution radar, for simulation of the control-display system for steep approaches, and for the development of an obstacle-sensing system for detecting wires. A cost feasibility analysis is included.

    Machine Vision Identification of Airport Runways With Visible and Infrared Videos

    A widely used machine vision pipeline based on the Speeded-Up Robust Features (SURF) detector was applied to the problem of identifying a runway from a universe of known runways, which was constructed using video records of 19 straight-in glidepath approaches to nine runways. The recordings studied included visible, short-wave infrared, and long-wave infrared videos in clear conditions, rain, and fog. Both daytime and nighttime runway approaches were used. High detection specificity (identification of the runway approached and rejection of the other runways in the universe) was observed in all conditions (greater than 90% Bayesian posterior probability). In the visible band, repeatability (identification of a given runway across multiple videos of it) was observed only if the illumination (day versus night) was the same and the approach visibility was good. Some repeatability was found across the visible and short-wave infrared sensor bands. Camera-based geolocation during aircraft landing was compared to the standard Charted Visual Approach Procedure.
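
    Pipelines of this kind hinge on descriptor matching against the known-runway database; the sketch below shows the standard nearest-neighbour ratio test used in SURF/SIFT-style matching, with random vectors standing in for real descriptors (the database size, dimensions, and noise levels are invented).

```python
import numpy as np

rng = np.random.default_rng(3)

# "Runway database" of 50 unit-norm 64-D descriptors.
db = rng.normal(size=(50, 64))
db /= np.linalg.norm(db, axis=1, keepdims=True)

# Query set: 10 true matches (database rows plus noise) and 10 distractors.
queries = np.vstack([db[:10] + rng.normal(0.0, 0.05, (10, 64)),
                     rng.normal(size=(10, 64))])

def ratio_test_matches(queries, db, ratio=0.8):
    """Keep a match only when the best candidate clearly beats the runner-up."""
    matches = []
    for i, q in enumerate(queries):
        d = np.linalg.norm(db - q, axis=1)
        j1, j2 = np.argsort(d)[:2]
        if d[j1] < ratio * d[j2]:
            matches.append((i, j1))
    return matches

good = ratio_test_matches(queries, db)
```

    Ambiguous queries, whose two nearest database descriptors are about equally far away, are rejected rather than matched, which is what keeps the specificity of such pipelines high.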

    Resilient Perception for Outdoor Unmanned Ground Vehicles

    This thesis promotes the development of resilience for perception systems, with a focus on Unmanned Ground Vehicles (UGVs) in adverse environmental conditions. Perception is the interpretation of sensor data to produce a representation of the environment that is necessary for subsequent decision making. Long-term autonomy requires perception systems that function correctly in unusual but realistic conditions that will eventually occur during extended missions. State-of-the-art UGV systems can fail when the sensor data are beyond the operational capacity of the perception models. The key to a resilient perception system lies in the use of multiple sensor modalities and the pre-selection of appropriate sensor data to minimise the chance of failure. This thesis proposes a framework based on diagnostic principles to evaluate and pre-select sensor data prior to interpretation by the perception system. Image-based quality metrics are explored and evaluated experimentally using infrared (IR) and visual cameras onboard a UGV in the presence of smoke and airborne dust. A novel quality metric, Spatial Entropy (SE), is introduced and evaluated. The proposed framework is applied to a state-of-the-art Visual-SLAM algorithm combining visual and IR imaging as a real-world example. An extensive experimental evaluation demonstrates that the framework allows for camera-based localisation that is resilient to a range of low-visibility conditions, compared to other methods that use a single sensor or combine sensor data without selection. The proposed framework allows for resilient localisation in adverse conditions using image data, and also has significant potential to benefit many other perception applications. Employing multiple sensing modalities along with pre-selection of appropriate data is a powerful method for creating resilient perception systems by anticipating and mitigating errors. The development of such resilient perception systems is a requirement for next-generation outdoor UGVs.
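
    The exact Spatial Entropy definition is the thesis's own and is not reproduced here; as an illustration of the family of metrics involved, the snippet below computes the plain Shannon entropy of an image's grey-level histogram, under which a smoke-whitened, low-contrast frame scores lower than a well-textured one.

```python
import numpy as np

def histogram_entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of a 2-D image.

    img: 2-D array of grey values in [0, 255]. Returns a value in [0, 8]
    for 256 bins; low values indicate a near-uniform, low-information frame.
    """
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]                                # drop empty bins
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(4)
textured = rng.integers(0, 256, (64, 64))       # high-contrast scene
smoky = 200 + rng.integers(0, 4, (64, 64))      # washed-out, near-uniform frame

q_textured = histogram_entropy(textured)
q_smoky = histogram_entropy(smoky)
```

    A quality gate then simply compares such scores against a threshold before the data reach the SLAM front end.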

    Selective combination of visual and thermal imaging for resilient localization in adverse conditions: Day and night, smoke and fire

    Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera-based localization resilient to a large range of low-visibility conditions.
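
    The selection logic itself can be sketched very simply (the quality scores, threshold, and frame values below are all invented): each incoming frame pair is scored per modality, and only modalities that clear a quality threshold feed the localization back end, so that a smoke-blinded visual frame or a heat-saturated thermal frame is dropped instead of corrupting the pose estimate.

```python
def select_modalities(visual_q, thermal_q, threshold=0.4):
    """Return the modalities whose quality score clears the threshold."""
    selected = []
    if visual_q >= threshold:
        selected.append("visual")
    if thermal_q >= threshold:
        selected.append("thermal")
    return selected                      # empty list -> skip the frame entirely

# Hypothetical frame-quality scores for three situations:
frames = [
    {"visual": 0.9, "thermal": 0.8},     # clear daylight: use both
    {"visual": 0.1, "thermal": 0.7},     # smoke blinds the visual camera
    {"visual": 0.8, "thermal": 0.2},     # extreme heat saturates the thermal camera
]
decisions = [select_modalities(f["visual"], f["thermal"]) for f in frames]
```

    The paper's five compared variants correspond to fixing this choice (visual only, thermal only, always both) versus making it per frame as above.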

    Daytime visibility range monitoring through use of a roadside camera

    Among the countless digitized works available on the website of the French digital library Gallica, the collection of biographical dictionaries deserves special mention: these are encyclopedic works, sometimes monumental in size, certainly well known to the patrons of library reading rooms. List of the digitized biographical dictionaries currently available on Gallica in image format (facsimile reproduction): Biographie des 750 représentants à l'As..