
    Rain Removal in Traffic Surveillance: Does it Matter?

    Varying weather conditions, including rainfall and snowfall, are generally regarded as a challenge for computer vision algorithms. One proposed solution is to artificially remove the rain from images or video using rain removal algorithms, with the promise that the rain-removed frames will improve the performance of subsequent segmentation and tracking algorithms. However, rain removal algorithms are typically evaluated only on their ability to remove synthetic rain from a small set of images, and their behavior on real-world videos, when integrated into a typical computer vision pipeline, is currently unknown. In this paper, we review the existing rain removal algorithms and propose a new dataset of 22 traffic surveillance sequences covering a broad variety of weather conditions that all include either rain or snowfall. We also propose a new evaluation protocol that assesses rain removal algorithms by their ability to improve the performance of subsequent segmentation, instance segmentation, and feature tracking algorithms under rain and snow. If successful, the de-rained frames of a rain removal algorithm should improve segmentation performance and increase the number of accurately tracked features. The results show that a recent single-frame-based rain removal algorithm increases segmentation performance by 19.7% on our proposed dataset, but decreases feature tracking performance and gives mixed results with recent instance segmentation methods. However, the best video-based rain removal algorithm improves feature tracking accuracy by 7.72%. Comment: Published in IEEE Transactions on Intelligent Transportation Systems
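
    A minimal Python sketch of this application-oriented evaluation idea: rather than scoring de-rained frames against pixel ground truth, compare how a downstream task performs with and without rain removal. The helper functions (segment, derain, f_measure) are hypothetical placeholders, not the authors' code.

        from statistics import mean

        def relative_improvement(score_derained, score_rainy):
            """Relative change (in percent) of a downstream score after rain removal."""
            return 100.0 * (score_derained - score_rainy) / score_rainy

        def evaluate_rain_removal(frames, ground_truth, derain, segment, f_measure):
            """Segment raw and de-rained frames and compare their mean F-measures."""
            raw = [f_measure(segment(f), gt) for f, gt in zip(frames, ground_truth)]
            derained = [f_measure(segment(derain(f)), gt) for f, gt in zip(frames, ground_truth)]
            return relative_improvement(mean(derained), mean(raw))

    A positive value means the de-rained frames helped the downstream task; the same scheme applies to feature tracking accuracy.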

    LiDAR Snowfall Simulation for Robust 3D Object Detection

    3D object detection is a central task for applications such as autonomous driving, in which the system needs to localize and classify surrounding traffic agents, even in the presence of adverse weather. In this paper, we address the problem of LiDAR-based 3D object detection under snowfall. Due to the difficulty of collecting and annotating training data in this setting, we propose a physically based method to simulate the effect of snowfall on real clear-weather LiDAR point clouds. Our method samples snow particles in 2D space for each LiDAR line and uses the induced geometry to modify the measurement for each LiDAR beam accordingly. Moreover, as snowfall often causes wetness on the ground, we also simulate ground wetness on LiDAR point clouds. We use our simulation to generate partially synthetic snowy LiDAR data and leverage these data to train 3D object detection models that are robust to snowfall. We conduct an extensive evaluation using several state-of-the-art 3D object detection methods and show that our simulation consistently yields significant performance gains on the real snowy STF dataset compared to clear-weather baselines and competing simulation approaches, while not sacrificing performance in clear weather. Our code is available at www.github.com/SysCV/LiDAR_snow_sim. Comment: Oral at CVPR 2022
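
    A toy Python sketch of beam-wise snowfall augmentation in the spirit of this idea: for each clear-weather return, snow particles are sampled along the beam; a particle close to the sensor may produce an earlier, weaker return, and otherwise the original point is kept but attenuated. The rates, thresholds, and scattering model below are made-up placeholders rather than the paper's calibrated physical model.

        import numpy as np

        def snowfall_augment(ranges, intensities, particle_rate=0.05,
                             min_range=1.5, attenuation=0.02, seed=0):
            """Augment clear-weather LiDAR returns (1D numpy arrays) with simulated snow."""
            rng = np.random.default_rng(seed)
            ranges, intensities = ranges.copy(), intensities.copy()
            for i, (r, inten) in enumerate(zip(ranges, intensities)):
                n_particles = rng.poisson(particle_rate * r)      # more particles on long beams
                if n_particles > 0 and r > min_range:
                    # Closest snow particle along this beam.
                    particle_r = rng.uniform(min_range, r, size=n_particles).min()
                    scattered = inten * np.exp(-particle_r)       # crude back-scatter proxy
                    if scattered > inten * np.exp(-2.0 * attenuation * r):
                        # Particle return dominates: the beam terminates early.
                        ranges[i], intensities[i] = particle_r, scattered
                        continue
                # Otherwise keep the original point, attenuated by snow along the path.
                intensities[i] = inten * np.exp(-2.0 * attenuation * r)
            return ranges, intensities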

    Survey on LiDAR Perception in Adverse Weather Conditions

    Autonomous vehicles rely on a variety of sensors to gather information about their surroundings. The vehicle's behavior is planned based on this environment perception, making its reliability crucial for safety. The active LiDAR sensor can create an accurate 3D representation of a scene, making it a valuable addition to environment perception for autonomous vehicles. Due to light scattering and occlusion, the LiDAR's performance changes under adverse weather conditions such as fog, snow, or rain. This limitation has recently fostered a large body of research on approaches to alleviate the decrease in perception performance. In this survey, we gathered, analyzed, and discussed different aspects of dealing with adverse weather conditions in LiDAR-based environment perception. We address topics such as the availability of appropriate data, raw point cloud processing and denoising, robust perception algorithms, and sensor fusion to mitigate adverse-weather-induced shortcomings. We furthermore identify the most pressing gaps in the current literature and pinpoint promising research directions. Comment: Published at IEEE IV 2023

    Using crowdsourced web content for informing water systems operations in snow-dominated catchments

    Snow is a key component of the hydrologic cycle in many regions of the world. Despite recent advances in environmental monitoring that are making a wide range of data available, continuous snow monitoring systems that can collect data at high spatial and temporal resolution are not yet well established, especially in inaccessible high-latitude or mountainous regions. The unprecedented availability of user-generated data on the web is opening new opportunities for enhancing real-time monitoring and modeling of environmental systems with data that are public, low-cost, and spatiotemporally dense. In this paper, we contribute a novel crowdsourcing procedure for extracting snow-related information from public web images, either produced by users or generated by touristic webcams. A fully automated process fetches mountain images from multiple sources, identifies the peaks present therein, and estimates virtual snow indexes that serve as a proxy of the snow-covered area. Our procedure has the potential to complement traditional snow-related information, minimizing the costs and effort of obtaining the virtual snow indexes while maximizing the portability of the procedure to the many locations where such public images are available. The operational value of the obtained virtual snow indexes is assessed on a real-world water-management problem, the regulation of Lake Como, where we use these indexes to inform the daily operations of the lake. Numerical results show that this information is effective in extending the anticipation capacity of the lake operations, ultimately improving the system performance.
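
    An illustrative Python sketch of one way such a virtual snow index could be computed: the fraction of pixels in a masked mountain region of a webcam image that look snow-like (bright and nearly grey). The thresholds and the precomputed mountain mask are assumptions for illustration, not the paper's actual pipeline.

        import numpy as np

        def virtual_snow_index(rgb_image, mountain_mask, brightness_thr=180, chroma_thr=25):
            """Fraction of snow-like pixels inside the mountain region of an RGB image."""
            pixels = rgb_image[mountain_mask].astype(float)        # (N, 3) pixels on the peaks
            brightness = pixels.mean(axis=1)                       # snow is bright ...
            chroma = pixels.max(axis=1) - pixels.min(axis=1)       # ... and nearly grey
            snow_like = (brightness > brightness_thr) & (chroma < chroma_thr)
            return float(snow_like.mean())                         # proxy of snow-covered area

    A daily time series of this index, aggregated over the webcams of a catchment, is the kind of exogenous input that could then inform the lake's release policy.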

    Parametric analysis of an imaging radar for use as an independent landing monitor

    The capabilities of a real-aperture, forward-looking imaging radar are analyzed for use as an independent landing monitor, which would provide the pilot with an independent means of assessing the progress of an automatic landing during Category 3 operations. The analysis shows that adequate ground resolution and signal-to-noise ratio can be obtained to image a runway with grassy surroundings using a radar operating at 35 GHz in good weather and in most fog, but that performance is severely degraded in moderate to heavy rain and wet snow. Weather effects on a 10 GHz imager are not serious, with the possible exception of very heavy rain, but the azimuthal resolution at 10 GHz is inadequate with antennas up to 2 m long.
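
    A back-of-the-envelope Python sketch of why azimuthal resolution is the limiting factor at the lower frequency: for a real-aperture radar the beamwidth is roughly wavelength / antenna length, so the cross-range cell at range R is about R * wavelength / antenna_length. The range used below is an illustrative value, not a figure from the report.

        C = 3.0e8  # speed of light, m/s

        def cross_range_resolution(freq_hz, antenna_len_m, range_m):
            """Approximate cross-range (azimuthal) cell size of a real-aperture radar."""
            wavelength = C / freq_hz
            return range_m * wavelength / antenna_len_m

        for freq in (35e9, 10e9):
            cell = cross_range_resolution(freq, antenna_len_m=2.0, range_m=1000.0)
            print(f"{freq / 1e9:.0f} GHz, 2 m antenna, 1 km range: ~{cell:.1f} m cross-range cell")

    At these illustrative numbers the 35 GHz radar resolves roughly 4 m across track versus roughly 15 m at 10 GHz, which is consistent with the conclusion that a 2 m antenna is too short for adequate azimuthal resolution at 10 GHz.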

    Full waveform LiDAR for adverse weather conditions


    Wide area detection system: Conceptual design study

    An integrated sensor for traffic surveillance on mainline sections of urban freeways is described. Applicable imaging and processor technology is surveyed, and the functional requirements for the sensors and the conceptual design of the breadboard sensors are given. Parameters measured by the sensors include lane density, speed, and volume. The freeway image is also used for incident diagnosis.
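
    A minimal Python sketch of how the named traffic parameters could be derived from vehicle tracks produced by such an imaging sensor over a measurement interval. The track format is an assumption for illustration.

        def traffic_parameters(tracks, interval_s):
            """tracks: per-vehicle dicts with 'speeds', a list of speed samples in m/s.
            Returns (volume in veh/h, space-mean speed in km/h, density in veh/km)."""
            if not tracks:
                return 0.0, 0.0, 0.0
            volume = len(tracks) * 3600.0 / interval_s                        # flow q
            per_vehicle = [sum(t["speeds"]) / len(t["speeds"]) for t in tracks]
            # Space-mean speed: harmonic mean of the individual vehicle speeds.
            space_mean_ms = len(per_vehicle) / sum(1.0 / max(v, 0.1) for v in per_vehicle)
            density = volume / (space_mean_ms * 3.6)                          # k = q / v
            return volume, space_mean_ms * 3.6, density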

    Exploration of the characteristics and trends of electric vehicle crashes: a case study in Norway

    With the rapid growth of electric vehicles (EVs) in the past decade, many new traffic safety challenges are emerging. Using crash data from Norway for 2011 to 2018, this study gives an overview of the status quo of EV crashes. Over the survey period, the proportion of EV crashes among all traffic crashes in Norway rose from zero to 3.11%. In terms of severity, however, EV crashes do not show statistically significant differences from Internal Combustion Engine Vehicle (ICEV) crashes. Compared to ICEV crashes, EV crashes occur disproportionately during weekday peak hours, in urban areas, at roadway junctions, on low-speed roadways, and in good-visibility conditions, which can be attributed to the fact that EVs are mainly used for local urban commuting in Norway. EVs are also confirmed to be much more likely to collide with cyclists and pedestrians, probably due to their low noise. Separate logistic regression models are then built to identify the important factors influencing the severity of ICEV and EV crashes, respectively. Many factors show very different effects on ICEV and EV crashes, which implies the need to reevaluate many current traffic safety strategies in the face of the EV era. Although only Norwegian data are analyzed here, the findings are expected to provide new insights for other countries that are also moving toward full automotive electrification.
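
    A hedged Python sketch of the per-vehicle-type logistic regression idea described above, using scikit-learn. The column names ('vehicle_type', 'severe', and the predictor list) are hypothetical placeholders; the study's actual variables and coding are not reproduced here.

        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import OneHotEncoder

        def fit_severity_model(crashes: pd.DataFrame, vehicle_type: str):
            """Fit a crash-severity model for one vehicle type (e.g. 'EV' or 'ICEV')."""
            subset = crashes[crashes["vehicle_type"] == vehicle_type]
            predictors = ["area", "junction", "speed_limit", "light_conditions", "weekday_peak"]
            X, y = subset[predictors], subset["severe"]            # severe: 0/1 outcome
            model = make_pipeline(
                OneHotEncoder(handle_unknown="ignore"),            # categorical predictors
                LogisticRegression(max_iter=1000),
            )
            return model.fit(X, y)

        # ev_model = fit_severity_model(crashes, "EV")
        # icev_model = fit_severity_model(crashes, "ICEV")

    Comparing the fitted coefficients (or odds ratios) between the two models is what reveals factors whose effect differs between EV and ICEV crashes.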