
    Full waveform LiDAR for adverse weather conditions


    Survey on LiDAR Perception in Adverse Weather Conditions

    Autonomous vehicles rely on a variety of sensors to gather information about their surroundings. The vehicle's behavior is planned based on this environment perception, making its reliability crucial for safety reasons. The active LiDAR sensor can create an accurate 3D representation of a scene, making it a valuable addition to environment perception for autonomous vehicles. Due to light scattering and occlusion, the LiDAR's performance changes under adverse weather conditions like fog, snow, or rain. This limitation has recently fostered a large body of research on approaches to alleviate the decrease in perception performance. In this survey, we gathered, analyzed, and discussed different aspects of dealing with adverse weather conditions in LiDAR-based environment perception. We address topics such as the availability of appropriate data, raw point cloud processing and denoising, robust perception algorithms, and sensor fusion to mitigate weather-induced shortcomings. We furthermore identify the most pressing gaps in the current literature and pinpoint promising research directions.
    Comment: published at IEEE IV 2023
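    One recurring theme in this line of work is point cloud denoising: returns from fog, rain, or snow tend to be sparse and isolated, so filters that count neighbours within a range-dependent radius can separate weather clutter from solid structure. Below is a minimal sketch in that spirit (along the lines of dynamic radius outlier removal); the function name, thresholds, and the use of SciPy's KD-tree are illustrative choices, not a specific method from the survey.

```python
import numpy as np
from scipy.spatial import cKDTree

def dynamic_radius_outlier_filter(points, alpha=0.0035, beta=3.0, min_neighbors=3):
    """Keep points with enough neighbours inside a range-scaled search radius.

    points: (N, 3) array of x, y, z coordinates.
    alpha:  angular beam spacing in radians; expected point spacing grows
            linearly with range, so the search radius does too.
    beta:   safety multiplier on the expected point spacing.
    """
    ranges = np.linalg.norm(points[:, :2], axis=1)       # horizontal range per point
    radii = np.maximum(beta * alpha * ranges, 0.05)      # per-point search radius (m)
    tree = cKDTree(points)
    keep = np.zeros(len(points), dtype=bool)
    for i, (p, r) in enumerate(zip(points, radii)):
        # query_ball_point includes the query point itself, hence the -1
        keep[i] = len(tree.query_ball_point(p, r)) - 1 >= min_neighbors
    return points[keep]
```

    Isolated weather returns fail the neighbour count at close range, where real surfaces are densely sampled, while the growing radius avoids over-pruning sparse but legitimate far-field points.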

    RADIATE: A Radar Dataset for Automotive Perception in Bad Weather

    Datasets for autonomous cars are essential for the development and benchmarking of perception systems. However, most existing datasets are captured with camera and LiDAR sensors in good weather conditions. In this paper, we present the RAdar Dataset In Adverse weaThEr (RADIATE), aiming to facilitate research on object detection, tracking, and scene understanding using radar sensing for safe autonomous driving. RADIATE includes 3 hours of annotated radar images with more than 200K labelled road actors in total, on average about 4.6 instances per radar image. It covers 8 different categories of actors in a variety of weather conditions (e.g., sun, night, rain, fog and snow) and driving scenarios (e.g., parked, urban, motorway and suburban), representing different levels of challenge. To the best of our knowledge, this is the first public radar dataset which provides high-resolution radar images on public roads with a large number of labelled road actors. The data collected in adverse weather, e.g., fog and snowfall, is unique. Some baseline results of radar-based object detection and recognition are given to show that the use of radar data is promising for automotive applications in bad weather, where vision and LiDAR can fail. RADIATE also has stereo images, 32-channel LiDAR, and GPS data, aimed at other applications such as sensor fusion, localisation, and mapping. The public dataset can be accessed at http://pro.hw.ac.uk/radiate/.
    Comment: Accepted at IEEE International Conference on Robotics and Automation 2021 (ICRA 2021)
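    Radar scans of this kind are natively polar (azimuth bins by range bins), while most detection pipelines expect a Cartesian bird's-eye-view image, so a polar-to-Cartesian remapping is a typical first preprocessing step. The sketch below shows nearest-neighbour remapping under assumed, illustrative bin sizes; it is not the RADIATE development kit's own conversion.

```python
import numpy as np

def polar_to_cartesian(polar, range_res=0.175, out_size=1024):
    """Resample a polar radar scan (azimuth_bins x range_bins) onto a
    Cartesian grid centred on the sensor; bin sizes are illustrative."""
    n_az, n_rng = polar.shape
    half = out_size * range_res / 2.0
    xs = np.linspace(-half, half, out_size)
    x, y = np.meshgrid(xs, xs)
    r = np.hypot(x, y) / range_res                       # fractional range bin
    az = (np.arctan2(y, x) % (2 * np.pi)) / (2 * np.pi) * n_az
    ri = np.clip(r.astype(int), 0, n_rng - 1)
    ai = np.clip(az.astype(int), 0, n_az - 1)
    cart = polar[ai, ri]
    cart[r >= n_rng] = 0                                 # blank pixels beyond max range
    return cart
```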

    A Benchmark for Lidar Sensors in Fog: Is Detection Breaking Down?

    Autonomous driving at level five does not only mean self-driving in the sunshine. Adverse weather is especially critical because fog, rain, and snow degrade the perception of the environment. In this work, current state-of-the-art light detection and ranging (lidar) sensors are tested in controlled conditions in a fog chamber. We present current problems and disturbance patterns for four different state-of-the-art lidar systems. Moreover, we investigate how tuning internal parameters can improve their performance in bad weather situations. This is of great importance because most state-of-the-art detection algorithms are based on undisturbed lidar data.
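    To first order, the degradation measured in such fog chambers follows Beer-Lambert attenuation: the received echo power scales as exp(-2*sigma*R)/R^2, where sigma is the fog extinction coefficient and the factor of 2 accounts for the two-way path. A minimal sketch of the resulting detection-range limit follows; all constants (emitted power, target reflectivity, detector threshold) are illustrative assumptions, not parameters of the tested sensors.

```python
import numpy as np

def max_detection_range(p0=1.0, rho=0.1, sigma=0.06, threshold=1e-9):
    """Largest range (m) where the echo still exceeds the detection threshold.

    Received-power model: P(R) = p0 * rho * exp(-2 * sigma * R) / R**2.
    A common rule of thumb relates sigma to meteorological visibility V (m)
    via sigma ~ 3.9 / V (Koschmieder).
    """
    radii = np.linspace(1.0, 300.0, 10_000)
    power = p0 * rho * np.exp(-2.0 * sigma * radii) / radii**2
    visible = radii[power >= threshold]
    return visible[-1] if visible.size else 0.0
```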

    Neural LiDAR Fields for Novel View Synthesis

    We present Neural Fields for LiDAR (NFL), a method to optimise a neural field scene representation from LiDAR measurements, with the goal of synthesizing realistic LiDAR scans from novel viewpoints. NFL combines the rendering power of neural fields with a detailed, physically motivated model of the LiDAR sensing process, thus enabling it to accurately reproduce key sensor behaviors like beam divergence, secondary returns, and ray dropping. We evaluate NFL on synthetic and real LiDAR scans and show that it outperforms explicit reconstruct-then-simulate methods as well as other NeRF-style methods on the LiDAR novel view synthesis task. Moreover, we show that the improved realism of the synthesized views narrows the domain gap to real scans and translates to better registration and semantic segmentation performance.
    Comment: ICCV 2023 - camera ready. Project page: https://research.nvidia.com/labs/toronto-ai/nfl
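    The NeRF-style ingredient underneath such a method is volume rendering along each LiDAR ray: sampled densities yield per-step termination probabilities, from which an expected return range (and a total hit probability that can gate effects such as ray dropping) follows. The sketch below shows only this generic expected-depth computation; it is not NFL's actual sensor model.

```python
import numpy as np

def expected_lidar_range(densities, ts):
    """Expected return range along one ray from sampled volume densities.

    densities: (S,) non-negative density at each sample.
    ts:        (S,) increasing sample distances along the ray (m).
    """
    deltas = np.diff(ts, append=ts[-1] + (ts[-1] - ts[-2]))
    alphas = 1.0 - np.exp(-densities * deltas)           # per-sample hit probability
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    weights = trans * alphas                             # ray termination distribution
    hit_prob = weights.sum()                             # low value ~ dropped ray
    return (weights * ts).sum() / max(hit_prob, 1e-8), hit_prob
```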

    Big Earth Data for Cultural Heritage in the Copernicus Era

    Digital data is entering its golden age, characterized by an increasing growth of both classical and emerging big earth data, along with trans- and multidisciplinary methodological approaches and services addressed to the study, preservation, and sustainable exploitation of cultural heritage (CH). The availability of new digital technologies has opened possibilities for cultural heritage that were unthinkable only a few years ago. The currently available digital data, tools, and services, with particular reference to Copernicus initiatives, make it possible to characterize and understand the state of conservation of CH for preventive restoration, and have opened up a frontier of possibilities for discovering archaeological sites from above as well as for supporting their excavation, monitoring, and preservation. The different areas of intervention require the availability and integration of rigorous information from different sources to improve knowledge and interpretation, risk assessment, and management, in order to make the actions oriented to the preservation of cultural properties more successful. One of the biggest challenges is to fully involve citizens, also from an emotional point of view, connecting “pixels with people” and “bridging” remote sensing and social sensing.

    LiDAR Snowfall Simulation for Robust 3D Object Detection

    3D object detection is a central task for applications such as autonomous driving, in which the system needs to localize and classify surrounding traffic agents, even in the presence of adverse weather. In this paper, we address the problem of LiDAR-based 3D object detection under snowfall. Due to the difficulty of collecting and annotating training data in this setting, we propose a physically based method to simulate the effect of snowfall on real clear-weather LiDAR point clouds. Our method samples snow particles in 2D space for each LiDAR line and uses the induced geometry to modify the measurement for each LiDAR beam accordingly. Moreover, as snowfall often causes wetness on the ground, we also simulate ground wetness on LiDAR point clouds. We use our simulation to generate partially synthetic snowy LiDAR data and leverage these data for training 3D object detection models that are robust to snowfall. We conduct an extensive evaluation using several state-of-the-art 3D object detection methods and show that our simulation consistently yields significant performance gains on the real snowy STF dataset compared to clear-weather baselines and competing simulation approaches, while not sacrificing performance in clear weather. Our code is available at www.github.com/SysCV/LiDAR_snow_sim.
    Comment: Oral at CVPR 2022
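    The gist of such an augmentation, heavily simplified: for each beam, sample airborne particles along the ray; a particle intercepting enough of the beam cross-section close to the sensor produces an earlier return, and otherwise merely attenuates the original one. The single-beam sketch below uses a toy power criterion and uniform particle sampling purely for illustration; the paper's simulation is physically based and far more detailed.

```python
import numpy as np

def snow_augment_beam(r_orig, i_orig, rate=0.5, r_flake=0.003, beam_div=0.003,
                      rng=None):
    """Possibly replace one beam's return with a snow-particle return.

    r_orig, i_orig: clear-weather range (m) and intensity of the beam.
    rate:           expected particle encounters per beam (snowfall-rate proxy).
    """
    rng = rng or np.random.default_rng()
    for r_p in np.sort(rng.uniform(1.0, r_orig, size=rng.poisson(rate))):
        beam_radius = beam_div * r_p                     # footprint grows with range
        occlusion = min(1.0, (r_flake / beam_radius) ** 2)
        # toy criterion: flake echo beats the (partially occluded) target echo
        if occlusion * (r_orig / r_p) ** 2 > 1.0 - occlusion:
            return r_p, i_orig * occlusion               # return scattered off the flake
        i_orig *= 1.0 - occlusion                        # otherwise just attenuate
    return r_orig, i_orig
```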

    Determination of Local Slope on the Greenland Ice Sheet Using a Multibeam Photon-Counting Lidar in Preparation for the ICESat-2 Mission

    The greatest changes in elevation in Greenland and Antarctica are happening along the margins of the ice sheets, where the surface frequently has significant slopes. For this reason, the upcoming Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) mission utilizes pairs of laser altimeter beams that are perpendicular to the flight direction in order to extract slope information in addition to elevation. The Multiple Altimeter Beam Experimental Lidar (MABEL) is a high-altitude airborne laser altimeter designed as a simulator for ICESat-2. The MABEL design uses multiple beams at fixed angles and allows for local slope determination. Here, we present local slopes as determined by MABEL and compare them to those determined by the Airborne Topographic Mapper (ATM) over the same flight lines in Greenland. We make these comparisons with consideration for the planned ICESat-2 beam geometry. Results indicate that the mean slope residuals between MABEL and ATM remain small (< 0.05) through a wide range of localized slopes using ICESat-2 beam geometry. Furthermore, when MABEL data are subsampled by a factor of 4 to mimic the planned ICESat-2 transmit-energy configuration, the results are indistinguishable from the full-data-rate analysis. Results from MABEL suggest that ICESat-2 beam geometry and transmit-energy configuration are appropriate for the determination of slope on 90-m spatial scales, a measurement that will be fundamental to deconvolving the effects of surface slope from the ice-sheet surface change derived from ICESat-2.
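    With a beam pair, the across-track slope follows directly from the elevation difference over the known beam separation; the 90 m pair spacing below matches the spatial scale referenced above, and the helper itself is just an illustrative sketch.

```python
import numpy as np

def across_track_slope(elev_left, elev_right, pair_separation=90.0):
    """Across-track surface slope (degrees) from a pair of altimeter beams.

    elev_left, elev_right: elevations (m) at matching along-track positions.
    pair_separation:       across-track distance between the beams (m).
    """
    dz = np.asarray(elev_right) - np.asarray(elev_left)
    return np.degrees(np.arctan2(dz, pair_separation))
```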

    Imaging through obscurants using time-correlated single-photon counting in the short-wave infrared

    Single-photon time-of-flight (ToF) light detection and ranging (LiDAR) systems have emerged in recent years as a candidate technology for high-resolution depth imaging in challenging environments, such as long-range imaging and imaging in scattering media. This Thesis investigates the potential of two ToF single-photon depth imaging systems based on the time-correlated single-photon counting (TCSPC) technique for imaging targets in highly scattering environments. The high sensitivity and picosecond timing resolution afforded by the TCSPC technique offer high-resolution depth profiling of remote targets while maintaining low optical power levels. Both systems comprised a pulsed picosecond laser source with an operating wavelength of 1550 nm, and employed InGaAs/InP SPAD detectors. The main benefits of operating in the short-wave infrared (SWIR) band include improved atmospheric transmission, reduced solar background, and increased laser eye-safety thresholds over visible band sensors. Firstly, a monostatic scanning transceiver unit was used in conjunction with a single-element Peltier-cooled InGaAs/InP SPAD detector to attain sub-centimetre resolution three-dimensional images of long-range targets obscured by camouflage netting or in high levels of scattering media. Secondly, a bistatic system, which employed a 32 × 32 pixel format InGaAs/InP SPAD array, was used to obtain rapid depth profiles of targets which were flood-illuminated by a higher power pulsed laser source. The performance of this system was assessed in indoor and outdoor scenarios in the presence of obscurants and high ambient background levels. Bespoke image processing algorithms were developed to reconstruct both the depth and intensity images for data with very low signal returns and short data acquisition times, illustrating the practicality of TCSPC-based LiDAR systems for real-time image acquisition in the SWIR wavelength region, even in the photon-starved regime.
    The Defence Science and Technology Laboratory (Dstl) National PhD Scheme
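    Per pixel, a TCSPC measurement is a histogram of photon arrival times; depth follows from locating the return peak, typically by matched filtering against the instrument response, and converting the time of flight to range via r = c*t/2. The sketch below shows that basic per-pixel step under simple assumptions (single return, median background subtraction); the Thesis's bespoke algorithms for photon-starved data are considerably more sophisticated.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def depth_from_histogram(counts, bin_width_ps, irf):
    """Estimate target range (m) from one pixel's photon-timing histogram.

    counts:       (T,) photon counts per timing bin.
    bin_width_ps: timing-bin width in picoseconds.
    irf:          (K,) instrument response function used as matched filter.
    """
    signal = counts - np.median(counts)                  # crude background removal
    score = np.correlate(signal, irf, mode="same")       # matched filtering
    t_flight = np.argmax(score) * bin_width_ps * 1e-12   # time of flight (s)
    return C * t_flight / 2.0                            # halve for the two-way path
```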