Benchmarking Image Sensors Under Adverse Weather Conditions for Autonomous Driving
Adverse weather conditions are very challenging for autonomous driving because
most state-of-the-art sensors stop working reliably under these conditions. To
develop robust sensors and algorithms, tests of current sensors under defined
weather conditions are crucial for determining the impact of bad weather on
each sensor. This work describes a testing and
evaluation methodology that helps to benchmark novel sensor technologies and
compare them to state-of-the-art sensors. As an example, gated imaging is
compared to standard imaging under foggy conditions. It is shown that gated
imaging outperforms state-of-the-art standard passive imaging due to its
time-synchronized active illumination.
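The gating principle can be illustrated with a back-of-the-envelope model: a light pulse is emitted in sync with the camera shutter, and only photons whose round-trip time falls inside a narrow range slice around the target are integrated, so backscatter from the fog between the vehicle and that slice is rejected. The following single-pixel sketch is purely illustrative; the extinction and backscatter coefficients, target reflectivity, and gate width are assumed values, not measurements from the fog chamber.

```python
# Toy 1D comparison of an ungated and a range-gated exposure in fog.
# All coefficients are illustrative assumptions.
import numpy as np

ranges = np.linspace(1.0, 100.0, 1000)   # candidate scattering distances [m]
dr = ranges[1] - ranges[0]

alpha = 0.06          # fog extinction coefficient [1/m] (assumed)
beta = 0.06           # fog backscatter coefficient [1/m] (assumed)
d_target = 40.0       # target distance [m]
rho_target = 0.3      # target reflectivity (assumed)

# Backscatter collected from each range bin (two-way attenuation, 1/r^2 falloff)
fog_return = beta * np.exp(-2.0 * alpha * ranges) / ranges**2 * dr
# Return from the target itself
target_return = rho_target * np.exp(-2.0 * alpha * d_target) / d_target**2

# An ungated exposure integrates every return along the path.
ungated = target_return + fog_return.sum()
# A gated exposure only integrates a narrow slice around the target.
gate = (ranges > d_target - 5.0) & (ranges < d_target + 5.0)
gated = target_return + fog_return[gate].sum()

print("target share, ungated:", target_return / ungated)
print("target share, gated:  ", target_return / gated)
```

With these assumed coefficients the target accounts for a far larger share of the integrated signal in the gated exposure, which is the contrast advantage the benchmark quantifies for a real gated camera.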
A Benchmark for Lidar Sensors in Fog: Is Detection Breaking Down?
Autonomous driving at level five means more than self-driving in the sunshine.
Adverse weather is especially critical because fog, rain, and snow
degrade the perception of the environment. In this work, current
state-of-the-art light detection and ranging (lidar) sensors are tested in controlled
conditions in a fog chamber. We present current problems and disturbance
patterns for four different state-of-the-art lidar systems. Moreover, we
investigate how tuning internal parameters can improve their performance in bad
weather situations. This is of great importance because most state-of-the-art
detection algorithms rely on undisturbed lidar data.
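Two of the disturbance patterns such fog-chamber tests reveal, spurious clutter returns close to the sensor and an overall loss of points, can already be quantified with very simple point-cloud statistics. The sketch below is a hypothetical example of such metrics; the (N, 4) point layout, the 3 m clutter radius, and the stand-in data are assumptions, not definitions from the benchmark.

```python
# Minimal point-cloud statistics for comparing a clear and a foggy lidar scan.
import numpy as np

def clutter_ratio(points, max_clutter_range=3.0):
    """Fraction of returns closer than `max_clutter_range` metres to the sensor."""
    ranges = np.linalg.norm(points[:, :3], axis=1)
    return float(np.mean(ranges < max_clutter_range))

def lost_point_ratio(clear_points, fog_points):
    """Relative drop in the number of returns between a clear and a foggy scan."""
    return 1.0 - len(fog_points) / max(len(clear_points), 1)

# Hypothetical usage with random stand-in scans (x, y, z, intensity):
rng = np.random.default_rng(0)
clear = rng.uniform(-50.0, 50.0, size=(100_000, 4))
fog = np.vstack([clear[rng.random(len(clear)) < 0.4],          # many returns lost
                 rng.uniform(-2.0, 2.0, size=(5_000, 4))])     # near-range clutter
print(clutter_ratio(fog), lost_point_ratio(clear, fog))
```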
Pixel-Accurate Depth Evaluation in Realistic Driving Scenarios
This work introduces an evaluation benchmark for depth estimation and
completion using high-resolution depth measurements with angular resolution of
up to 25" (arcsecond), akin to a 50 megapixel camera with per-pixel depth
available. Existing datasets, such as the KITTI benchmark, provide only sparse
reference measurements with an order of magnitude lower angular resolution -
these sparse measurements are treated as ground truth by existing depth
estimation methods. We propose an evaluation methodology in four characteristic
automotive scenarios recorded in varying weather and illumination conditions (day, night, fog,
rain). As a result, our benchmark allows us to evaluate the robustness of depth
sensing methods in adverse weather and different driving conditions. Using the
proposed evaluation data, we demonstrate that current stereo approaches provide
significantly more stable depth estimates than monocular methods and lidar
completion in adverse weather. Data and code are available at
https://github.com/gruberto/PixelAccurateDepthBenchmark.git
Comment: 3DV 2019
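The per-pixel comparison such dense ground truth enables boils down to standard depth metrics restricted to valid pixels. The sketch below shows one plausible way to compute them; the NaN-for-invalid convention, the 80 m cap, and the chosen metrics are illustrative assumptions, not the benchmark's official protocol.

```python
# Sketch of per-pixel depth evaluation against dense ground truth.
import numpy as np

def depth_metrics(pred, gt, max_depth=80.0):
    """pred, gt: 2D depth maps in metres; NaN marks invalid pixels (assumed)."""
    valid = np.isfinite(pred) & np.isfinite(gt) & (gt > 0) & (gt < max_depth)
    p, g = pred[valid], gt[valid]
    abs_err = np.abs(p - g)
    ratio = np.maximum(p / g, g / p)
    return {
        "MAE": abs_err.mean(),
        "RMSE": np.sqrt((abs_err ** 2).mean()),
        "delta<1.25": (ratio < 1.25).mean(),   # common inlier ratio
    }

# Hypothetical usage with random stand-in depth maps:
rng = np.random.default_rng(0)
gt = rng.uniform(1.0, 80.0, size=(256, 512))
pred = gt * rng.normal(1.0, 0.05, size=gt.shape)
print(depth_metrics(pred, gt))
```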
Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather
The fusion of multimodal sensor streams, such as camera, lidar, and radar
measurements, plays a critical role in object detection for autonomous
vehicles, which base their decision making on these inputs. While existing
methods exploit redundant information in good environmental conditions, they
fail in adverse weather where the sensory streams can be asymmetrically
distorted. These rare "edge-case" scenarios are not represented in available
datasets, and existing fusion architectures are not designed to handle them. To
address this challenge we present a novel multimodal dataset acquired in over
10,000 km of driving in northern Europe. Although this dataset is the first
large multimodal dataset in adverse weather, with 100k labels for lidar,
camera, radar, and gated NIR sensors, it does not facilitate training as
extreme weather is rare. To this end, we present a deep fusion network for
robust fusion without a large corpus of labeled training data covering all
asymmetric distortions. Departing from proposal-level fusion, we propose a
single-shot model that adaptively fuses features, driven by measurement
entropy. We validate the proposed method, trained on clean data, on our
extensive validation dataset. Code and data are available at
https://github.com/princeton-computational-imaging/SeeingThroughFog.
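The key idea, letting the measured information content of each stream steer the fusion, can be caricatured with an entropy-weighted combination of per-sensor feature maps. The example below is a conceptual stand-in only: the paper feeds entropy maps into a single-shot detection network rather than computing scalar weights, and every name, shape, and the weighting rule here are illustrative assumptions.

```python
# Entropy-steered fusion sketch: low-information (degraded) streams get
# down-weighted before their features are combined.
import numpy as np

def measurement_entropy(measurement, bins=64):
    """Shannon entropy of a raw measurement, assumed normalised to [0, 1]."""
    hist, _ = np.histogram(measurement, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_fusion(features, measurements):
    """Fuse per-sensor feature maps of identical shape with entropy-derived weights."""
    ent = np.array([measurement_entropy(m) for m in measurements])
    weights = ent / ent.sum()                # simple normalisation (assumed)
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical usage: a fog-washed camera image carries less entropy than lidar.
rng = np.random.default_rng(1)
cam_img = np.clip(rng.normal(0.5, 0.02, (64, 64)), 0.0, 1.0)   # low contrast
lidar_img = rng.random((64, 64))                               # stand-in range image
cam_feat, lidar_feat = rng.random((8, 16, 16)), rng.random((8, 16, 16))
fused = entropy_weighted_fusion([cam_feat, lidar_feat], [cam_img, lidar_img])
```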
LiDAR Snowfall Simulation for Robust 3D Object Detection
3D object detection is a central task for applications such as autonomous
driving, in which the system needs to localize and classify surrounding traffic
agents, even in the presence of adverse weather. In this paper, we address the
problem of LiDAR-based 3D object detection under snowfall. Due to the
difficulty of collecting and annotating training data in this setting, we
propose a physically based method to simulate the effect of snowfall on real
clear-weather LiDAR point clouds. Our method samples snow particles in 2D space
for each LiDAR line and uses the induced geometry to modify the measurement for
each LiDAR beam accordingly. Moreover, as snowfall often causes wetness on the
ground, we also simulate ground wetness on LiDAR point clouds. We use our
simulation to generate partially synthetic snowy LiDAR data and leverage these
data for training 3D object detection models that are robust to snowfall. We
conduct an extensive evaluation using several state-of-the-art 3D object
detection methods and show that our simulation consistently yields significant
performance gains on the real snowy STF dataset compared to clear-weather
baselines and competing simulation approaches, while not sacrificing
performance in clear weather. Our code is available at
www.github.com/SysCV/LiDAR_snow_sim
Comment: Oral at CVPR 2022
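A heavily simplified per-beam version of such an augmentation fits in a few lines: snow particles are sampled along each clear-weather beam, an occluded beam returns early and weakly from a particle, and unoccluded beams only lose intensity to extinction. All rates and attenuation factors below are assumed for illustration and do not reproduce the paper's physically based model; ground wetness is omitted entirely.

```python
# Toy snowfall augmentation of a clear-weather lidar scan (x, y, z, intensity).
import numpy as np

def simulate_snow(points, particle_rate=0.02, extinction_atten=0.7, seed=0):
    """particle_rate and extinction_atten are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    xyz, intensity = points[:, :3].copy(), points[:, 3].copy()
    ranges = np.linalg.norm(xyz, axis=1)

    # The expected number of intersected particles grows with beam length.
    occluded = rng.poisson(particle_rate * ranges) > 0

    # Occluded beams return from a random particle somewhere along the beam.
    frac = rng.uniform(0.05, 1.0, size=occluded.sum())
    xyz[occluded] *= frac[:, None]
    intensity[occluded] *= 0.1            # scattered returns are weak (assumed)

    # Unoccluded beams keep their geometry but lose intensity to extinction.
    intensity[~occluded] *= extinction_atten
    return np.column_stack([xyz, intensity])

# Hypothetical usage with a random stand-in scan:
clear = np.random.default_rng(1).uniform(-40.0, 40.0, size=(10_000, 4))
snowy = simulate_snow(clear)
```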
Thin On-Sensor Nanophotonic Array Cameras
Today's commodity camera systems rely on compound optics to map light
originating from the scene to positions on the sensor where it gets recorded as
an image. To record images without optical aberrations, i.e., deviations from
Gauss' linear model of optics, typical lens systems introduce increasingly
complex stacks of optical elements which are responsible for the height of
existing commodity cameras. In this work, we investigate flat nanophotonic
computational cameras as an alternative that employs an array of skewed
lenslets and a learned reconstruction approach. The optical array is embedded
on a metasurface that, at 700 nm height, is flat and sits on the sensor cover
glass at 2.5 mm focal distance from the sensor. To tackle the highly chromatic
response of a metasurface and design the array over the entire sensor, we
propose a differentiable optimization method that continuously samples over the
visible spectrum and factorizes the optical modulation for different incident
fields into individual lenses. We reconstruct a megapixel image from our flat
imager with a learned probabilistic reconstruction method that employs a
generative diffusion model to sample an implicit prior. To tackle
scene-dependent aberrations in broadband, we propose a method for acquiring
paired captured training data in varying illumination conditions. We assess the
proposed flat camera design in simulation and with an experimental prototype,
validating that the method is capable of recovering images from diverse scenes
in broadband with a single nanophotonic layer.
Comment: 18 pages, 12 figures, to be published in ACM Transactions on Graphics
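The spectral-sampling ingredient of such a differentiable design loop can be sketched without a real metasurface simulator: each optimization step draws a batch of wavelengths from the visible range and backpropagates through a differentiable image-formation loss. In the toy example below the "forward model" is merely a diffractive lens whose focal length scales as lambda0/lambda, and every constant and parameter name is a stand-in; it illustrates the sample-and-backpropagate structure, not the paper's optical model or reconstruction network.

```python
# Toy differentiable design loop that continuously samples the visible spectrum.
import torch

sensor_distance_mm = 2.5                                          # from the abstract
design_wavelength_nm = torch.nn.Parameter(torch.tensor(550.0))    # assumed init
focal_at_design_mm = torch.nn.Parameter(torch.tensor(3.0))        # assumed init
opt = torch.optim.Adam([design_wavelength_nm, focal_at_design_mm], lr=1e-2)

for step in range(2000):
    # Continuously sample wavelengths across the visible spectrum [400, 700] nm.
    lam = torch.empty(64).uniform_(400.0, 700.0)
    # Toy chromatic forward model: diffractive focal length ~ lambda0 / lambda.
    focal = focal_at_design_mm * design_wavelength_nm / lam
    # Keep the focus near the sensor plane across all sampled bands.
    loss = ((focal - sensor_distance_mm) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(design_wavelength_nm), float(focal_at_design_mm))
```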