Benchmarking Image Sensors Under Adverse Weather Conditions for Autonomous Driving
Adverse weather conditions are very challenging for autonomous driving
because most of the state-of-the-art sensors stop working reliably under these
conditions. In order to develop robust sensors and algorithms, tests with
current sensors in defined weather conditions are crucial for determining the
impact of bad weather on each sensor. This work describes a testing and
evaluation methodology that helps to benchmark novel sensor technologies and
compare them to state-of-the-art sensors. As an example, gated imaging is
compared to standard imaging under foggy conditions. It is shown that gated
imaging outperforms state-of-the-art standard passive imaging due to its
time-synchronized active illumination.
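To make the benchmarking idea concrete: each sensor records the same calibrated
target at defined fog densities, and an image-quality score is compared across
sensors. The sketch below uses a simple Michelson-contrast score; the patch
coordinates, the metric choice, and the helper names are illustrative
assumptions, not the protocol from the paper.

```python
# Minimal sketch of a contrast-based sensor benchmark in fog (assumed metric).
import numpy as np

def michelson_contrast(image, target, background):
    """Michelson contrast between a target patch and a background patch."""
    t = image[target].mean()
    b = image[background].mean()
    return abs(t - b) / (t + b + 1e-9)

def rank_sensors(images_by_sensor, target, background):
    """Mean target contrast per sensor across all recorded fog densities."""
    return {sensor: float(np.mean([michelson_contrast(img, target, background)
                                   for img in images]))
            for sensor, images in images_by_sensor.items()}

# Example patches (hypothetical coordinates of a reflective target and sky):
target = (slice(100, 150), slice(200, 260))
background = (slice(100, 150), slice(300, 360))
```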
Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation
Most progress in semantic segmentation reports on daytime images taken under
favorable illumination conditions. We instead address the problem of semantic
segmentation of nighttime images and improve the state-of-the-art, by adapting
daytime models to nighttime without using nighttime annotations. Moreover, we
design a new evaluation framework to address the substantial uncertainty of
semantics in nighttime images. Our central contributions are: 1) a curriculum
framework to gradually adapt semantic segmentation models from day to night via
labeled synthetic images and unlabeled real images, both for progressively
darker times of day, which exploits cross-time-of-day correspondences for the
real images to guide the inference of their labels; 2) a novel
uncertainty-aware annotation and evaluation framework and metric for semantic
segmentation, designed for adverse conditions and including image regions
beyond human recognition capability in the evaluation in a principled fashion;
3) the Dark Zurich dataset, which comprises 2416 unlabeled nighttime and 2920
unlabeled twilight images with correspondences to their daytime counterparts
plus a set of 151 nighttime images with fine pixel-level annotations created
with our protocol, which serves as a first benchmark to perform our novel
evaluation. Experiments show that our guided curriculum adaptation
significantly outperforms state-of-the-art methods on real nighttime sets both
for standard metrics and our uncertainty-aware metric. Furthermore, our
uncertainty-aware evaluation reveals that selective invalidation of predictions
can lead to better results on data with ambiguous content such as our nighttime
benchmark and benefit safety-oriented applications which involve invalid inputs.
Comment: ICCV 2019 camera-ready
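The curriculum idea can be sketched as a loop over progressively darker
domains, where predictions on the aligned daytime image guide the pseudo-label
for its dark counterpart. All interfaces below (model.predict/train,
daytime_counterpart, fuse) are hypothetical stand-ins; the paper's actual
pipeline additionally trains on labeled synthetic images at each stage.

```python
def fuse(dark_pred, day_pred):
    # Placeholder fusion rule; the paper combines the two predictions in a
    # more principled way to infer labels for the dark image.
    return {"dark": dark_pred, "guidance": day_pred}

def curriculum_adapt(model, stages):
    """stages: unlabeled image sets ordered by darkness, e.g. [twilight, night]."""
    for dark_images in stages:
        pseudo_labeled = []
        for img in dark_images:
            day_pred = model.predict(img.daytime_counterpart)  # cross-time-of-day guidance
            dark_pred = model.predict(img)
            pseudo_labeled.append((img, fuse(dark_pred, day_pred)))
        model.train(pseudo_labeled)  # the next, darker stage starts from adapted weights
    return model
```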
A Benchmark for Lidar Sensors in Fog: Is Detection Breaking Down?
Autonomous driving at level five does not only mean self-driving in the
sunshine. Adverse weather is especially critical because fog, rain, and snow
degrade the perception of the environment. In this work, current
state-of-the-art light detection and ranging (lidar) sensors are tested in
controlled conditions in a fog chamber. We present current problems and
disturbance patterns for four different state-of-the-art lidar systems.
Moreover, we investigate how tuning internal parameters can improve their
performance in bad weather situations. This is of great importance because
most state-of-the-art detection algorithms are based on undisturbed lidar
data.
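One disturbance pattern commonly reported for lidar in fog is a cloud of
spurious, low-intensity returns close to the sensor, caused by backscatter
from the fog itself. Purely as an illustration of the kind of clean-up such
findings motivate, a naive range/intensity gate could suppress this clutter;
the thresholds below are placeholders, not values from the paper.

```python
import numpy as np

def filter_fog_clutter(points, intensity,
                       max_clutter_range=8.0, min_intensity=0.1):
    """points: (N, 3) xyz in metres; intensity: (N,) normalised to [0, 1]."""
    ranges = np.linalg.norm(points, axis=1)
    clutter = (ranges < max_clutter_range) & (intensity < min_intensity)
    return points[~clutter]  # keep everything that is not near-range, weak clutter
```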
Learning Super-resolved Depth from Active Gated Imaging
Environment perception for autonomous driving is constrained by the trade-off
between range-accuracy and resolution: current sensors that deliver very
precise depth information are usually restricted to low resolution because of
technology or cost limitations. In this work, we exploit depth information from
an active gated imaging system based on cost-sensitive diode and CMOS
technology. Learning a mapping between pixel intensities of three gated slices
and depth produces a super-resolved depth map with a respectable relative
accuracy of 5% between 25 and 80 m. By design, the depth information is
perfectly aligned with the pixel intensity values.
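A minimal sketch of such a learned mapping, assuming a PyTorch setup: a small
fully convolutional network takes the three gated slices as a 3-channel input
and regresses one depth value per pixel, so the output is aligned with the
input pixels by construction. The architecture below is a toy placeholder, not
the network from the paper.

```python
import torch
import torch.nn as nn

class GatedDepthNet(nn.Module):
    """Toy net: three gated slices in, per-pixel depth out (same resolution)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, gated_slices: torch.Tensor) -> torch.Tensor:
        # gated_slices: (B, 3, H, W) intensities of the three gated exposures.
        return self.net(gated_slices)  # (B, 1, H, W) depth map
```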
Semantic Understanding of Foggy Scenes with Purely Synthetic Data
This work addresses the problem of semantic scene understanding under foggy
road conditions. Although marked progress has been made in semantic scene
understanding over the recent years, it is mainly concentrated on clear weather
outdoor scenes. Extending semantic segmentation methods to adverse weather
conditions like fog is crucially important for outdoor applications such as
self-driving cars. In this paper, we propose a novel method, which uses purely
synthetic data to improve the performance on unseen real-world foggy scenes
captured in the streets of Zurich and its surroundings. Our results highlight
the potential and power of photo-realistic synthetic images for training and
especially fine-tuning deep neural nets. Our contributions are threefold: 1) we
created a purely synthetic, high-quality foggy dataset of 25,000 unique outdoor
scenes, which we call Foggy Synscapes and plan to release publicly; 2) we show
that with this data we outperform previous approaches on real-world foggy test
data; 3) we show that a combination of our data and previously used data can
even further improve the performance on real-world foggy data.
Comment: independent class IoU scores corrected for the BiSeNet architecture
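Synthetic fog in this line of work typically follows the standard optical
model I(x) = R(x) t(x) + L (1 - t(x)) with transmittance t(x) = exp(-beta d(x)),
where R is the clear-weather image, L the atmospheric light, and d the scene
depth. A minimal sketch, assuming a clear image and a per-pixel metric depth
map are available; the parameter values are illustrative (beta around
0.06 1/m corresponds to roughly 50 m visibility).

```python
import numpy as np

def add_fog(clear, depth, beta=0.06, airlight=0.8):
    """clear: (H, W, 3) image in [0, 1]; depth: (H, W) metric depth."""
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmittance
    return clear * t + airlight * (1.0 - t)   # attenuated signal plus airlight
```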
Pixel-Accurate Depth Evaluation in Realistic Driving Scenarios
This work introduces an evaluation benchmark for depth estimation and
completion using high-resolution depth measurements with angular resolution of
up to 25" (arcsecond), akin to a 50 megapixel camera with per-pixel depth
available. Existing datasets, such as the KITTI benchmark, provide only sparse
reference measurements with an order of magnitude lower angular resolution -
these sparse measurements are treated as ground truth by existing depth
estimation methods. We propose an evaluation methodology in four characteristic
automotive scenarios recorded in varying weather conditions (day, night, fog,
rain). As a result, our benchmark allows us to evaluate the robustness of depth
sensing methods in adverse weather and different driving conditions. Using the
proposed evaluation data, we demonstrate that current stereo approaches provide
significantly more stable depth estimates than monocular methods and lidar
completion in adverse weather. Data and code are available at
https://github.com/gruberto/PixelAccurateDepthBenchmark.git.
Comment: 3DV 2019
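With dense, pixel-accurate ground truth, depth methods can be scored directly
per pixel rather than against sparse reference points. A minimal sketch of
such an evaluation using two standard depth error metrics (MAE and RMSE); the
benchmark's exact metric set may differ.

```python
import numpy as np

def depth_errors(pred, gt, valid):
    """pred, gt: (H, W) depth in metres; valid: boolean mask of scored pixels."""
    diff = pred[valid] - gt[valid]
    return {"MAE": float(np.abs(diff).mean()),
            "RMSE": float(np.sqrt(np.mean(diff ** 2)))}
```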
Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation
We address the problem of semantic nighttime image segmentation and improve
the state-of-the-art, by adapting daytime models to nighttime without using
nighttime annotations. Moreover, we design a new evaluation framework to
address the substantial uncertainty of semantics in nighttime images. Our
central contributions are: 1) a curriculum framework to gradually adapt
semantic segmentation models from day to night through progressively darker
times of day, exploiting cross-time-of-day correspondences between daytime
images from a reference map and dark images to guide the label inference in the
dark domains; 2) a novel uncertainty-aware annotation and evaluation framework
and metric for semantic segmentation, including image regions beyond human
recognition capability in the evaluation in a principled fashion; 3) the Dark
Zurich dataset, comprising 2416 unlabeled nighttime and 2920 unlabeled twilight
images with correspondences to their daytime counterparts plus a set of 201
nighttime images with fine pixel-level annotations created with our protocol,
which serves as a first benchmark for our novel evaluation. Experiments show
that our map-guided curriculum adaptation significantly outperforms
state-of-the-art methods on nighttime sets both for standard metrics and our
uncertainty-aware metric. Furthermore, our uncertainty-aware evaluation reveals
that selective invalidation of predictions can improve results on data with
ambiguous content such as our benchmark and benefit safety-oriented
applications involving invalid inputs.
Comment: IEEE T-PAMI 2020
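The principle behind the uncertainty-aware evaluation can be illustrated with
a heavily simplified score: the model may output an explicit invalid label,
and pixels whose content annotators could not recognise only count as correct
when the model declines to predict there. This sketch illustrates the idea
only; it is not the paper's actual uncertainty-aware metric.

```python
import numpy as np

INVALID = 255  # hypothetical label id for "unrecognisable" / "prediction declined"

def uncertainty_aware_accuracy(pred, gt):
    recognisable = gt != INVALID
    # Correct if matching a recognisable label, or abstaining where human
    # annotators could not recognise the content either.
    correct = np.where(recognisable, pred == gt, pred == INVALID)
    return float(correct.mean())
```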
Can Automated Vehicles "See" in Minnesota? Ambient Particle Effects on LiDAR
This project will use a combination of laboratory experimentation and road demonstrations to better understand the reduction of LiDAR signal and object detection capability under the adverse weather conditions found in Minnesota. It will also lead to concepts for improving LiDAR systems to adapt to such conditions through better signal processing and image recognition software.