CARLANE: A Lane Detection Benchmark for Unsupervised Domain Adaptation from Simulation to multiple Real-World Domains
Unsupervised Domain Adaptation demonstrates great potential to mitigate
domain shifts by transferring models from labeled source domains to unlabeled
target domains. While Unsupervised Domain Adaptation has been applied to a wide
variety of complex vision tasks, only a few works focus on lane detection for
autonomous driving. This can be attributed to the lack of publicly available
datasets. To facilitate research in these directions, we propose CARLANE, a
3-way sim-to-real domain adaptation benchmark for 2D lane detection. CARLANE
encompasses the single-target datasets MoLane and TuLane and the multi-target
dataset MuLane. These datasets are built from three different domains, which
cover diverse scenes and contain a total of 163K unique images, 118K of which
are annotated. In addition, we evaluate and report systematic baselines,
including our own method, which builds upon Prototypical Cross-domain
Self-supervised Learning. We find that false positive and false negative rates
of the evaluated domain adaptation methods are high compared to those of fully
supervised baselines. This affirms the need for benchmarks such as CARLANE to
further strengthen research in Unsupervised Domain Adaptation for lane
detection. CARLANE, all evaluated models and the corresponding implementations
are publicly available at https://carlanebenchmark.github.io.
Comment: 36th Conference on Neural Information Processing Systems (NeurIPS 2022) Track on Datasets and Benchmarks, 22 pages, 11 figures
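The Prototypical Cross-domain Self-supervised Learning that the authors' baseline builds upon can be illustrated with a minimal sketch: compute per-class feature prototypes on the labeled source domain, then pseudo-label unlabeled target features by their most similar prototype. This is our own NumPy illustration of the general prototype mechanism, not the paper's implementation; function names and feature shapes are assumptions.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Mean feature vector per class on the labeled source domain."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def nearest_prototype_labels(target_features, prototypes):
    """Pseudo-label each unlabeled target feature by its most similar
    (cosine) source prototype."""
    # Normalize rows so dot products equal cosine similarities.
    f = target_features / np.linalg.norm(target_features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return np.argmax(f @ p.T, axis=1)
```

In a full pipeline these pseudo-labels would feed a self-supervised training objective; here they only show the cross-domain assignment step.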
Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding
This work addresses the problem of semantic scene understanding under dense
fog. Although considerable progress has been made in semantic scene
understanding, it is mainly related to clear-weather scenes. Extending
recognition methods to adverse weather conditions such as fog is crucial for
outdoor applications. In this paper, we propose a novel method, named
Curriculum Model Adaptation (CMAda), which gradually adapts a semantic
segmentation model from light synthetic fog to dense real fog in multiple
steps, using both synthetic and real foggy data. In addition, we present three
other main stand-alone contributions: 1) a novel method to add synthetic fog to
real, clear-weather scenes using semantic input; 2) a new fog density
estimator; 3) the Foggy Zurich dataset comprising real foggy images,
with pixel-level semantic annotations for images with dense fog. Our
experiments show that 1) our fog simulation slightly outperforms a
state-of-the-art competing simulation with respect to the task of semantic
foggy scene understanding (SFSU); 2) CMAda improves the performance of
state-of-the-art models for SFSU significantly by leveraging unlabeled real
foggy data. The datasets and code are publicly available.
Comment: final version, ECCV 201
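The curriculum step at the heart of CMAda, adapting in stages from light to dense fog, can be sketched as a simple scheduling helper: given a per-image fog density score (as produced by a fog density estimator like the one the paper proposes), order the unlabeled images and split them into stages of increasing density. The staging function below is our own hypothetical sketch, not the paper's procedure.

```python
def curriculum_stages(densities, num_stages):
    """Order sample indices by estimated fog density and split them into
    num_stages chunks, lightest fog first, for stage-wise adaptation."""
    order = sorted(range(len(densities)), key=lambda i: densities[i])
    size = -(-len(order) // num_stages)  # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]
```

Each returned stage would then be used for one adaptation round, so the model sees progressively denser fog.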
VIENA2: A Driving Anticipation Dataset
Action anticipation is critical in scenarios where one needs to react before
the action is finalized. This is, for instance, the case in automated driving,
where a car needs to, e.g., avoid hitting pedestrians and respect traffic
lights. While solutions have been proposed to tackle subsets of these driving
anticipation tasks by making use of diverse, task-specific sensors, there is
no single dataset or framework that addresses them all in a consistent manner.
In this paper, we therefore introduce a new, large-scale dataset, called
VIENA2, covering 5 generic driving scenarios, with a total of 25 distinct
action classes. It contains more than 15K full HD, 5s long videos acquired in
various driving conditions, weather, times of day, and environments, complemented
with a common and realistic set of sensor measurements. This amounts to more
than 2.25M frames, each annotated with an action label, corresponding to 600
samples per action class. We discuss our data acquisition strategy and the
statistics of our dataset, and benchmark state-of-the-art action anticipation
techniques, including a new multi-modal LSTM architecture with an effective
loss function for action anticipation in driving scenarios.
Comment: Accepted in ACCV 201
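The abstract mentions an effective anticipation loss without detailing it; one common way to shape such a loss is to weight per-timestep classification error by how much of the video has been observed. The sketch below is a generic time-weighted cross-entropy of our own devising, not the paper's loss function.

```python
import numpy as np

def anticipation_loss(step_probs, label):
    """Time-weighted cross-entropy over per-step predictions for one video.
    step_probs: (T, C) class probabilities at each observed step.
    Later steps carry linearly larger weight, so mistakes made when little
    of the action has been seen are penalized less than late mistakes."""
    T = step_probs.shape[0]
    weights = np.arange(1, T + 1) / T          # t/T for t = 1..T
    nll = -np.log(step_probs[:, label] + 1e-12)
    return float(np.sum(weights * nll) / np.sum(weights))
```

With uniform predictions over two classes this reduces to ln 2 at every step, and more confident correct predictions lower it.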
Real-Time Fully Unsupervised Domain Adaptation for Lane Detection in Autonomous Driving
While deep neural networks are being utilized heavily for autonomous driving,
they need to be adapted to new unseen environmental conditions for which they
were not trained. We focus on a safety critical application of lane detection,
and propose a lightweight, fully unsupervised, real-time adaptation approach
that only adapts the batch-normalization parameters of the model. We
demonstrate that our technique can perform inference, followed by on-device
adaptation, under a tight constraint of 30 FPS on an Nvidia Jetson Orin. It
achieves accuracy (92.19% on average) similar to that of a state-of-the-art
semi-supervised adaptation algorithm which, however, does not support real-time
adaptation.
Comment: Accepted in 2023 Design, Automation & Test in Europe Conference (DATE 2023) - Late Breaking Result
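The described approach, adapting only the batch-normalization parameters on-device, can be sketched as re-estimating BN statistics from unlabeled target batches while leaving the learned affine parameters untouched. This is a minimal NumPy illustration under our own assumptions about shapes and update rule, not the paper's implementation.

```python
import numpy as np

def adapt_batchnorm(x, gamma, beta, run_mean, run_var, momentum=0.1, eps=1e-5):
    """One unsupervised adaptation step on a batch-norm layer: re-estimate
    the running statistics from an unlabeled target batch x of shape (N, C)
    and normalize with the batch statistics; the learned affine parameters
    gamma and beta are left untouched."""
    mean, var = x.mean(axis=0), x.var(axis=0)
    # Exponential moving average keeps adaptation cheap and streaming-friendly.
    run_mean = (1.0 - momentum) * run_mean + momentum * mean
    run_var = (1.0 - momentum) * run_var + momentum * var
    out = gamma * (x - mean) / np.sqrt(var + eps) + beta
    return out, run_mean, run_var
```

Because only the C-dimensional statistics are updated, the per-batch cost is negligible next to inference, which is what makes a real-time on-device budget plausible.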