Collaborative Dynamic 3D Scene Graphs for Automated Driving
Maps have played an indispensable role in enabling safe and automated
driving. Although there have been many advances on different fronts ranging
from SLAM to semantics, building an actionable hierarchical semantic
representation of urban dynamic scenes from multiple agents is still a
challenging problem. In this work, we present Collaborative URBan Scene Graphs
(CURB-SG) that enable higher-order reasoning and efficient querying for many
functions of automated driving. CURB-SG leverages panoptic LiDAR data from
multiple agents to build large-scale maps using an effective graph-based
collaborative SLAM approach that detects inter-agent loop closures. To
semantically decompose the obtained 3D map, we build a lane graph from the
paths of ego agents and their panoptic observations of other vehicles. Based on
the connectivity of the lane graph, we segregate the environment into
intersecting and non-intersecting road areas. Subsequently, we construct a
multi-layered scene graph that includes lane information, the position of
static landmarks and their assignment to certain map sections, other vehicles
observed by the ego agents, and the pose graph from SLAM including 3D panoptic
point clouds. We extensively evaluate CURB-SG in urban scenarios using a
photorealistic simulator. We release our code at
http://curb.cs.uni-freiburg.de.
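To make the layered structure concrete, below is a minimal sketch of a multi-layered scene graph along the lines the abstract describes: a lane graph, map sections labeled intersecting or non-intersecting, static landmarks and observed vehicles assigned to sections, and per-agent pose graphs. This is not the authors' released code; all class names, fields, and the example query are hypothetical illustrations.

```python
# A minimal sketch of a multi-layered scene graph as described in the
# abstract. NOT the authors' released code; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LaneNode:
    node_id: int
    position: tuple                                   # (x, y) in the map frame
    successors: list = field(default_factory=list)    # ids of connected LaneNodes

@dataclass
class MapSection:
    section_id: int
    kind: str                                         # "intersecting" or "non-intersecting"
    lane_nodes: list = field(default_factory=list)    # LaneNode ids in this section
    landmarks: list = field(default_factory=list)     # static landmark ids assigned here
    vehicles: list = field(default_factory=list)      # vehicles observed by ego agents

@dataclass
class SceneGraph:
    lanes: dict = field(default_factory=dict)         # node_id -> LaneNode
    sections: dict = field(default_factory=dict)      # section_id -> MapSection
    pose_graphs: dict = field(default_factory=dict)   # agent_id -> list of SLAM poses

    def vehicles_near_intersections(self):
        """Example of a higher-order query: all observed vehicles that are
        currently assigned to intersecting road sections."""
        return [v for s in self.sections.values()
                if s.kind == "intersecting" for v in s.vehicles]
```

In a full system each layer would additionally reference the panoptic point clouds attached to the pose graph; the sketch only shows how a query such as "vehicles near intersections" reduces to a walk over the layered structure.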
VMA: Divide-and-Conquer Vectorized Map Annotation System for Large-Scale Driving Scene
High-definition (HD) maps serve as essential infrastructure for autonomous
driving. In this work, we build a systematic vectorized map annotation
framework (termed VMA) for efficiently generating HD maps of large-scale
driving scenes. We design a divide-and-conquer annotation scheme to solve the
spatial extensibility problem of HD map generation, and abstract map elements
with a variety of geometric patterns into a unified point-sequence
representation that extends to most map elements in the driving scene. VMA is highly
efficient and extensible, requiring negligible human effort, and flexible in
terms of spatial scale and element type. We quantitatively and qualitatively
validate the annotation performance on real-world urban and highway scenes, as
well as the NYC Planimetric Database. VMA significantly improves map generation
efficiency while requiring little human effort: on average, it takes 160 min to
annotate a scene spanning hundreds of meters and reduces human cost by 52.3%,
demonstrating strong application value.
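The two core mechanisms named in the abstract, a unified point-sequence representation and divide-and-conquer splitting of a large scene, can be sketched as follows. This is a hypothetical illustration rather than the released VMA system; the class names, the centroid-based assignment, and the 200 m unit size are all assumptions.

```python
# A minimal sketch (NOT the released VMA system) of two ideas from the
# abstract: map elements as unified point sequences, and divide-and-conquer
# splitting of a large scene into units. All names here are hypothetical.
import numpy as np

class MapElement:
    """Any map element (lane divider, crosswalk, boundary, ...) stored as an
    ordered point sequence, so one representation covers many geometries."""
    def __init__(self, element_type: str, points):
        self.element_type = element_type
        self.points = np.asarray(points, dtype=float)  # shape (N, 2)

def split_into_units(elements, unit_size=200.0):
    """Divide-and-conquer: bucket elements into square spatial units so each
    unit can be annotated independently and merged afterwards."""
    units = {}
    for el in elements:
        cx, cy = el.points.mean(axis=0)                # assign by element centroid
        key = (int(cx // unit_size), int(cy // unit_size))
        units.setdefault(key, []).append(el)
    return units

# Usage: two elements landing in different 200 m units.
divider = MapElement("lane_divider", [[0, 0], [50, 1], [100, 2]])
crosswalk = MapElement("crosswalk", [[410, 5], [415, 5], [415, 9], [410, 9]])
print(sorted(split_into_units([divider, crosswalk]).keys()))   # [(0, 0), (2, 0)]
```

Because every element is just an ordered point sequence, adding a new element type only requires a new label, not a new geometry pipeline, which is what makes the scheme extensible across spatial scales and element types.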
- …