CityRefer: Geography-aware 3D Visual Grounding Dataset on City-scale Point Cloud Data
City-scale 3D point clouds are a promising way to represent detailed and
complicated outdoor structures. They encompass both the appearance and geometry
features of segmented city components, including cars, streets, and buildings,
which can be utilized for attractive applications such as user-interactive
navigation of autonomous vehicles and drones. However, compared to the
extensive text annotations available for images and indoor scenes, the scarcity
of text annotations for outdoor scenes poses a significant challenge for
achieving these applications. To tackle this problem, we introduce the
CityRefer dataset for city-level visual grounding. The dataset consists of 35k
natural language descriptions of 3D objects appearing in SensatUrban city
scenes and 5k landmark labels synchronized with OpenStreetMap. To ensure the
quality and accuracy of the dataset, all descriptions and labels in the
CityRefer dataset are manually verified. We have also developed a baseline
system that can learn encoded language descriptions, 3D object instances, and
geographical information about the city's landmarks to perform visual grounding
on the CityRefer dataset. To the best of our knowledge, the CityRefer dataset
is the largest city-level visual grounding dataset for localizing specific 3D
objects.

Comment: NeurIPS D&B 2023. The first two authors contributed equally.
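The abstract only sketches the baseline at a conceptual level: it fuses an encoded language description, per-object 3D instance features, and geographic landmark information to localize the referred object. Below is a minimal, hypothetical PyTorch sketch of such a fusion-and-scoring baseline; all module names, feature dimensions, and the late-fusion design are assumptions for illustration and are not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class GroundingBaselineSketch(nn.Module):
    """Hypothetical sketch: fuse a language embedding, per-object 3D features,
    and landmark geo-features, then score each candidate object."""

    def __init__(self, lang_dim=300, obj_dim=128, geo_dim=32, hidden=256):
        super().__init__()
        self.lang_enc = nn.GRU(lang_dim, hidden, batch_first=True)  # description encoder (assumed)
        self.obj_proj = nn.Linear(obj_dim, hidden)                   # 3D instance features (assumed)
        self.geo_proj = nn.Linear(geo_dim, hidden)                   # landmark geo features (assumed)
        self.scorer = nn.Sequential(
            nn.Linear(hidden * 3, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, lang_tokens, obj_feats, geo_feats):
        # lang_tokens: (B, T, lang_dim); obj_feats: (B, N, obj_dim); geo_feats: (B, N, geo_dim)
        _, h = self.lang_enc(lang_tokens)                   # final hidden state: (1, B, hidden)
        lang = h[-1].unsqueeze(1).expand(-1, obj_feats.size(1), -1)
        fused = torch.cat([lang, self.obj_proj(obj_feats), self.geo_proj(geo_feats)], dim=-1)
        return self.scorer(fused).squeeze(-1)               # (B, N) score per candidate object

# Toy usage: score 10 candidate objects against one 12-token description embedding.
model = GroundingBaselineSketch()
scores = model(torch.randn(1, 12, 300), torch.randn(1, 10, 128), torch.randn(1, 10, 32))
pred = scores.argmax(dim=-1)  # index of the predicted referred object
```

The sketch assumes pre-extracted object and landmark features; the actual baseline described in the paper operates on SensatUrban point clouds and OpenStreetMap landmarks, so its encoders and fusion scheme may differ.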