The ability to efficiently utilize crowdsourced visual data carries immense
potential for the domains of large-scale dynamic mapping and autonomous
driving. However, state-of-the-art methods for crowdsourced 3D mapping assume
prior knowledge of camera intrinsics. In this work, we propose a framework that
estimates the 3D positions of semantically meaningful landmarks such as traffic
signs without assuming known camera intrinsics, using only a monocular color
camera and GPS. We utilize multi-view geometry as well as deep-learning-based
self-calibration, depth, and ego-motion estimation for traffic sign
positioning, and show that combining their strengths is important for
increasing map coverage. To facilitate research on this task, we construct
and make available a KITTI-based 3D traffic sign ground-truth positioning
dataset. Using our proposed framework, we achieve average single-journey
relative and absolute positioning accuracies of 39 cm and 1.26 m, respectively,
on this dataset.

Comment: Accepted at the 2020 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS).
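To illustrate the multi-view geometry component mentioned above, below is a minimal sketch of linear (DLT) two-view triangulation of a single landmark, such as a traffic sign detection, from two pixel observations. This is not the paper's implementation: the function name is hypothetical, and the projection matrices are assumed to be already available (in the proposed pipeline they would come from the self-calibration and ego-motion estimates).

```python
import numpy as np

def triangulate_landmark(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two views.

    P1, P2 : 3x4 camera projection matrices (intrinsics @ [R|t]),
             assumed known here (e.g., from self-calibration and ego-motion).
    x1, x2 : (u, v) pixel detections of the landmark in each view.
    Returns the landmark position as a 3-vector in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X, derived from x ~ P X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 in the least-squares sense via SVD; the solution is
    # the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With GPS anchoring the camera poses in a global frame, the triangulated point can then be reported as an absolute landmark position; the DLT step shown here extends naturally to more than two views by stacking additional rows into A.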