Advances in deep learning have resulted in steady progress in computer vision
with improved accuracy on tasks such as object detection and semantic
segmentation. Nevertheless, deep neural networks are vulnerable to adversarial
attacks, posing a challenge to their reliable deployment. Two prominent tasks in
3D scene understanding for robotics and advanced driver assistance systems are
monocular depth and pose estimation, often learned jointly in an unsupervised
manner. While studies evaluating the impact of
adversarial attacks on monocular depth estimation exist, a systematic
demonstration and analysis of adversarial perturbations against pose estimation
are lacking. We show how additive, imperceptible perturbations can not only
change predictions to increase trajectory drift but also catastrophically
alter the trajectory's geometry. We also study the relationship between adversarial
perturbations targeting monocular depth and pose estimation networks, as well
as the transferability of perturbations to other networks with different
architectures and losses. Our experiments show how the generated perturbations
lead to notable errors in relative rotation and translation predictions and
elucidate vulnerabilities of the networks.

Comment: Accepted at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022).
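To illustrate the kind of additive attack described above, the sketch below shows a generic PGD-style perturbation against a relative-pose network. This is not the paper's exact method; the `pose_net` interface (a stacked image pair mapped to a 6-DoF pose vector) and all hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def pgd_pose_attack(pose_net, img_pair, eps=2/255, alpha=0.5/255, steps=10):
    """Generic PGD-style additive perturbation that pushes the predicted
    relative pose away from the clean prediction.

    pose_net:  assumed callable mapping a (B, 6, H, W) stacked image pair
               to a (B, 6) rotation + translation vector.
    img_pair:  input tensor in [0, 1], shape (B, 6, H, W).
    eps:       L-infinity bound keeping the perturbation imperceptible.
    """
    clean_pose = pose_net(img_pair).detach()  # reference (unattacked) prediction
    delta = torch.zeros_like(img_pair).uniform_(-eps, eps).requires_grad_(True)

    for _ in range(steps):
        adv_pose = pose_net((img_pair + delta).clamp(0, 1))
        # Maximize the deviation from the clean pose prediction.
        loss = nn.functional.mse_loss(adv_pose, clean_pose)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient-ascent step
            delta.clamp_(-eps, eps)             # stay within the L-infinity ball
        delta.grad.zero_()

    return (img_pair + delta).clamp(0, 1).detach()
```

Accumulating such per-frame pose errors over a sequence is what produces the trajectory drift and geometric distortion discussed in the abstract.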