ImPosing: Implicit Pose Encoding for Efficient Visual Localization
We propose a novel learning-based formulation for visual localization of
vehicles that can operate in real-time in city-scale environments. Visual
localization algorithms determine the position and orientation from which an
image has been captured, using a set of geo-referenced images or a 3D scene
representation. Our new localization paradigm, named Implicit Pose Encoding
(ImPosing), embeds images and camera poses into a common latent representation
with 2 separate neural networks, such that we can compute a similarity score
for each image-pose pair. By evaluating candidates through the latent space in
a hierarchical manner, the camera position and orientation are not directly
regressed but incrementally refined. Very large environments force competing methods
to store gigabytes of map data, whereas our method remains very compact
regardless of the reference database size. In this paper, we describe how to
effectively optimize our learned modules, how to combine them to achieve
real-time localization, and demonstrate results on diverse large-scale
scenarios that significantly outperform prior work in accuracy and
computational efficiency.
Comment: Accepted at WACV 202
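To make the image-pose similarity idea concrete, here is a minimal NumPy sketch. The two learned encoders are replaced by hypothetical random linear maps, and the hierarchical refinement scheme (keep the best-scoring candidates, resample around them at a finer scale) is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two learned encoders: in the paper these are
# neural networks; random linear maps are enough to illustrate the interface.
W_img = rng.normal(size=(16, 128))   # image feature -> shared latent
W_pose = rng.normal(size=(16, 7))    # pose (x, y, z, qw, qx, qy, qz) -> shared latent

def embed_image(feat):
    v = W_img @ feat
    return v / np.linalg.norm(v)

def embed_pose(pose):
    v = W_pose @ pose
    return v / np.linalg.norm(v)

def similarity(img_latent, pose):
    # Score an image-pose pair by cosine similarity in the shared latent space.
    return float(img_latent @ embed_pose(pose))

def refine(img_latent, candidates, rounds=3, keep=4, scale=1.0):
    # Hierarchical search: instead of regressing the pose directly, keep the
    # best-scoring candidates and resample around them at a finer scale.
    for _ in range(rounds):
        scores = np.array([similarity(img_latent, c) for c in candidates])
        best = candidates[np.argsort(scores)[-keep:]]
        noise = rng.normal(scale=scale, size=(keep * 8, 7))
        candidates = np.repeat(best, 8, axis=0) + noise
        scale *= 0.5
    scores = np.array([similarity(img_latent, c) for c in candidates])
    return candidates[int(np.argmax(scores))]

img_latent = embed_image(rng.normal(size=128))
initial = rng.normal(size=(64, 7))   # coarse pose candidates
best_pose = refine(img_latent, initial)
print(best_pose.shape)  # (7,)
```

Note that the map itself never needs to be stored: only the pose encoder's weights are kept, which is why the memory footprint is independent of the reference database size.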
A Probabilistic Rotation Representation for Symmetric Shapes With an Efficiently Computable Bingham Loss Function
In recent years, deep learning frameworks have been widely used for object
pose estimation. While the quaternion is a common choice of rotation
representation, it cannot represent the ambiguity of an observation. The
Bingham distribution is one promising way to handle this ambiguity. However,
it requires complicated calculations to evaluate the negative log-likelihood
(NLL) loss. An alternative, easy-to-implement loss function has been proposed
that avoids these computations, but it has difficulty expressing symmetric
distributions. In this paper, we introduce a fast-to-compute,
easy-to-implement NLL loss function for the Bingham distribution. We also
build the corresponding inference network and show that our loss function can capture the symmetric
property of target objects from their point clouds.
Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. arXiv admin note: substantial text overlap with arXiv:2203.0445
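To see why the Bingham NLL is expensive in the first place, here is a brute-force NumPy sketch: the density is exp(qᵀAq)/C(A) on the unit 3-sphere, and the normalizing constant C(A) is approximated by Monte Carlo integration. This is the slow baseline the paper's fast loss is designed to replace; the parameter matrix A below is an illustrative choice, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def bingham_nll(q, A, n_samples=100_000):
    """NLL of unit quaternion q under Bingham(A), A symmetric 4x4.

    Density: p(q) = exp(q^T A q) / C(A) on S^3. Because p(q) = p(-q), the
    distribution is antipodally symmetric, matching the two-to-one mapping of
    quaternions onto rotations. C(A) is approximated here by Monte Carlo
    integration -- exactly the complicated computation the paper avoids.
    """
    # Uniform samples on S^3: normalized 4D Gaussians.
    samples = rng.normal(size=(n_samples, 4))
    samples /= np.linalg.norm(samples, axis=1, keepdims=True)
    area = 2.0 * np.pi ** 2  # surface area of the unit 3-sphere
    C = area * np.mean(np.exp(np.einsum("ni,ij,nj->n", samples, A, samples)))
    return -(q @ A @ q) + np.log(C)

# Concentration around the identity quaternion (1, 0, 0, 0).
A = np.diag([0.0, -10.0, -10.0, -10.0])
q_mode = np.array([1.0, 0.0, 0.0, 0.0])
q_far = np.array([0.0, 1.0, 0.0, 0.0])
print(bingham_nll(q_mode, A) < bingham_nll(q_far, A))  # True: the mode is more likely
```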
RelPose: Predicting Probabilistic Relative Rotation for Single Objects in the Wild
We describe a data-driven method for inferring the camera viewpoints given
multiple images of an arbitrary object. This task is a core component of
classic geometric pipelines such as SfM and SLAM, and also serves as a vital
pre-processing requirement for contemporary neural approaches (e.g. NeRF) to
object reconstruction and view synthesis. In contrast to existing
correspondence-driven methods that do not perform well given sparse views, we
propose a top-down prediction based approach for estimating camera viewpoints.
Our key technical insight is the use of an energy-based formulation for
representing distributions over relative camera rotations, thus allowing us to
explicitly represent multiple camera modes arising from object symmetries or
views. Leveraging these relative predictions, we jointly estimate a consistent
set of camera rotations from multiple images. We show that our approach
outperforms state-of-the-art SfM and SLAM methods given sparse images on both
seen and unseen categories. Further, our probabilistic approach significantly
outperforms directly regressing relative poses, suggesting that modeling
multimodality is important for coherent joint reconstruction. We demonstrate
that our system can be a stepping stone toward in-the-wild reconstruction from
multi-view datasets. The project page with code and videos can be found at
https://jasonyzhang.com/relpose.
Comment: In ECCV 2022. V2: updated reference
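The energy-based formulation can be sketched in a few lines of NumPy: a scoring function evaluates an image-pair feature together with a candidate relative rotation, and a softmax over sampled candidates yields a distribution that can carry several modes. The random-weight scorer below is a hypothetical stand-in for the learned network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical energy function standing in for the learned network: it scores
# an image-pair feature jointly with a candidate relative rotation matrix.
W = rng.normal(size=(32 + 9,))

def energy(pair_feat, R):
    return float(W @ np.concatenate([pair_feat, R.ravel()]))

def random_rotation():
    # Roughly uniform rotation via QR decomposition of a Gaussian matrix.
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q * np.sign(np.linalg.det(Q))  # force det = +1

# Represent the distribution over relative rotations by evaluating energies on
# a sampled candidate set and normalizing with a softmax. Object symmetries
# show up as several high-probability modes rather than one regressed rotation.
pair_feat = rng.normal(size=32)
candidates = [random_rotation() for _ in range(512)]
energies = np.array([energy(pair_feat, R) for R in candidates])
probs = np.exp(energies - energies.max())
probs /= probs.sum()
print(np.isclose(probs.sum(), 1.0))  # a normalized distribution over candidates
```

Joint estimation over multiple images then amounts to searching for the set of absolute rotations whose pairwise relative rotations score highly under these distributions.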
Benchmarking Visual-Inertial Deep Multimodal Fusion for Relative Pose Regression and Odometry-aided Absolute Pose Regression
Visual-inertial localization is a key problem in computer vision and robotics
applications such as virtual reality, self-driving cars, and aerial vehicles.
The goal is to estimate an accurate pose of an object when either the
environment or the dynamics are known. Recent methods directly regress the pose
using convolutional and spatio-temporal networks. Absolute pose regression
(APR) techniques predict the absolute camera pose from an image input in a
known scene. Odometry methods perform relative pose regression (RPR),
predicting the relative pose from known object dynamics (visual or inertial
inputs). The localization task can be improved by combining information from
both data sources in a cross-modal setup, which is challenging because the two
tasks are contradictory. In this work, we conduct a benchmark to evaluate deep
multimodal fusion based on pose graph optimization (PGO) and attention networks. Auxiliary and Bayesian
learning are integrated for the APR task. We show accuracy improvements for the
RPR-aided APR task and for the RPR-RPR task for aerial vehicles and hand-held
devices. We conduct experiments on the EuRoC MAV and PennCOSYVIO datasets, and
record a novel industry dataset.
Comment: Under review
Implicit-PDF: Non-Parametric Representation of Probability Distributions on the Rotation Manifold
Single-image pose estimation is a fundamental problem in many vision and
robotics tasks, and existing deep learning approaches suffer from not fully
modeling and handling: i) uncertainty about the predictions, and ii) symmetric
objects with multiple (sometimes infinitely many) correct poses. To this end, we
introduce a method to estimate arbitrary, non-parametric distributions on
SO(3). Our key idea is to represent the distributions implicitly, with a neural
network that estimates the probability given the input image and a candidate
pose. Grid sampling or gradient ascent can be used to find the most likely
pose, but it is also possible to evaluate the probability at any pose, enabling
reasoning about symmetries and uncertainty. This is the most general way of
representing distributions on manifolds, and to showcase the rich expressive
power, we introduce a dataset of challenging symmetric and nearly-symmetric
objects. We require no supervision on pose uncertainty -- the model trains only
with a single pose per example. Nonetheless, our implicit model is expressive
enough to handle complex distributions over 3D poses, while still obtaining
accurate pose estimates in standard, non-ambiguous settings, achieving
state-of-the-art performance on the Pascal3D+ and ModelNet10-SO(3) benchmarks.
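The implicit representation can be sketched as follows: a network scores (image, candidate rotation) pairs, and normalizing those scores over a set of rotations yields a density on SO(3), whose total volume is taken as π². The random-weight scorer and the Monte Carlo "grid" below are hypothetical simplifications of the learned network and the equivolumetric grid.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scoring network: maps (image feature, candidate rotation) to an
# unnormalized log-probability, as in the implicit representation.
W = rng.normal(size=(16 + 9,))

def score(img_feat, R):
    return float(W @ np.concatenate([img_feat, R.ravel()]))

def sample_so3(n):
    # Stand-in for an equivolumetric grid: random rotations via QR.
    Qs = []
    for _ in range(n):
        Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        Qs.append(Q * np.sign(np.linalg.det(Q)))
    return Qs

# Normalize the scores into a density on SO(3):
# p(R | image) ~= exp(score) / (sum_j exp(score_j) * V / N), with V = pi^2.
img_feat = rng.normal(size=16)
grid = sample_so3(1024)
logits = np.array([score(img_feat, R) for R in grid])
weights = np.exp(logits - logits.max())
V = np.pi ** 2
density = weights / (weights.sum() * (V / len(grid)))

# The most likely pose is the argmax over the grid, but score() can also be
# queried at any pose, which is what enables symmetry/uncertainty reasoning.
R_best = grid[int(np.argmax(density))]
print(np.allclose(R_best @ R_best.T, np.eye(3)))  # a valid rotation
```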
Learning to Predict Dense Correspondences for 6D Pose Estimation
Object pose estimation is an important problem in computer vision with applications in robotics, augmented reality and many other areas. An established strategy for object pose estimation consists of, firstly, finding correspondences between the image and the object’s reference frame, and, secondly, estimating the pose from outlier-free correspondences using Random Sample Consensus (RANSAC). The first step, namely finding correspondences, is difficult because object appearance varies depending on perspective, lighting and many other factors. Traditionally, correspondences have been established using handcrafted methods like sparse feature pipelines.
In this thesis, we introduce a dense correspondence representation for objects, called object coordinates, which can be learned. By learning object coordinates, our pose estimation pipeline adapts to various aspects of the task at hand. It works well for diverse object types, from small objects to entire rooms, varying object attributes, like textured or texture-less objects, and different input modalities, like RGB-D or RGB images. The concept of object coordinates allows us to easily model and exploit uncertainty as part of the pipeline, such that even repeating structures or areas with little texture can contribute to a good solution. Although we can train object coordinate predictors independently of the full pipeline and achieve good results, training the pipeline in an end-to-end fashion is desirable. It enables the object coordinate predictor to adapt its output to the specifics of subsequent steps in the pose estimation pipeline. Unfortunately, the RANSAC component of the pipeline is non-differentiable, which prohibits end-to-end training. Adopting techniques from reinforcement learning, we introduce Differentiable Sample Consensus (DSAC), a formulation of RANSAC that allows us to train the pose estimation pipeline in an end-to-end fashion by minimizing the expectation of the final pose error.
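The core DSAC idea, soft hypothesis selection, can be illustrated in a few lines: rather than RANSAC's hard argmax over hypotheses, a hypothesis is selected with probability softmax(score), and training minimizes the expected pose error, which is differentiable in the scores. The toy scores and errors below are invented for illustration.

```python
import numpy as np

def dsac_expected_loss(scores, errors, alpha=1.0):
    """DSAC-style training signal: select a pose hypothesis with probability
    softmax(alpha * score) instead of RANSAC's hard argmax, and minimize the
    expected pose error -- a quantity that is differentiable in the scores
    (and, through them, in the network predicting the correspondences)."""
    p = np.exp(alpha * scores - np.max(alpha * scores))
    p /= p.sum()
    return float(p @ errors)

# Toy setup: three pose hypotheses with consensus scores (e.g. inlier counts)
# and their ground-truth pose errors.
scores = np.array([3.0, 10.0, 1.0])   # hypothesis 1 has the most support...
errors = np.array([0.8, 0.1, 1.5])    # ...and also the lowest pose error
loss = dsac_expected_loss(scores, errors)
print(loss < errors.mean())  # True: the expectation is pulled toward the best hypothesis
```

Because the expectation weights every hypothesis by its selection probability, gradients flow back through the scores to the object coordinate predictor, which is what makes end-to-end training of the whole pipeline possible.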