Semantic Visual Localization
Robust visual localization under a wide range of viewing conditions is a
fundamental problem in computer vision. Handling the difficult cases of this
problem is not only very challenging but also of high practical relevance,
e.g., in the context of life-long localization for augmented reality or
autonomous robots. In this paper, we propose a novel approach based on a joint
3D geometric and semantic understanding of the world, enabling it to succeed
under conditions where previous approaches failed. Our method leverages a novel
generative model for descriptor learning, trained on semantic scene completion
as an auxiliary task. The resulting 3D descriptors are robust to missing
observations by encoding high-level 3D geometric and semantic information.
Experiments on several challenging large-scale localization datasets
demonstrate reliable localization under extreme viewpoint, illumination, and
geometry changes.
GASP : Geometric Association with Surface Patches
A fundamental challenge to sensory processing tasks in perception and
robotics is the problem of obtaining data associations across views. We present
a robust solution for ascertaining potentially dense surface patch (superpixel)
associations, requiring just range information. Our approach involves
decomposition of a view into regularized surface patches. We represent them as
sequences expressing geometry invariantly over their superpixel neighborhoods,
as uniquely consistent partial orderings. We match these representations
through an optimal sequence comparison metric based on the Damerau-Levenshtein
distance - enabling robust association with quadratic complexity (in contrast
to hitherto employed joint matching formulations which are NP-complete). The
approach is able to perform under wide baselines, heavy rotations, partial
overlaps, significant occlusions and sensor noise.
The technique does not require any priors -- motion or otherwise, and does
not make restrictive assumptions on scene structure and sensor movement. It
does not require appearance information; it is hence more widely applicable than
appearance-reliant methods, and is invulnerable to related ambiguities such as textureless or
aliased content. We present promising qualitative and quantitative results
under diverse settings, along with comparatives with popular approaches based
on range as well as RGB-D data.
Comment: International Conference on 3D Vision, 201
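The sequence comparison at the core of the approach can be illustrated with the restricted (optimal string alignment) variant of the Damerau-Levenshtein distance. The string encoding of patch neighborhoods below is a simplification for illustration; the paper's actual representation uses partial orderings over superpixel neighborhoods:

```python
def osa_distance(a, b):
    """Optimal-string-alignment (restricted Damerau-Levenshtein) distance.

    Counts insertions, deletions, substitutions, and adjacent
    transpositions between two sequences, in O(len(a) * len(b)) time.
    """
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            # adjacent transposition, e.g. "ca" -> "ac"
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + cost)
    return d[m][n]
```

The quadratic cost of this dynamic program is what gives the association step its tractable complexity, in contrast to the NP-complete joint matching formulations mentioned above.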
Close range mini UAVs photogrammetry for architecture survey
The survey of historical façades presents several bottlenecks, mainly related to the geometrical structure, the decorative framework, the presence of natural or artificial obstacles, and environmental limitations. Urban contexts impose additional restrictions that constrain ground-based acquisition and lead to gaps in the building data. The integration of TLS and close-range photogrammetry overcomes many of these issues, but not the shadowing effects inherent in the ground-level point of view. In recent years the widespread use of UAVs in survey activity has extended survey capabilities, enabling a deeper analysis of the architecture. In the meantime, several countries have introduced regulations governing the use of UAVs in different fields, strongly restricting their application in urban areas. Recently, very small and light platforms have been presented that can partially escape these regulatory restrictions, opening up very interesting future scenarios. This article presents the application of one of these very small RPAS (less than 300 g), equipped with a low-cost camera, to a close-range photogrammetric survey of a historical building façade in Bologna (Italy). The analysis aims to assess the system's accuracy and its capacity for acquiring fine detail. The final aim of the paper is to validate the application of this new platform in an architectural survey pipeline, widening the future application of close-range photogrammetry in the architecture acquisition process.
Understanding the Limitations of CNN-based Absolute Camera Pose Regression
Visual localization is the task of accurate camera pose estimation in a known
scene. It is a key problem in computer vision and robotics, with applications
including self-driving cars, Structure-from-Motion, SLAM, and Mixed Reality.
Traditionally, the localization problem has been tackled using 3D geometry.
Recently, end-to-end approaches based on convolutional neural networks have
become popular. These methods learn to directly regress the camera pose from an
input image. However, they do not achieve the same level of pose accuracy as 3D
structure-based methods. To understand this behavior, we develop a theoretical
model for camera pose regression. We use our model to predict failure cases for
pose regression techniques and verify our predictions through experiments. We
furthermore use our model to show that pose regression is more closely related
to pose approximation via image retrieval than to accurate pose estimation via
3D structure. A key result is that current approaches do not consistently
outperform a handcrafted image retrieval baseline. This clearly shows that
additional research is needed before pose regression algorithms are ready to
compete with structure-based methods.
Comment: Initial version of a paper accepted to CVPR 201
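The retrieval-based pose approximation that the paper relates regression to can be sketched roughly as follows. The softmax weighting, the descriptor format, and the choice to take the nearest image's rotation are illustrative assumptions here, not the paper's exact formulation:

```python
import numpy as np

def retrieve_pose(query_desc, db_descs, db_poses, k=3):
    """Approximate a query camera pose from retrieved database images.

    Finds the k database images whose global descriptors are closest to
    the query, then returns a softmax-weighted average of their camera
    translations and, for simplicity, the rotation of the single nearest
    image. This stands in for accurate pose estimation: the output is
    confined to (combinations of) poses seen in the database.
    """
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    idx = np.argsort(dists)[:k]          # k nearest neighbors
    w = np.exp(-dists[idx])
    w /= w.sum()                         # softmax-style weights
    t = (w[:, None] * db_poses["t"][idx]).sum(axis=0)  # blended translation
    R = db_poses["R"][idx[0]]            # rotation of the nearest image
    return R, t
```

The limitation this exposes is exactly the one the paper argues for: a pose regressor, like this baseline, interpolates among training poses rather than reasoning about 3D structure.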
Cluster-Wise Ratio Tests for Fast Camera Localization
Feature point matching for camera localization suffers from scalability
problems. Even when feature descriptors associated with 3D scene points are
locally unique, as coverage grows, similar or repeated features become
increasingly common. As a result, the standard distance ratio-test used to
identify reliable image feature points is overly restrictive and rejects many
good candidate matches. We propose a simple coarse-to-fine strategy that uses
conservative approximations to robust local ratio-tests that can be computed
efficiently using global approximate k-nearest neighbor search. We treat these
forward matches as votes in camera pose space and use them to prioritize
back-matching within candidate camera pose clusters, exploiting feature
co-visibility captured by clustering the 3D model camera pose graph. This
approach achieves state-of-the-art camera localization results on a variety of
popular benchmarks, outperforming several methods that use more complicated
data structures and that make more restrictive assumptions on camera pose. We
also carry out diagnostic analyses on a difficult test dataset containing
globally repetitive structure that suggest our approach successfully adapts to
the challenges of large-scale image localization.
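The standard distance ratio test that the abstract argues becomes overly restrictive at scale can be sketched as follows. This is the brute-force global baseline, not the paper's cluster-wise variant, and the 0.8 threshold is a conventional choice rather than a value from the paper:

```python
import numpy as np

def ratio_test_matches(query_descs, db_descs, ratio=0.8):
    """Lowe-style distance ratio test over a descriptor database.

    A query feature is matched to its nearest database descriptor only
    if that neighbor is sufficiently closer than the second nearest.
    With repeated structure in large models, the two distances become
    similar and many good matches are rejected -- the failure mode the
    cluster-wise local ratio tests are designed to avoid.
    """
    matches = []
    for qi, q in enumerate(query_descs):
        d = np.linalg.norm(db_descs - q, axis=1)
        i1, i2 = np.argsort(d)[:2]       # two nearest neighbors
        if d[i1] < ratio * d[i2]:        # accept only unambiguous matches
            matches.append((qi, int(i1)))
    return matches
```

Running the test per pose cluster instead of globally makes the second-nearest distance a local statistic, which is the intuition behind the coarse-to-fine strategy described above.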
Learning to Find Good Correspondences
We develop a deep architecture to learn to find good correspondences for
wide-baseline stereo. Given a set of putative sparse matches and the camera
intrinsics, we train our network in an end-to-end fashion to label the
correspondences as inliers or outliers, while simultaneously using them to
recover the relative pose, as encoded by the essential matrix. Our architecture
is based on a multi-layer perceptron operating on pixel coordinates rather than
directly on the image, and is thus simple and small. We introduce a novel
normalization technique, called Context Normalization, which allows us to
process each data point separately while imbuing it with global information,
and also makes the network invariant to the order of the correspondences. Our
experiments on multiple challenging datasets demonstrate that our method is
able to drastically improve the state of the art with little training data.
Comment: CVPR 2018 (Oral)
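Context Normalization as described above, per-point processing imbued with set-level statistics, can be sketched along these lines. This standalone function is an illustrative reconstruction of the operation, not the authors' implementation:

```python
import numpy as np

def context_normalize(x, eps=1e-6):
    """Normalize each feature channel across one set of correspondences.

    x has shape (N, C): N putative correspondences, C feature channels.
    Subtracting the per-channel mean and dividing by the per-channel
    standard deviation computed over the N points injects global (set-level)
    information into every point, while the operation remains
    permutation-equivariant: reordering the points reorders the output
    identically.
    """
    mean = x.mean(axis=0, keepdims=True)  # statistics over the point set
    std = x.std(axis=0, keepdims=True)
    return (x - mean) / (std + eps)
```

Because the statistics are pooled over the unordered set rather than over spatial neighborhoods, a per-point multi-layer perceptron interleaved with this normalization can classify inliers without ever seeing the image itself, which is what keeps the architecture simple and small.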
Potential of Balloon Photogrammetry for Spatially Continuous Snow Depth Measurements