Regularized pointwise map recovery from functional correspondence
The concept of using functional maps for representing dense correspondences between deformable shapes has proven to be extremely effective in many applications. However, despite the impact of this framework, the problem of recovering the point-to-point correspondence from a given functional map has received surprisingly little interest. In this paper, we analyse the aforementioned problem and propose a novel method for reconstructing pointwise correspondences from a given functional map. The proposed algorithm phrases the matching problem as a regularized alignment problem between the spectral embeddings of the two shapes. As opposed to established methods, our approach does not require the input shapes to be nearly isometric, and it easily extends to recovering the point-to-point correspondence in part-to-whole shape matching problems. Our numerical experiments demonstrate that the proposed approach leads to a significant improvement in accuracy in several challenging cases.
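As a rough illustration of the kind of recovery the abstract describes, the sketch below converts a functional map into a point-to-point map by nearest-neighbour search between spectral embeddings, with an ICP-style orthogonal refinement. This is a generic baseline under assumed conventions (eigenbases phi_X, phi_Y, a map C from coefficients on X to coefficients on Y, and the n_iters parameter), not the paper's regularized algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def pointwise_from_functional_map(C, phi_X, phi_Y, n_iters=10):
    """Recover a map T: Y -> X from a functional map C (k x k).

    phi_X: (n_X, k) eigenbasis of shape X; phi_Y: (n_Y, k) eigenbasis of Y.
    C maps coefficients of functions on X to coefficients of functions on Y.
    Baseline: nearest neighbours between the C-transformed embedding of X
    and the embedding of Y, refined by an ICP-like orthogonal alignment.
    """
    emb_X = phi_X @ C.T          # image of each delta function of X under C
    emb_Y = phi_Y
    R = np.eye(C.shape[0])
    for _ in range(n_iters):
        tree = cKDTree(emb_X @ R.T)
        _, T = tree.query(emb_Y)             # T[i]: point on X matched to point i on Y
        # re-estimate an orthogonal alignment (Procrustes) from the current matches
        U, _, Vt = np.linalg.svd(emb_X[T].T @ emb_Y)
        R = Vt.T @ U.T
    return T
```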
SPLODE: Semi-Probabilistic Point and Line Odometry with Depth Estimation from RGB-D Camera Motion
Active depth cameras suffer from several limitations, which cause incomplete and noisy depth maps and may consequently affect the performance of RGB-D odometry. To address this issue, this paper presents a visual odometry method based on point and line features that leverages both measurements from a depth sensor and depth estimates from camera motion. Depth estimates are generated continuously by a probabilistic depth estimation framework for both types of features to compensate for the lack of depth measurements and inaccurate feature depth associations. The framework explicitly models the uncertainty of triangulating depth from both point and line observations in order to validate and obtain precise estimates. Furthermore, depth measurements are exploited by propagating them through a depth map registration module and by using a frame-to-frame motion estimation method that considers 3D-to-2D and 2D-to-3D reprojection errors independently. Results on RGB-D sequences captured in large indoor and outdoor scenes, where depth sensor limitations are critical, show that the combination of depth measurements and estimates through our approach is able to overcome the absence and inaccuracy of depth measurements.
Comment: IROS 201
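A minimal sketch of one ingredient mentioned above: fusing a sensor depth measurement with a depth estimate triangulated from camera motion, modelled here as independent Gaussians in inverse depth. The inverse-depth parameterization and the variable names are illustrative assumptions, not necessarily the paper's exact model.

```python
def fuse_inverse_depth(rho_sensor, var_sensor, rho_motion, var_motion):
    """Fuse two independent Gaussian inverse-depth estimates (1/m) by
    precision weighting; returns the fused inverse depth and its variance."""
    precision = 1.0 / var_sensor + 1.0 / var_motion
    rho = (rho_sensor / var_sensor + rho_motion / var_motion) / precision
    return rho, 1.0 / precision

# Example: a sensor reading at 4.0 m fused with a motion-triangulated 3.8 m estimate
rho, var = fuse_inverse_depth(1 / 4.0, 0.05 ** 2, 1 / 3.8, 0.02 ** 2)
print("fused depth [m]:", 1.0 / rho, "variance:", var)
```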
Elastic Registration of Geodesic Vascular Graphs
Vascular graphs can embed a number of high-level features, from morphological parameters to functional biomarkers, and represent an invaluable tool for longitudinal and cross-sectional clinical inference. This, however, is only feasible when graphs are co-registered, allowing coherent multiple comparisons. The robust registration of vascular topologies therefore stands as a key enabling technology for group-wise analyses. In this work, we present an end-to-end vascular graph registration approach that aligns networks with non-linear geometries and topological deformations by introducing a novel overconnected geodesic vascular graph formulation, without enforcing any anatomical prior constraint. The 3D elastic graph registration is then performed with state-of-the-art graph matching methods used in computer vision. Promising vascular matching results are obtained using graphs from synthetic and real angiographies. Observations and future designs are discussed towards potential clinical applications.
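For intuition, here is a generic node-matching sketch of the graph-registration step: nodes of two vascular graphs are paired by a linear assignment over a cost that mixes 3D position and a simple topological term. The 'pos' attribute, the degree-based cost, and the weight alpha are illustrative assumptions; the paper relies on dedicated graph matching methods rather than this baseline.

```python
import numpy as np
import networkx as nx
from scipy.optimize import linear_sum_assignment

def match_vascular_graphs(G1, G2, alpha=1.0):
    """Match nodes of two graphs whose nodes carry a 3D 'pos' attribute.

    Cost = Euclidean distance between positions + alpha * |degree difference|.
    Returns a list of (node_in_G1, node_in_G2) pairs from a linear assignment.
    """
    n1, n2 = list(G1.nodes), list(G2.nodes)
    P1 = np.array([G1.nodes[u]["pos"] for u in n1], dtype=float)
    P2 = np.array([G2.nodes[v]["pos"] for v in n2], dtype=float)
    d1 = np.array([G1.degree[u] for u in n1], dtype=float)[:, None]
    d2 = np.array([G2.degree[v] for v in n2], dtype=float)[None, :]
    cost = np.linalg.norm(P1[:, None] - P2[None, :], axis=-1) + alpha * np.abs(d1 - d2)
    rows, cols = linear_sum_assignment(cost)
    return [(n1[i], n2[j]) for i, j in zip(rows, cols)]

# Tiny usage example with two toy branch-point graphs
G1, G2 = nx.Graph(), nx.Graph()
G1.add_node("a", pos=(0, 0, 0)); G1.add_node("b", pos=(1, 0, 0)); G1.add_edge("a", "b")
G2.add_node("x", pos=(0.1, 0, 0)); G2.add_node("y", pos=(1.1, 0, 0)); G2.add_edge("x", "y")
print(match_vascular_graphs(G1, G2))
```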
Semantic Visual Localization
Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.
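As a loose illustration of descriptors that jointly encode geometry and semantics, the sketch below embeds a semantically labelled occupancy volume into a unit-norm descriptor with a small 3D convolutional encoder. The architecture, channel counts, class count, and volume size are assumptions for illustration and do not reproduce the paper's generative model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticVolumeEncoder(nn.Module):
    """Encode a (one-hot) semantic occupancy volume into a unit-norm descriptor."""
    def __init__(self, n_classes=12, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_classes, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, vol):                     # vol: (B, n_classes, D, H, W)
        return F.normalize(self.net(vol), dim=-1)

# Example: compare two random 32^3 local volumes by descriptor dot product
enc = SemanticVolumeEncoder()
a = enc(torch.rand(1, 12, 32, 32, 32))
b = enc(torch.rand(1, 12, 32, 32, 32))
print("similarity:", (a * b).sum().item())
```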
AgriColMap: Aerial-Ground Collaborative 3D Mapping for Precision Farming
The combination of the aerial survey capabilities of Unmanned Aerial Vehicles with the targeted intervention abilities of agricultural Unmanned Ground Vehicles can significantly improve the effectiveness of robotic systems applied to precision agriculture. In this context, building and updating a common map of the field is an essential but challenging task. The maps built using robots of different types show differences in size, resolution, and scale; the associated geolocation data may be inaccurate and biased; and the repetitiveness of both the visual appearance and the geometric structures found within agricultural contexts renders classical map merging techniques ineffective. In this paper, we propose AgriColMap, a novel map registration pipeline that leverages a grid-based multimodal environment representation which includes a vegetation index map and a Digital Surface Model. We cast the data association problem between maps built from UAVs and UGVs as a multimodal, large-displacement dense optical flow estimation. The dominant, coherent flows, selected using a voting scheme, are used as point-to-point correspondences to infer a preliminary non-rigid alignment between the maps. A final refinement is then performed by exploiting only meaningful parts of the registered maps. We evaluate our system using real-world data for three fields with different crop species. The results show that our method outperforms several state-of-the-art map registration and matching techniques by a large margin and has a higher tolerance to large initial misalignments. We release an implementation of the proposed approach along with the acquired datasets with this paper.
Comment: Published in IEEE Robotics and Automation Letters, 201
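To make the flow-voting idea concrete, a small sketch follows: given a dense flow field between the two grid maps (produced by any optical-flow routine), it keeps the cells that agree with the dominant displacement bin and fits a least-squares similarity transform to the surviving correspondences. The function names, the binning and tolerance values, and the rigid similarity fit are illustrative assumptions; the paper instead infers a non-rigid alignment followed by a refinement step.

```python
import numpy as np

def dominant_flow_correspondences(flow, bin_size=5.0, tol=2.0):
    """Keep the cells whose displacement agrees with the dominant flow bin.

    flow: (H, W, 2) per-cell displacement of the UAV map into the UGV map.
    Returns matched (src, dst) 2D point arrays in map-cell coordinates.
    """
    H, W, _ = flow.shape
    v = flow.reshape(-1, 2)
    bins = np.round(v / bin_size).astype(int)            # coarse voting bins
    uniq, counts = np.unique(bins, axis=0, return_counts=True)
    dominant = uniq[counts.argmax()] * bin_size
    keep = np.linalg.norm(v - dominant, axis=1) < tol
    ys, xs = np.divmod(np.flatnonzero(keep), W)
    src = np.stack([xs, ys], axis=1).astype(float)
    return src, src + v[keep]

def fit_similarity(src, dst):
    """Least-squares similarity transform dst ~ s * R @ src + t (Umeyama)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(S.T @ D)
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    s = (sig * [1.0, d]).sum() / (S ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```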