Augmenting Visual Place Recognition with Structural Cues
In this paper, we propose to augment image-based place recognition with
structural cues. Specifically, these structural cues are obtained using
structure-from-motion, such that no additional sensors are needed for place
recognition. This is achieved by augmenting the 2D convolutional neural network
(CNN) typically used for image-based place recognition with a 3D CNN that takes
as input a voxel grid derived from the structure-from-motion point cloud. We
evaluate different methods for fusing the 2D and 3D features and obtain best
performance with global average pooling and simple concatenation. On the Oxford
RobotCar dataset, the resulting descriptor exhibits superior recognition
performance compared to descriptors extracted from only one of the input
modalities, including state-of-the-art image-based descriptors. Especially at
low descriptor dimensionalities, we outperform state-of-the-art descriptors by
up to 90%.
Comment: 8 pages, published in RA-L & IROS 202
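The fusion the abstract describes (global average pooling of each branch, then simple concatenation) can be sketched as follows. This is a minimal illustration under assumed feature-map shapes; the function names and branch dimensions are hypothetical, not taken from the paper.

```python
import numpy as np

def global_average_pool(feature_map):
    # feature_map: channels-first array (C, ...) with arbitrary spatial
    # dimensions -- 2D for the image branch, 3D for the voxel-grid branch.
    return feature_map.reshape(feature_map.shape[0], -1).mean(axis=1)

def fuse_descriptors(feat_2d, feat_3d):
    # Pool each branch to a channel vector, concatenate, L2-normalize.
    d = np.concatenate([global_average_pool(feat_2d),
                        global_average_pool(feat_3d)])
    return d / np.linalg.norm(d)

# Hypothetical branch outputs: a 2D CNN map over the image and a
# 3D CNN map over the voxelized structure-from-motion point cloud.
feat_2d = np.random.rand(256, 16, 16)
feat_3d = np.random.rand(128, 8, 8, 8)
descriptor = fuse_descriptors(feat_2d, feat_3d)  # length 256 + 128 = 384
```

The pooled-then-concatenated vector keeps each modality's contribution separable, which is one reason this simple fusion is easy to compare against single-modality descriptors at matched dimensionality.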
DELTAS: Depth Estimation by Learning Triangulation And densification of Sparse points
Multi-view stereo (MVS) is the golden mean between the accuracy of active
depth sensing and the practicality of monocular depth estimation. Cost volume
based approaches employing 3D convolutional neural networks (CNNs) have
considerably improved the accuracy of MVS systems. However, this accuracy comes
at a high computational cost which impedes practical adoption. Distinct from
cost volume approaches, we propose an efficient depth estimation approach by
first (a) detecting and evaluating descriptors for interest points, then (b)
learning to match and triangulate a small set of interest points, and finally
(c) densifying this sparse set of 3D points using CNNs. An end-to-end network
efficiently performs all three steps within a deep learning framework and is
trained with intermediate 2D image and 3D geometric supervision, along with
depth supervision. Crucially, our first step complements pose estimation using
interest point detection and descriptor learning. We demonstrate
state-of-the-art results on depth estimation with lower compute for different
scene lengths. Furthermore, our method generalizes to newer environments and
the descriptors output by our network compare favorably to strong baselines.
Code is available at https://github.com/magicleap/DELTAS
Comment: ECCV 202
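Step (b) above hinges on triangulating matched interest points from two or more views. A minimal sketch of classical linear (DLT) two-view triangulation, with illustrative camera matrices that are not from the paper:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    # Stack the linear constraints from x = P X in both views and take
    # the null vector of A (smallest singular vector) as the homogeneous
    # 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy two-view setup: shared intrinsics, second camera shifted along
# the x-axis (all values illustrative).
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
```

In DELTAS this geometric step is learned end-to-end rather than performed in closed form, but the closed-form version shows what the network's triangulation module must approximate.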
LiDAR-Based Place Recognition For Autonomous Driving: A Survey
LiDAR-based place recognition (LPR) plays a pivotal role in autonomous
driving, which assists Simultaneous Localization and Mapping (SLAM) systems in
reducing accumulated errors and achieving reliable localization. However,
existing reviews predominantly concentrate on visual place recognition (VPR)
methods. Despite the recent remarkable progress in LPR, to the best of our
knowledge, there is no dedicated systematic review in this area. This paper
bridges the gap by providing a comprehensive review of place recognition
methods employing LiDAR sensors, thus facilitating and encouraging further
research. We commence by delving into the problem formulation of place
recognition, exploring existing challenges, and describing relations to
previous surveys. Subsequently, we conduct an in-depth review of related
research, which offers detailed classifications, strengths and weaknesses, and
architectures. Finally, we summarize existing datasets, commonly used
evaluation metrics, and comprehensive evaluation results from various methods
on public datasets. This paper can serve as a valuable tutorial for newcomers
entering the field of place recognition and for researchers interested in
long-term robot localization. We pledge to maintain an up-to-date project on
our website https://github.com/ShiPC-AI/LPR-Survey.
Comment: 26 pages, 13 figures, 5 table
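Among the evaluation metrics such surveys commonly report is Recall@N: the fraction of queries whose N nearest database descriptors contain at least one true match. A minimal sketch with toy data (all descriptors and ground-truth sets below are illustrative, not from the survey):

```python
import numpy as np

def recall_at_n(query_descs, db_descs, ground_truth, n=1):
    # ground_truth[i] is the set of database indices that are true
    # matches for query i. A query counts as correct if any of its N
    # nearest database descriptors (Euclidean) is a true match.
    hits = 0
    for q, gt in zip(query_descs, ground_truth):
        dists = np.linalg.norm(db_descs - q, axis=1)
        top_n = np.argsort(dists)[:n]
        hits += bool(set(top_n) & gt)
    return hits / len(query_descs)

# Toy data: 4 database places, 2 queries with known correct matches.
db = np.eye(4)
queries = np.array([[0.9, 0.1, 0.0, 0.0],   # nearest to db[0]
                    [0.0, 0.0, 0.2, 0.8]])  # nearest to db[3]
gt = [{0}, {2}]  # second query's true match is db[2], not db[3]
```

Here Recall@1 is 0.5 (the second query's nearest neighbor is a wrong place) while Recall@2 is 1.0, showing why curves over N are reported rather than a single number.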
Parallel Tracking and Mapping for Manipulation Applications with Golem Krang
We implement a simultaneous localization and mapping (SLAM) system and an image semantic segmentation method on a mobile manipulator. The SLAM system works toward navigating among obstacles in unknown environments. The object detection method will be integrated for future manipulation tasks such as grasping. This work will be demonstrated on a real robotic hardware system in the lab.
Semantic Visual Localization
Robust visual localization under a wide range of viewing conditions is a
fundamental problem in computer vision. Handling the difficult cases of this
problem is not only very challenging but also of high practical relevance,
e.g., in the context of life-long localization for augmented reality or
autonomous robots. In this paper, we propose a novel approach based on a joint
3D geometric and semantic understanding of the world, enabling it to succeed
under conditions where previous approaches failed. Our method leverages a novel
generative model for descriptor learning, trained on semantic scene completion
as an auxiliary task. The resulting 3D descriptors are robust to missing
observations by encoding high-level 3D geometric and semantic information.
Experiments on several challenging large-scale localization datasets
demonstrate reliable localization under extreme viewpoint, illumination, and
geometry changes.
A Survey on Global LiDAR Localization
Knowledge of its own pose is key for all mobile robot applications. Thus,
pose estimation is one of the core functionalities of mobile robots. In the
last two decades, LiDAR scanners have become a standard sensor for robot
localization and mapping. This article surveys recent progress and advances in
LiDAR-based global localization. We start with the problem formulation and
explore the application scope. We then present the methodology review covering
various global localization topics, such as maps, descriptor extraction, and
consistency checks. The contents are organized under three themes. The first is
the combination of global place retrieval and local pose estimation. The
second theme is upgrading single-shot measurements to sequential ones for
sequential global localization. The third theme is extending single-robot
global localization to cross-robot localization on multi-robot systems. We end
this survey with a discussion of open challenges and promising directions on
global LiDAR localization.
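The survey's second theme, upgrading single-shot matching to sequential matching, can be sketched by scoring each candidate map position over an aligned window of consecutive descriptors instead of a single frame (in the spirit of sequence-based methods such as SeqSLAM). All data below is synthetic and illustrative:

```python
import numpy as np

def sequence_localize(query_seq, db_descs):
    # Score every database start index by the summed descriptor distance
    # over a window aligned with the query sequence; the best-scoring
    # start index is the estimated revisited place.
    L = len(query_seq)
    scores = [
        sum(np.linalg.norm(query_seq[k] - db_descs[i + k]) for k in range(L))
        for i in range(len(db_descs) - L + 1)
    ]
    return int(np.argmin(scores))

# Toy map of 20 place descriptors; the query revisits places 5..7
# with mild descriptor noise.
rng = np.random.default_rng(0)
db = rng.normal(size=(20, 8))
query = db[5:8] + 0.05 * rng.normal(size=(3, 8))
start = sequence_localize(query, db)
```

Aggregating over a window suppresses single-frame descriptor aliasing, which is the core argument for the sequential-localization theme.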