3D Registration of Aerial and Ground Robots for Disaster Response: An Evaluation of Features, Descriptors, and Transformation Estimation
Global registration of heterogeneous ground and aerial mapping data is a
challenging task. This is especially difficult in disaster response scenarios
when we have no prior information on the environment and cannot assume the
regular order of man-made environments or meaningful semantic cues. In this
work we extensively evaluate different approaches to globally register UGV
generated 3D point-cloud data from LiDAR sensors with UAV generated point-cloud
maps from vision sensors. The approaches are realizations of different
selections for: a) local features: key-points or segments; b) descriptors:
FPFH, SHOT, or ESF; and c) transformation estimations: RANSAC or FGR.
Additionally, we compare the results against standard approaches like applying
ICP after a good prior transformation has been given. The evaluation criteria
include the distance a UGV needs to travel to successfully localize, the
registration error, and the computational cost. In this context, we report our
findings on effectively performing the task on two new Search and Rescue
datasets. Our results can help the community make informed
decisions when registering point-cloud maps from ground robots to those from
aerial robots.
Comment: Awarded Best Paper at the 15th IEEE International Symposium on
Safety, Security, and Rescue Robotics 2017 (SSRR 2017).
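As a sketch of the transformation-estimation step, assuming putative key-point correspondences have already been produced by descriptor matching (FPFH, SHOT, or ESF), a minimal RANSAC-plus-SVD rigid-transform estimator in NumPy might look like the following. All function names and parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_registration(src, dst, iters=500, thresh=0.05, seed=0):
    """RANSAC over putative one-to-one correspondences src[i] <-> dst[i]."""
    rng = np.random.default_rng(seed)
    best_inliers, best_Rt = -1, (np.eye(3), np.zeros(3))
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
        R, t = best_fit_transform(src[idx], dst[idx])
        errors = np.linalg.norm(src @ R.T + t - dst, axis=1)
        n_inliers = int((errors < thresh).sum())
        if n_inliers > best_inliers:                        # keep best model
            best_inliers, best_Rt = n_inliers, (R, t)
    return best_Rt
```

In a real UGV-to-UAV pipeline the correspondences come from matching descriptors across the two heterogeneous point clouds, so the outlier ratio is typically high, which is exactly why a robust estimator such as RANSAC (or FGR) is needed before any ICP refinement.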
MOMA: Visual Mobile Marker Odometry
In this paper, we present a cooperative odometry scheme based on the
detection of mobile markers in line with the idea of cooperative positioning
for multiple robots [1]. To this end, we introduce a simple optimization scheme
that realizes visual mobile marker odometry via accurate fixed marker-based
camera positioning and analyse the characteristics of errors inherent to the
method compared to classical fixed marker-based navigation and visual odometry.
In addition, we provide a specific UAV-UGV configuration that allows the UAV
to move continuously without stopping, as well as a minimal caterpillar-like
configuration that works with a single UGV. Finally, we present a real-world
implementation and evaluation of the proposed UAV-UGV configuration.
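As a toy 2D illustration of the pose chaining involved (frame names and numbers are ours, not the paper's): once the observing camera is localized against a fixed marker, detecting the mobile marker carried by the other robot places that robot in the world frame by composing homogeneous transforms:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D rigid-body transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# Hypothetical measurements (illustrative values only):
T_world_cam  = se2(0.0, 0.0, 0.0)        # camera pose from fixed-marker localization
T_cam_marker = se2(2.0, 1.0, np.pi / 2)  # detected pose of the mobile marker
T_marker_ugv = se2(-0.1, 0.0, 0.0)       # known mounting offset of the marker on the UGV

# Chaining yields the UGV pose in the world frame:
T_world_ugv = T_world_cam @ T_cam_marker @ T_marker_ugv
```

Any error in the fixed-marker camera localization propagates through every chained pose, which is precisely the error characteristic the paper analyses against classical fixed-marker navigation and visual odometry.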
Asynchronous Collaborative Autoscanning with Mode Switching for Multi-Robot Scene Reconstruction
When conducting autonomous scanning for the online reconstruction of unknown
indoor environments, robots have to be competent at exploring scene structure
and reconstructing objects with high quality. Our key observation is that
different tasks demand specialized scanning properties of robots: rapid moving
speed and far vision for global exploration and slow moving speed and narrow
vision for local object reconstruction, which are referred to as two different
scanning modes: explorer and reconstructor, respectively. When requiring
multiple robots to collaborate for efficient exploration and fine-grained
reconstruction, the questions on when to generate and how to assign those tasks
should be carefully answered. Therefore, we propose a novel asynchronous
collaborative autoscanning method with mode switching, which generates two
kinds of scanning tasks with associated scanning modes, i.e., exploration task
with explorer mode and reconstruction task with reconstructor mode, and assigns
them to the robots to execute in an asynchronous collaborative manner, greatly
boosting the scanning efficiency and reconstruction quality. The task assignment
is optimized by solving a modified Multi-Depot Multiple Traveling Salesman
Problem (MDMTSP). Moreover, to further enhance the collaboration and increase
the efficiency, we propose a task-flow model that activates the task generation
and assignment process as soon as any robot finishes all of its tasks,
with no need to wait for the other robots to complete the tasks assigned in the
previous iteration. Extensive experiments have been conducted to show the
importance of each key component of our method and the superiority over
previous methods in scanning efficiency and reconstruction quality.Comment: 13pages, 12 figures, Conference: SIGGRAPH Asia 202
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
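The de-facto standard formulation the survey refers to is maximum a posteriori estimation over a factor graph, which under Gaussian noise reduces to (nonlinear) least squares. A minimal 1-D linear analogue, with our own toy numbers: three odometry factors and one loop-closure factor, solved in a single least-squares step:

```python
import numpy as np

# Toy 1-D pose graph: unknowns x1, x2, x3, with x0 fixed at the origin.
# Odometry measures each step as +1.0; a loop closure measures x3 - x0 = 2.7.
# Each row of J is one factor; MAP estimation under Gaussian noise reduces
# to the linear least-squares problem J x = z.
J = np.array([
    [ 1.0,  0.0,  0.0],   # odometry:     x1 - x0 = 1.0
    [-1.0,  1.0,  0.0],   # odometry:     x2 - x1 = 1.0
    [ 0.0, -1.0,  1.0],   # odometry:     x3 - x2 = 1.0
    [ 0.0,  0.0,  1.0],   # loop closure: x3 - x0 = 2.7
])
z = np.array([1.0, 1.0, 1.0, 2.7])
x, *_ = np.linalg.lstsq(J, z, rcond=None)
# The 0.3 of loop-closure disagreement is spread evenly over the chain:
# x = [0.925, 1.85, 2.775]
```

Real SLAM replaces the linear rows with nonlinear factors over poses and landmarks and iterates Gauss-Newton or Levenberg-Marquardt steps of exactly this shape, which is why sparse least squares sits at the core of modern back-ends.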
Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems
This paper presents Kimera-Multi, the first multi-robot system that (i) is
robust and capable of identifying and rejecting incorrect inter- and intra-robot
loop closures resulting from perceptual aliasing, (ii) is fully distributed and
only relies on local (peer-to-peer) communication to achieve distributed
localization and mapping, and (iii) builds a globally consistent
metric-semantic 3D mesh model of the environment in real-time, where faces of
the mesh are annotated with semantic labels. Kimera-Multi is implemented by a
team of robots equipped with visual-inertial sensors. Each robot builds a local
trajectory estimate and a local mesh using Kimera. When communication is
available, robots initiate a distributed place recognition and robust pose
graph optimization protocol based on a novel distributed graduated
non-convexity algorithm. The proposed protocol allows the robots to improve
their local trajectory estimates by leveraging inter-robot loop closures while
being robust to outliers. Finally, each robot uses its improved trajectory
estimate to correct the local mesh using mesh deformation techniques.
We demonstrate Kimera-Multi in photo-realistic simulations, SLAM benchmarking
datasets, and challenging outdoor datasets collected using ground robots. Both
real and simulated experiments involve long trajectories (e.g., up to 800
meters per robot). The experiments show that Kimera-Multi (i) outperforms the
state of the art in terms of robustness and accuracy, (ii) achieves estimation
errors comparable to a centralized SLAM system while being fully distributed,
(iii) is parsimonious in terms of communication bandwidth, (iv) produces
accurate metric-semantic 3D meshes, and (v) is modular and can also be used for
standard 3D reconstruction (i.e., without semantic labels) or for trajectory
estimation (i.e., without reconstructing a 3D mesh).
Comment: Accepted by IEEE Transactions on Robotics (18 pages, 15 figures).
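The distributed graduated non-convexity (GNC) protocol itself is beyond a short sketch, but its core annealing idea can be shown on a 1-D analogue (our own simplification, not Kimera-Multi's implementation): fit under a nearly convex surrogate of a Geman-McClure robust loss, re-weight the residuals, and gradually tighten the surrogate so that outliers lose influence:

```python
import numpy as np

def gnc_robust_mean(x, c=1.0, n_iters=20):
    """Robust mean via graduated non-convexity with a Geman-McClure loss:
    start from a nearly convex surrogate (large mu), alternate re-weighting
    with weighted least squares, and anneal mu toward the true robust loss.
    Illustrative 1-D analogue of robust pose-graph optimization."""
    mu = 2.0 * np.max((x - x.mean()) ** 2) / c**2 + 1.0  # near-convex start
    est = x.mean()
    for _ in range(n_iters):
        r2 = (x - est) ** 2
        w = (mu * c**2 / (r2 + mu * c**2)) ** 2   # GM weights under surrogate
        est = np.sum(w * x) / np.sum(w)           # weighted least-squares step
        mu = max(1.0, mu / 1.4)                   # anneal toward original loss
    return est
```

In the pose-graph setting the scalars become loop-closure residuals and the weighted mean becomes a weighted pose-graph solve, but the alternation between weight updates and least-squares steps is the same, which is what makes GNC amenable to a distributed implementation.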
Towards Dense Collaborative Mapping using RGBD Sensors
Development of collaborative, perception-driven autonomous systems requires the ability for collaborators to compute a rich, shared representation of the environment, and their place in it, in real time. Using this shared representation, collaborators can communicate geometric, semantic, and dynamic information about the environment across frames of reference to one another. Existing state-of-the-art dense mapping systems provide a good starting point for developing a collaborative mapping system; however, no current system covers collaborative mapping directly. In this paper, we introduce our approach to dense collaborative mapping, offering an introduction to the problem, a discussion of the key challenges involved in developing such a system, and an analysis of preliminary results.
Predicting the Next Best View for 3D Mesh Refinement
3D reconstruction is a core task in many applications such as robot
navigation or site inspection. Finding the best poses from which to capture
part of the scene is one of the most challenging topics, which goes under the
name of Next Best View. Recently, many volumetric methods have been proposed;
they choose the Next Best View by reasoning over a 3D voxelized space and by
finding which pose minimizes the uncertainty encoded in the voxels. Such
methods are effective, but they do not scale well since the underlying
representation requires a huge amount of memory. In this paper we propose a
novel mesh-based
requires a huge amount of memory. In this paper we propose a novel mesh-based
approach which focuses on the worst reconstructed region of the environment
mesh. We define a photo-consistent index to evaluate the 3D mesh accuracy, and
an energy function over the worst regions of the mesh which takes into account
the mutual parallax with respect to the previous cameras, the angle of
incidence of the viewing ray to the surface and the visibility of the region.
We test our approach over a well known dataset and achieve state-of-the-art
results.
Comment: 13 pages, 5 figures, to be published in IAS-1
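As a hedged sketch of how such an energy might be scored per candidate view (our own simplified scoring, which assumes the patch is visible; the paper's actual energy also models visibility and uses the photo-consistent index to pick the worst regions):

```python
import numpy as np

def view_score(cand, prev_cams, patch_center, patch_normal):
    """Score a candidate camera position for refining a poorly reconstructed
    mesh patch: reward parallax w.r.t. previous cameras and a viewing ray
    well aligned with the surface normal.  Simplified illustration only."""
    ray = patch_center - cand
    ray = ray / np.linalg.norm(ray)
    incidence = max(0.0, float(-ray @ patch_normal))  # 1 = head-on view
    angles = []
    for cam in prev_cams:
        prev_ray = patch_center - cam
        prev_ray = prev_ray / np.linalg.norm(prev_ray)
        angles.append(np.arccos(np.clip(float(ray @ prev_ray), -1.0, 1.0)))
    parallax = min(angles)            # angle to the nearest previous ray
    return float(np.sin(min(parallax, np.pi / 2)) * incidence)
```

A candidate coincident with a previous camera contributes no parallax and scores zero, while an offset view that still faces the surface scores higher, matching the intuition that both baseline and incidence angle matter for multi-view refinement.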