An evaluation of 2D SLAM techniques available in Robot Operating System
In this work, a study of several laser-based 2D Simultaneous Localization and Mapping (SLAM) techniques available in the Robot Operating System (ROS) is conducted. All the approaches have been evaluated and compared in 2D simulations and real-world experiments. In order to draw conclusions on the performance of the tested techniques, the experimental results were collected under the same conditions and a generalized performance metric based on the k-nearest-neighbours concept was applied. Moreover, the CPU load of each technique is examined. This work provides insight into the weaknesses and strengths of each solution. Such analysis is fundamental in deciding which solution to adopt according to the properties of the intended final application.
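The abstract does not spell out how the k-nearest-neighbours metric is computed. As a rough illustration only, one common formulation scores an estimated map by the mean distance from each of its points to the nearest ground-truth points; the function and array names below are hypothetical, and SciPy is assumed to be available:

    # Sketch of a kNN-based map quality metric: score an estimated map by the
    # mean distance from each of its occupied points to its k nearest
    # ground-truth points. Both maps are assumed given as (N, 2) point arrays.
    import numpy as np
    from scipy.spatial import cKDTree

    def knn_map_error(map_points, ground_truth_points, k=1):
        """Mean distance from estimated points to their k nearest
        ground-truth points (k=1 reduces to nearest-neighbour error)."""
        tree = cKDTree(ground_truth_points)
        dists, _ = tree.query(map_points, k=k)
        return float(np.mean(dists))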
Benchmarking and Comparing Popular Visual SLAM Algorithms
This paper contains the performance analysis and benchmarking of two popular
visual SLAM Algorithms: RGBD-SLAM and RTABMap. The dataset used for the
analysis is the TUM RGBD Dataset from the Computer Vision Group at TUM. The
dataset selected has a large set of image sequences from a Microsoft Kinect
RGB-D sensor with highly accurate and time-synchronized ground truth poses from
a motion capture system. The test sequences selected depict a variety of
problems and camera motions faced by Simultaneous Localization and Mapping
(SLAM) algorithms for the purpose of testing the robustness of the algorithms
in different situations. The evaluation metrics used for the comparison are
Absolute Trajectory Error (ATE) and Relative Pose Error (RPE). The analysis
involves comparing the Root Mean Square Error (RMSE) of the two metrics and the
processing time for each algorithm. This paper serves as an important aid in
the selection of a SLAM algorithm for different scenes and camera motions. The
analysis helps to reveal the limitations of both SLAM methods. This paper also
points out some underlying flaws in the evaluation metrics used.
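For context, ATE measures the global consistency of the estimated trajectory against ground truth, while RPE measures local drift over a fixed frame offset, and the paper compares the RMSE of both. A minimal sketch of the translational RMSE for each, assuming est and gt are already time-synchronized, aligned (N, 3) position arrays (a full implementation would align SE(3) poses first, e.g. with Horn's method), is:

    # Translational RMSE for ATE and RPE. Assumes time-synchronized, aligned
    # (N, 3) position arrays; full pose (rotation) error is omitted here.
    import numpy as np

    def ate_rmse(est, gt):
        """RMSE of the absolute per-pose position error."""
        err = np.linalg.norm(est - gt, axis=1)
        return float(np.sqrt(np.mean(err ** 2)))

    def rpe_rmse(est, gt, delta=1):
        """RMSE of the relative motion error over a `delta`-frame offset."""
        d_est = est[delta:] - est[:-delta]
        d_gt = gt[delta:] - gt[:-delta]
        err = np.linalg.norm(d_est - d_gt, axis=1)
        return float(np.sqrt(np.mean(err ** 2)))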
Autonomous Robot Navigation with Rich Information Mapping in Nuclear Storage Environments
This paper presents our approach to develop a method for an unmanned ground
vehicle (UGV) to perform inspection tasks in nuclear environments using rich
information maps. To reduce inspectors' exposure to elevated radiation levels,
an autonomous navigation framework for the UGV has been developed to perform
routine inspections such as counting containers, recording their ID tags and
performing gamma measurements on some of them. In order to achieve autonomy, a
rich information map is generated which includes not only the 2D global cost
map consisting of obstacle locations for path planning, but also the location
and orientation information for the objects of interest from the inspector's
perspective. The UGV's autonomy framework utilizes this information to
prioritize which locations to navigate to in order to perform the inspections.
In this paper, we
present our method of generating this rich information map, originally
developed to meet the requirements of the International Atomic Energy Agency
(IAEA) Robotics Challenge. We demonstrate the performance of our method in a
simulated testbed environment containing uranium hexafluoride (UF6) storage
container mock-ups.
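The abstract does not give the map's concrete layout. As an illustrative sketch only, one plausible structure pairs a standard 2D cost map with per-object pose annotations that the autonomy framework can use to prioritize inspection goals; every name below is hypothetical:

    # Hypothetical layout of a "rich information map": a 2D cost map for path
    # planning plus the pose of each object of interest (e.g. a container),
    # so the planner can choose where to navigate for the next inspection.
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class InspectionTarget:
        object_id: str      # e.g. a container ID tag
        x: float            # position in the map frame (metres)
        y: float
        yaw: float          # viewing direction for the inspection (radians)
        inspected: bool = False

    @dataclass
    class RichInformationMap:
        cost_map: np.ndarray        # 2D grid of traversal costs
        resolution: float           # metres per cell
        targets: list = field(default_factory=list)

        def next_target(self):
            """Return the first target not yet inspected, if any."""
            return next((t for t in self.targets if not t.inspected), None)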
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
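To make the event representation concrete: each event is a (timestamp, x, y, polarity) tuple, and one simple (though lossy) way to consume the stream is to accumulate signed polarities over a time window into a frame. A minimal sketch, with all names chosen here for illustration:

    # Basic event-camera output: a stream of (t, x, y, polarity) events, where
    # polarity is the sign of the per-pixel brightness change. Summing signed
    # events per pixel is one simple way to build a frame from the stream.
    from typing import NamedTuple
    import numpy as np

    class Event(NamedTuple):
        t: float        # timestamp in seconds (microsecond resolution)
        x: int          # pixel column
        y: int          # pixel row
        polarity: int   # +1 for a brightness increase, -1 for a decrease

    def accumulate(events, height, width):
        """Sum event polarities per pixel into a signed frame."""
        frame = np.zeros((height, width), dtype=np.int32)
        for e in events:
            frame[e.y, e.x] += e.polarity
        return frame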
Real-time Monocular Object SLAM
We present a real-time object-based SLAM system that leverages the largest
object database to date. Our approach comprises two main components: 1) a
monocular SLAM algorithm that exploits object rigidity constraints to improve
the map and find its real scale, and 2) a novel object recognition algorithm
based on bags of binary words, which provides live detections with a database
of 500 3D objects. The two components work together and benefit each other: the
SLAM algorithm accumulates information from the observations of the objects,
anchors object features to special map landmarks and sets constraints on the
optimization. At the same time, objects partially or fully located within the
map are used as a prior to guide the recognition algorithm, achieving higher
recall. We evaluate our proposal in five real environments, showing
improvements in the accuracy of the map and in efficiency with respect to
other state-of-the-art techniques.
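The bag-of-binary-words step assigns each binary feature descriptor (e.g. ORB/BRIEF) to the nearest visual word under Hamming distance, turning an image into a word histogram that can be matched against the object database. A minimal sketch of the word-assignment step only (real systems use a hierarchical vocabulary tree for speed; the vocabulary here is a hypothetical array of packed uint8 binary words):

    # Bag-of-binary-words quantization: assign each packed binary descriptor
    # to the vocabulary word with the smallest Hamming distance, and count
    # word occurrences to form the image's histogram.
    import numpy as np

    def hamming(a, b):
        """Hamming distance between two packed uint8 binary descriptors."""
        return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

    def quantize(descriptors, vocabulary):
        """Map each descriptor to its nearest word; return a word histogram."""
        hist = np.zeros(len(vocabulary), dtype=np.int32)
        for d in descriptors:
            word = min(range(len(vocabulary)),
                       key=lambda w: hamming(d, vocabulary[w]))
            hist[word] += 1
        return hist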