A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping
Autonomous navigation requires a mapping and localization solution that is both precise and robust. In this context, Simultaneous Localization and Mapping (SLAM) is a very well-suited solution. SLAM is used for many applications, including mobile robotics, self-driving cars, unmanned aerial vehicles, and autonomous underwater vehicles. In these domains, both visual and visual-IMU SLAM are well studied, and improvements are regularly proposed in the literature. However, LiDAR-SLAM techniques seem to have changed relatively little over the last ten or twenty years. Moreover, few research works focus on vision-LiDAR approaches, even though such a fusion would have many advantages. Indeed, hybridized solutions improve SLAM performance, especially with respect to aggressive motion, lack of light, or lack of visual features. This study provides a comprehensive survey on visual-LiDAR SLAM. After a summary of the basic idea of SLAM and its implementation, we give a complete review of the state of the art of SLAM research, focusing on solutions using vision, LiDAR, and a sensor fusion of both modalities.
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
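The de facto standard formulation this survey refers to is the factor-graph maximum a posteriori (MAP) formulation. As a reference point, here it is in generic notation (the symbols are generic, not necessarily the survey's exact ones):

```latex
% MAP estimation of the robot trajectory and map, X, given measurements Z.
% Each measurement z_k is predicted by a model h_k of the subset of
% variables X_k it involves, with information matrix \Omega_k.
\mathcal{X}^{\star}
  = \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
  = \arg\min_{\mathcal{X}} \sum_{k}
    \lVert h_k(\mathcal{X}_k) - z_k \rVert^{2}_{\Omega_k}
```

Under Gaussian noise assumptions the MAP problem reduces to the nonlinear least-squares form on the right, which is what modern SLAM back-ends optimize.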
A Method of Simultaneous Localization and Mapping for Building a 2.5D Map of the Environment Using ROS
The SLAM (simultaneous localization and mapping) method is currently a topical subject of research and development in robotics and computer vision. SLAM is widely applied in various fields, in particular the autonomous navigation of intelligent robots. The method is also used to solve problems in augmented and virtual reality, UAVs and other systems. In recent years, SLAM has made significant advances thanks to the gradual development of its algorithms, the use of novel sensors, and the growth of available computing power. The subject of this research is modern methods of real-time simultaneous localization and mapping. The aim of the work is to simulate the developed algorithm for building maps of the environment and determining the position and orientation of an intelligent robot in space in real time using ROS packages. The objective of the article is to demonstrate the results of combining SLAM methods and to develop new approaches to solving the problems of simultaneous localization and mapping. To achieve these objectives, a combination of laser scanning (2D LRF) and depth imaging (RGB-D) methods is used for simultaneous localization and mapping of an intelligent robot and for building a 2.5D map of the environment. The results obtained are encouraging and demonstrate the promise of combined SLAM methods applied together to ensure accurate real-time simultaneous localization and mapping by intelligent robots. The proposed method makes it possible to account for obstacle heights when building the map of the environment while using less computing power. In conclusion, this approach extends existing technologies without replacing current working solutions, and enables the use of modern methods for comprehensive detection and recognition of the environment through an efficient localization and mapping approach, providing more accurate results with fewer resources.
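As a rough illustration of the 2.5D idea described above, the sketch below collapses 3D depth points into a 2D grid that stores an obstacle height per cell, which is far cheaper than a full 3D map. The function name, grid parameters and max-height rule are my own assumptions, not taken from the article:

```python
# Rough sketch (not the article's code): collapse 3D depth points into a
# 2D grid keeping the maximum obstacle height per cell (a 2.5D map).
import numpy as np

def build_height_map(points, resolution=0.05, size=(200, 200)):
    """points: (N, 3) x, y, z in the robot frame (metres); returns heights."""
    grid = np.zeros(size, dtype=np.float32)               # 0 = free floor
    ix = (points[:, 0] / resolution).astype(int) + size[0] // 2
    iy = (points[:, 1] / resolution).astype(int) + size[1] // 2
    ok = (ix >= 0) & (ix < size[0]) & (iy >= 0) & (iy < size[1])
    np.maximum.at(grid, (ix[ok], iy[ok]), points[ok, 2])  # keep tallest hit
    return grid
```

Planning over such a grid can then treat cells below a height threshold as traversable, which is how a 2.5D map saves computation relative to full 3D occupancy.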
External multi-modal imaging sensor calibration for sensor fusion: A review
Multi-modal data fusion has gained popularity due to its diverse applications, leading to increased demand for external sensor calibration. Despite several proven calibration solutions, none fully satisfies all the evaluation criteria, including accuracy, automation, and robustness. This review therefore aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Target-based calibration and targetless calibration, the two types of feature-based calibration, are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, this review summarizes the crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of providing valuable insights to guide future research. Future research should focus primarily on the capability of online targetless calibration and on systematic multi-modal sensor calibration.
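For context on the target-based case, a minimal sketch follows: given matched 3D target points observed by two sensors, the rigid extrinsics between them can be recovered with the standard Kabsch/Umeyama alignment. The function name and interface are illustrative assumptions, not from the review:

```python
# Minimal target-based extrinsic calibration sketch: recover the rigid
# transform between two sensors from matched 3D target points.
import numpy as np

def extrinsics_from_correspondences(p_a, p_b):
    """p_a, p_b: (N, 3) matched points in sensor A and sensor B frames.
    Returns R, t such that p_b ~= R @ p_a + t."""
    ca, cb = p_a.mean(axis=0), p_b.mean(axis=0)    # centroids
    H = (p_a - ca).T @ (p_b - cb)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # reflection-safe rotation
    t = cb - R @ ca                                # translation A -> B
    return R, t
```

Targetless methods replace the matched target points with correspondences mined from the scene itself, which is why their automation and robustness are harder to guarantee.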
Visual-based SLAM configurations for cooperative multi-UAV systems with a lead agent: an observability-based approach
In this work, the problem of cooperative visual-based SLAM for the class of multi-UAV systems that integrate a lead agent is addressed. In these kinds of systems, a team of aerial robots flying in formation must follow a dynamic lead agent, which can be another aerial robot, a vehicle, or even a human. A fundamental problem that must be addressed for these kinds of systems is the estimation of the states of the aerial robots as well as the state of the lead agent. In this work, the use of a cooperative visual-based SLAM approach is studied in order to solve this problem. Three different system configurations are proposed and investigated by means of an intensive nonlinear observability analysis. In addition, a high-level control scheme is proposed that makes it possible to control the formation of the UAVs with respect to the lead agent. Several theoretical results are obtained, together with an extensive set of computer simulations, which are presented in order to numerically validate the proposal and to show that it performs well under different circumstances (e.g., GPS-challenged environments). That is, the proposed method is able to operate robustly under many conditions, providing a good position estimate of the aerial vehicles and the lead agent.
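Nonlinear observability analyses of this kind are commonly carried out with the Hermann-Krener rank criterion; a generic statement follows (the paper's specific state vectors and measurement models are not reproduced here):

```latex
% For a system \dot{x} = f(x,u), y = h(x), stack the gradients of the
% successive Lie derivatives of h along f; the configuration is locally
% weakly observable if the matrix reaches full column rank n = dim(x).
L_f^0 h = h(x), \qquad
L_f^k h = \frac{\partial L_f^{k-1} h}{\partial x}\, f(x,u), \qquad
\mathcal{O} =
\begin{bmatrix}
  \nabla L_f^0 h \\ \nabla L_f^1 h \\ \nabla L_f^2 h \\ \vdots
\end{bmatrix},
\qquad \operatorname{rank}(\mathcal{O}) = n
```

Comparing the rank (and the unobservable directions) of this matrix across the three proposed configurations is the usual way to decide which sensor/agent arrangement makes the joint state estimable.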
Study and application of motion measurement methods by means of opto-electronic systems
This thesis addresses the problem of localizing a vehicle in unstructured environments using on-board instrumentation that requires no infrastructure modifications. Two widely used opto-electronic systems that allow non-contact measurements have been chosen: a camera and a laser range finder. Particular attention is paid to defining a set of procedures for processing the environment information acquired with these instruments, so as to provide both accuracy and robustness to measurement noise. An important contribution of this work is the development of a robust and reliable data association algorithm, integrated into a graph-based SLAM framework that also takes uncertainty into account, leading to an optimal estimate of the vehicle motion. Moreover, the vehicle can be localized in a generic environment, since the developed global localization solution does not require the identification of landmarks, either natural or artificial. Part of the work is dedicated to a thorough comparative analysis of state-of-the-art scan matching methods in order to choose the best one for the solution pipeline (a representative method is sketched after this abstract). In particular, this investigation has highlighted that a dense scan matching approach can ensure good performance in many typical environments. Several experiments in different environments, including large-scale ones, demonstrate the effectiveness of the developed global localization system. While the laser range data have been exploited for global localization, a robust visual odometry has also been investigated. The results suggest that the use of a camera can overcome situations in which the solution achieved by the laser scanner has low accuracy. In particular, the global localization framework can also be applied to the camera sensor, in order to fuse two complementary instruments and thus obtain a more reliable localization system. The algorithms have been tested in 2D indoor environments; nevertheless, they are expected to be well suited to 3D and outdoor settings as well.
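As a representative baseline of the kind of scan matching compared in the thesis, here is a toy point-to-point ICP pass for 2D range scans. It is a generic sketch under my own assumptions, not the thesis's actual pipeline:

```python
# Toy point-to-point ICP for 2D range scans: repeatedly match each scan
# point to its nearest reference point and solve the best rigid motion.
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=20):
    """Align scan `src` (N, 2) to reference `dst` (M, 2); returns R, t."""
    cur = src.copy()
    tree = cKDTree(dst)                            # fast nearest-neighbour lookup
    R_tot, t_tot = np.eye(2), np.zeros(2)
    for _ in range(iters):
        _, idx = tree.query(cur)                   # current correspondences
        m = dst[idx]
        cs, cm = cur.mean(axis=0), m.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - cs).T @ (m - cm))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cm - R @ cs
        cur = cur @ R.T + t                        # move scan towards reference
        R_tot, t_tot = R @ R_tot, R @ t_tot + t    # accumulate total transform
    return R_tot, t_tot
```

Dense approaches of the kind favoured in the thesis use all scan points rather than extracted features, which is what makes them robust in environments without distinctive landmarks.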
Simultaneous localization and mapping for inspection robots in water and sewer pipe networks: a review
At present, water and sewer pipe networks are predominantly inspected manually. In the near future, smart cities will perform intelligent autonomous monitoring of buried pipe networks using teams of small robots. These robots, equipped with all necessary computational facilities and sensors (optical, acoustic, inertial, thermal, pressure and others), will be able to inspect pipes whilst navigating, self-localising and communicating information about the pipe condition and faults, such as leaks or blockages, to human operators for monitoring and decision support. The predominantly manual inspection of pipe networks will be replaced with teams of autonomous inspection robots that can operate for long periods of time over a large spatial scale. Reliable autonomous navigation and fault reporting at this scale require effective localization and mapping, that is, the estimation of the robot's position and of its surrounding environment. This survey presents an overview of state-of-the-art work on robot simultaneous localization and mapping (SLAM), with a focus on water and sewer pipe networks. It considers various aspects of the SLAM problem in pipes, from the motivation and the water industry's requirements to modern SLAM methods, map types and sensors suited to pipes. Future challenges, such as robustness for long-term robot operation in pipes, are discussed, including how prior knowledge, e.g. from geographic information systems (GIS), can be used to build map estimates and improve multi-robot SLAM in the pipe environment.
Autonomous navigation for guide following in crowded indoor environments
The requirements for assisted living are rapidly changing as the number of elderly patients over the age of 60 continues to increase. This rise places a high level of stress on nurse practitioners, who must care for more patients than they can manage. As this trend is expected to continue, new technology will be required to help care for patients. Mobile robots present an opportunity to help alleviate the stress on nurse practitioners by monitoring elderly patients and performing remedial tasks for them. In order to produce mobile robots with the ability to perform these tasks, however, many challenges must be overcome.
The hospital environment requires a high level of safety to prevent patient injury. Any facility that uses mobile robots must therefore be able to ensure that no harm will come to patients whilst in a care environment. This requires the robot to build a high level of understanding about the environment and the people in close proximity to it. Hitherto, most mobile robots have used vision-based sensors or 2D laser range finders. 3D time-of-flight sensors have recently been introduced and provide dense 3D point clouds of the environment at real-time frame rates. This gives mobile robots previously unavailable dense information in real time. In this thesis, I investigate the use of time-of-flight cameras for mobile robot navigation in crowded environments. A unified framework that allows the robot to follow a guide through an indoor environment safely and efficiently is presented. Each component of the framework is analyzed in detail, with real-world scenarios illustrating its practical use.
Time-of-flight cameras are relatively new sensors and therefore have inherent problems that must be overcome to obtain consistent and accurate data. In this thesis, I propose a novel and practical probabilistic framework that overcomes many of these problems. The framework fuses multiple depth maps with color information, forming a reliable and consistent view of the world. For the robot to interact with the environment, contextual information is required. To this end, I propose a region-growing segmentation algorithm that groups points based on surface characteristics, namely surface normal and surface curvature (a schematic version is sketched below). The segmentation process creates a distinct set of surfaces; however, only a limited amount of contextual information is available to allow for interaction. Therefore, a novel classifier using spherical harmonics is proposed to differentiate people from all other objects.
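A schematic region-growing pass in the spirit described above might look as follows: segments grow outward from low-curvature seeds while neighbouring normals stay nearly parallel and curvature stays low. The thresholds and neighbour structure are illustrative guesses, not the thesis's actual parameters:

```python
# Schematic region growing over a point cloud: grow a segment from a
# seed while adjacent normals stay nearly parallel and curvature is low.
import numpy as np
from collections import deque

def region_grow(points, normals, curvature, neighbours,
                angle_thresh=np.deg2rad(10.0), curv_thresh=0.05):
    """neighbours[i]: indices of points adjacent to point i."""
    labels = -np.ones(len(points), dtype=int)      # -1 = unassigned
    seg = 0
    for seed in np.argsort(curvature):             # flattest points first
        if labels[seed] != -1:
            continue
        queue = deque([seed])
        labels[seed] = seg
        while queue:
            i = queue.popleft()
            for j in neighbours[i]:
                if labels[j] != -1:
                    continue
                angle = np.arccos(np.clip(abs(normals[i] @ normals[j]), 0.0, 1.0))
                if angle < angle_thresh and curvature[j] < curv_thresh:
                    labels[j] = seg                # same smooth surface
                    queue.append(j)
        seg += 1
    return labels
```

Seeding from the flattest points keeps large planar surfaces (floors, walls) coherent, leaving the curved residual segments as candidates for the person classifier.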
The added ability to identify people allows the robot to find potential candidates to follow. However, for safe navigation, the robot must continuously track all visible objects to obtain position and velocity information. A multi-object tracking system is investigated to track visible objects reliably using multiple cues, shape and color. The tracking system allows the robot to react to the dynamic nature of people by building an estimate of the motion flow. This flow provides the robot with the necessary information to determine where, and at what speeds, it is safe to drive. In addition, a novel search strategy is proposed to allow the robot to recover a guide who has left the field of view. To achieve this, a search map is constructed, with areas of the environment ranked according to how likely they are to reveal the guide's true location (a toy version of this ranking is sketched below). The robot can then approach the most likely search area to recover the guide. Finally, all the components presented are combined to follow a guide through an indoor environment. The results achieved demonstrate the efficacy of the proposed components.
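As a toy illustration of the search-map idea, the sketch below ranks candidate areas by how consistent their distance from the last sighting is with how far the guide could plausibly have travelled. The scoring model is invented for illustration and is not the thesis's actual formulation:

```python
# Toy search-map ranking: score candidate areas by how well their
# distance from the last sighting matches the guide's reachable range.
import numpy as np

def rank_search_areas(areas, last_seen, guide_speed, elapsed):
    """areas: (K, 2) candidate positions; last_seen: (2,) last guide pose."""
    reachable = guide_speed * elapsed              # plausible travel distance
    dist = np.linalg.norm(areas - last_seen, axis=1)
    sigma = 0.5 * reachable + 1e-6                 # tolerance around the ring
    score = np.exp(-0.5 * ((dist - reachable) / sigma) ** 2)
    return areas[np.argsort(-score)]               # best candidates first
```

The robot would then drive to the top-ranked area, re-run its person classifier, and fall back to the next candidate if the guide is not found.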