Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper serves
simultaneously as a position paper and as a tutorial for SLAM users. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
From Monocular SLAM to Autonomous Drone Exploration
Micro aerial vehicles (MAVs) are strongly limited in their payload and power
capacity. In order to implement autonomous navigation, algorithms are therefore
desirable that use sensory equipment that is as small, low-weight, and
low-power consuming as possible. In this paper, we propose a method for
autonomous MAV navigation and exploration using a low-cost consumer-grade
quadrocopter equipped with a monocular camera. Our vision-based navigation
system builds on LSD-SLAM which estimates the MAV trajectory and a semi-dense
reconstruction of the environment in real-time. Since LSD-SLAM only determines
depth at high gradient pixels, texture-less areas are not directly observed so
that previous exploration methods that assume dense map information cannot
directly be applied. We propose an obstacle mapping and exploration approach
that takes the properties of our semi-dense monocular SLAM system into account.
In experiments, we demonstrate our vision-based autonomous navigation and
exploration system with a Parrot Bebop MAV.
Scan matching by cross-correlation and differential evolution
Scan matching is an important task, solved in the context of many high-level problems including pose estimation, indoor localization, simultaneous localization and mapping, and others. Methods that are accurate and adaptive and at the same time computationally efficient are required to enable location-based services in autonomous mobile devices. Such devices usually have a wide range of high-resolution sensors but only limited processing power and a constrained energy supply. This work introduces a novel high-level scan matching strategy that uses a combination of two advanced algorithms recently used in this field: cross-correlation and differential evolution. The cross-correlation between two laser range scans is used as an efficient measure of scan alignment, and the differential evolution algorithm is used to search for the parameters of a transformation that aligns the scans. The proposed method was experimentally validated and showed a good ability to match laser range scans taken shortly after each other and an excellent ability to match laser range scans taken with longer time intervals between them.
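The combination described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: it rasterizes a reference scan into a smoothed occupancy grid, scores a candidate rigid transform (tx, ty, theta) by cross-correlating the transformed scan against that grid, and lets SciPy's differential evolution search the transform parameters. Grid size, resolution, blur, and bounds are all assumed values.

```python
# Hedged sketch of cross-correlation scan matching driven by
# differential evolution (illustrative parameters throughout).
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import differential_evolution

def rasterize(points, res=0.1, size=64):
    """Accumulate 2-D points into a binary occupancy grid centred on the origin."""
    grid = np.zeros((size, size))
    idx = np.floor(points / res).astype(int) + size // 2
    ok = (idx >= 0).all(axis=1) & (idx < size).all(axis=1)
    grid[idx[ok, 0], idx[ok, 1]] = 1.0
    return grid

def transform(points, tx, ty, theta):
    """Apply a 2-D rigid transform (rotation theta, translation tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([tx, ty])

def match(ref_scan, new_scan):
    # Blur the reference grid so the correlation score varies smoothly and
    # the global optimiser has a gradient-like landscape to follow.
    ref_grid = gaussian_filter(rasterize(ref_scan), sigma=3)

    def neg_correlation(p):
        moved = rasterize(transform(new_scan, *p))
        return -np.sum(ref_grid * moved)  # DE minimises, so negate the score

    bounds = [(-1.0, 1.0), (-1.0, 1.0), (-np.pi / 4, np.pi / 4)]
    result = differential_evolution(neg_correlation, bounds, seed=0, maxiter=200)
    return result.x  # estimated (tx, ty, theta)

# Toy check: matching a scan against a translated copy of itself should
# recover (approximately) the inverse of the applied offset.
rng = np.random.default_rng(0)
scan = rng.uniform(-2.0, 2.0, (200, 2))
tx, ty, theta = match(scan, transform(scan, 0.3, -0.2, 0.0))
```

The smoothing step is a design choice for the sketch: a raw binary grid makes the correlation a needle-like function that a population-based optimiser can miss, whereas the blurred grid widens the basin of attraction.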
Augmented indoor hybrid maps using catadioptric vision
This Master's thesis presents a new method for building semantic maps from sequences of omnidirectional images. The goal is to design the top level of a hierarchical map, a semantic map or augmented topological map, by exploiting and adapting this type of camera. The image sequence is segmented by distinguishing between Places and Transitions, with special emphasis on detecting the Transitions, since they contribute very useful and important information to the map. Places are further classified into corridors and rooms of different types, while Transitions are divided into doors, jambs, stairs, and elevators, the main types of Transitions that appear in indoor settings. Only global image descriptors, specifically Gist, are used to segment the space into these types of areas. The great advantage of such descriptors is their greater efficiency and compactness compared with local descriptors. In addition, a probabilistic model, a hidden Markov model (HMM), is used to maintain the spatio-temporal consistency of the image sequence. Despite the simplicity of the method, the results show that it is able to segment the image sequence into clusters that are meaningful to people. All experiments were carried out using our new data set of omnidirectional images, captured with a helmet-mounted camera, so the sequence follows the motion of a person moving through a building. The data set is publicly available on the Internet so that it can be used in other research.
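The HMM smoothing step described above can be sketched with a small Viterbi decoder. This is a hedged illustration, not the thesis's actual model: the label set, emission scores, and transition prior below are invented for the example, and Gist extraction plus per-frame classification are assumed to have already produced class probabilities.

```python
# Hedged sketch: smooth noisy per-image place labels with an HMM
# (illustrative states and probabilities, not the thesis's values).
import numpy as np

STATES = ["corridor", "room", "door"]  # illustrative label set

def viterbi(log_emissions, log_trans, log_prior):
    """Most likely state sequence given per-frame log-likelihoods."""
    T, S = log_emissions.shape
    dp = np.zeros((T, S))
    back = np.zeros((T, S), dtype=int)
    dp[0] = log_prior + log_emissions[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans  # (from_state, to_state)
        back[t] = np.argmax(scores, axis=0)
        dp[t] = scores[back[t], np.arange(S)] + log_emissions[t]
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Sticky transitions: staying in the same place is much more likely than
# switching, which suppresses one-frame classifier glitches.
stay, switch = 0.9, 0.05
trans = np.full((3, 3), switch)
np.fill_diagonal(trans, stay)

# Noisy per-frame classifier output: mostly "corridor", one glitch.
probs = np.array([[0.8, 0.1, 0.1],
                  [0.7, 0.2, 0.1],
                  [0.2, 0.1, 0.7],   # spurious "door" detection
                  [0.8, 0.1, 0.1],
                  [0.7, 0.2, 0.1]])
labels = viterbi(np.log(probs), np.log(trans), np.log(np.full(3, 1 / 3)))
print([STATES[i] for i in labels])
# prints ['corridor', 'corridor', 'corridor', 'corridor', 'corridor']
```

The point of the example is the temporal prior: the isolated "door" frame is outvoted by the sticky transition model, which is exactly the spatio-temporal consistency the abstract attributes to the HMM.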
Autonomous Navigation in Complex Indoor and Outdoor Environments with Micro Aerial Vehicles
Micro aerial vehicles (MAVs) are ideal platforms for surveillance and search and rescue in confined indoor and outdoor environments due to their small size, superior mobility, and hover capability. In such missions, it is essential that the MAV is capable of autonomous flight to minimize operator workload. Despite recent successes in commercialization of GPS-based autonomous MAVs, autonomous navigation in complex and possibly GPS-denied environments gives rise to challenging engineering problems that require an integrated approach to perception, estimation, planning, control, and high-level situational awareness. Among these, state estimation is the first and most critical component for autonomous flight, especially because of the inherently fast dynamics of MAVs and the possibly unknown environmental conditions. In this thesis, we present methodologies and system designs, with a focus on state estimation, that enable a light-weight off-the-shelf quadrotor MAV to autonomously navigate complex unknown indoor and outdoor environments using only onboard sensing and computation. We start by developing laser and vision-based state estimation methodologies for indoor autonomous flight. We then investigate fusion from heterogeneous sensors to improve robustness and enable operations in complex indoor and outdoor environments. We further propose estimation algorithms for on-the-fly initialization and online failure recovery. Finally, we present planning, control, and environment coverage strategies for integrated high-level autonomy behaviors. Extensive online experimental results are presented throughout the thesis. We conclude by proposing future research opportunities.
Sensor fusion for flexible human-portable building-scale mapping
This paper describes a system enabling rapid multi-floor indoor map building using a body-worn sensor system fusing information from RGB-D cameras, LIDAR, inertial, and barometric sensors. Our work is motivated by rapid response missions by emergency personnel, in which the capability for one or more people to rapidly map a complex indoor environment is essential for public safety. Human-portable mapping raises a number of challenges not encountered in typical robotic mapping applications including complex 6-DOF motion and the traversal of challenging trajectories including stairs or elevators. Our system achieves robust performance in these situations by exploiting state-of-the-art techniques for robust pose graph optimization and loop closure detection. It achieves real-time performance in indoor environments of moderate scale. Experimental results are demonstrated for human-portable mapping of several floors of a university building, demonstrating the system's ability to handle motion up and down stairs and to organize initially disconnected sets of submaps in a complex environment.
Funding: Lincoln Laboratory; United States Air Force (Contract FA8721-05-C-0002); United States Office of Naval Research (Grants N00014-10-1-0936, N00014-11-1-0688, N00014-12-10020)
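The pose graph optimization the abstract relies on can be illustrated in one dimension. This is a hedged toy, not the paper's estimator: odometry edges chain consecutive poses, a loop-closure edge ties the last pose back to the first, and linear least squares spreads the accumulated drift over the whole chain. All measurements and weights below are invented for the example.

```python
# Hedged 1-D pose graph sketch: least squares distributes the drift
# revealed by a loop closure across all odometry edges.
import numpy as np

# Edges: (i, j, measured displacement x_j - x_i, weight)
odometry = [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (2, 3, 1.0, 1.0)]
loop_closure = [(3, 0, -2.7, 1.0)]  # chain says 3.0, closure says 2.7

n = 4
A, b = [], []
for i, j, meas, w in odometry + loop_closure:
    row = np.zeros(n)
    row[j], row[i] = w, -w       # residual: w * (x_j - x_i - meas)
    A.append(row)
    b.append(w * meas)

# Anchor the first pose at 0 to remove the gauge freedom.
anchor = np.zeros(n)
anchor[0] = 100.0
A.append(anchor)
b.append(0.0)

x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(np.round(x, 3))  # approximately [0, 0.925, 1.85, 2.775]
```

Each odometry step shrinks from 1.0 to 0.925 so that the chain meets the loop-closure measurement of 2.7; the real system does the same thing over 6-DOF poses with a nonlinear solver, but the drift-spreading principle is identical.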
Challenges and solutions for autonomous ground robot scene understanding and navigation in unstructured outdoor environments: A review
The capabilities of autonomous mobile robotic systems have been steadily improving due to recent advancements in computer science, engineering, and related disciplines such as cognitive science. In controlled environments, robots have achieved relatively high levels of autonomy. In more unstructured environments, however, the development of fully autonomous mobile robots remains challenging due to the complexity of understanding these environments. Many autonomous mobile robots use classical, learning-based, or hybrid approaches for navigation. More recent learning-based methods may replace the complete navigation pipeline or selected stages of the classical approach. For effective deployment, autonomous robots must understand their external environments at a sophisticated level according to their intended applications. Therefore, in addition to robot perception, scene analysis and higher-level scene understanding (e.g., traversable/non-traversable, rough or smooth terrain, etc.) are required for autonomous robot navigation in unstructured outdoor environments. This paper provides a comprehensive review and critical analysis of these methods in the context of their applications to the problems of robot perception and scene understanding in unstructured environments and the related problems of localisation, environment mapping, and path planning. State-of-the-art sensor fusion methods and multimodal scene understanding approaches are also discussed and evaluated within this context. The paper concludes with an in-depth discussion of the current state of autonomous ground robot navigation in unstructured outdoor environments and the most promising research directions for overcoming the remaining challenges.