Challenges and solutions for autonomous ground robot scene understanding and navigation in unstructured outdoor environments: A review
The capabilities of autonomous mobile robotic systems have been steadily improving due to recent advancements in computer science, engineering, and related disciplines such as cognitive science. In controlled environments, robots have achieved relatively high levels of autonomy. In more unstructured environments, however, the development of fully autonomous mobile robots remains challenging due to the complexity of understanding these environments. Many autonomous mobile robots use classical, learning-based, or hybrid approaches for navigation. More recent learning-based methods may replace the complete navigation pipeline or selected stages of the classical approach. For effective deployment, autonomous robots must understand their external environments at a sophisticated level according to their intended applications. Therefore, in addition to robot perception, scene analysis and higher-level scene understanding (e.g., traversable/non-traversable, rough or smooth terrain, etc.) are required for autonomous robot navigation in unstructured outdoor environments. This paper provides a comprehensive review and critical analysis of these methods in the context of their applications to the problems of robot perception and scene understanding in unstructured environments and the related problems of localisation, environment mapping, and path planning. State-of-the-art sensor fusion methods and multimodal scene understanding approaches are also discussed and evaluated within this context. The paper concludes with an in-depth discussion of the current state of the autonomous ground robot navigation challenge in unstructured outdoor environments and the most promising future research directions for overcoming these challenges.
A SPATIAL MEASUREMENT AND RECOGNITION SYSTEM USING AUTONOMOUS MOBILE ROBOT
In this paper, an autonomous mobile robot (AMR) is designed to determine the lateral dimensions of an arbitrary enclosed space and to predict its area and shape. The robot operates in two modes: navigation and measurement. It uses an ultrasonic sensor to guide itself around obstacles in navigation mode and to calculate the area in measurement mode by determining the x-y dimensions. Communication with the robot is achieved by means of a Bluetooth connection to an Android mobile phone. Information extracted from the measurements is found to be useful in tracking the path of the autonomous mobile robot.
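The area-from-dimensions step described above can be sketched with the shoelace formula, assuming the robot records an ordered list of (x, y) boundary points while tracing the perimeter. The point list and sensor model below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: estimate the area of an enclosed space from ordered
# (x, y) boundary points recorded by an AMR tracing the perimeter.

def shoelace_area(points):
    """Area of a simple polygon given its ordered (x, y) vertices (metres)."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# A 4 m x 3 m rectangular room traced counter-clockwise.
room = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(shoelace_area(room))  # 12.0
```

The same routine handles non-rectangular enclosures, which is why it is a common choice when the measured outline is not axis-aligned.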
Design and Development of an Inspection Robotic System for Indoor Applications
The inspection and monitoring of industrial sites, structures, and infrastructure are important for their sustainability and further maintenance. Although these tasks are repetitive and time-consuming, and some of these environments may be characterized by dust, humidity, or an absence of natural light, the classical approach relies heavily on human labour. Automatic or robotic solutions can be considered useful tools for inspection because they can be effective in exploring dangerous or inaccessible sites, at relatively low cost, while reducing the time required for the survey. The main objective of this paper is the development of a paradigmatic system called the Inspection Robotic System (IRS), to demonstrate the feasibility of mechatronic solutions for the inspection of industrial sites. Such systems will be exploited in the form of a tool kit that is flexible and installable on a mobile system, to be used for inspection and monitoring, potentially introducing high efficiency, quality, and repeatability in the related sector. The interoperability of sensors with wireless communication may form a smart-sensor tool kit and a smart sensor network with powerful functions that can be used effectively for inspection purposes. Moreover, it may constitute a solution for a broad range of scenarios, ranging from industrial sites and brownfields to historical sites or sites that are dangerous or difficult for operators to access. First experimental tests are reported to show the engineering feasibility of the system and the interoperability of the mobile hybrid robot equipped with sensors that allow real-time multiple acquisition and storage.
Path planning algorithms for autonomous navigation of a non-holonomic robot in unstructured environments
Path planning is a crucial aspect of autonomous robot navigation, enabling robots to navigate efficiently and safely through complex environments. This thesis focuses on autonomous navigation for robots in dynamic and uncertain environments. In particular, the project aims to analyze the localization and path planning problems. A fundamental review of the existing literature on path planning algorithms has been carried out. Various factors affecting path planning, such as sensor data fusion, map representation, and motion constraints, are also analyzed. Thanks to the collaboration with E80 Group S.p.A., the project has been developed using ROS (Robot Operating System) on a Clearpath Dingo-O, an indoor mobile robot. To address the challenges posed by unstructured and dynamic environments, ROS follows a combined approach using a global planner and a local planner. The global planner generates a high-level path considering the overall environment, while the local planner handles real-time adjustments to avoid moving obstacles and optimize the trajectory. This thesis describes the role of the global planner in a ROS framework. Performance benchmarking of traditional algorithms such as Dijkstra and A*, as well as other techniques, is fundamental to understanding the limits of these methods. Finally, the Hybrid A* algorithm is introduced as a promising approach to the issues that unstructured environments pose for the autonomous navigation of a non-holonomic robot. The core concepts and implementation details of the algorithm are discussed, emphasizing its ability to efficiently explore continuous state spaces and generate drivable paths. The effectiveness of the proposed path planning algorithms is evaluated through extensive simulations and real-world experiments using the mobile platform. Performance metrics such as path length, execution time, and collision avoidance are analyzed to assess the efficiency and reliability of the algorithms.
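The global-planner benchmarking mentioned above rests on classical graph search. As a minimal sketch (not the thesis's ROS implementation), A* on a 4-connected occupancy grid looks like this; the grid, costs, and heuristic are illustrative assumptions.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid (0 = free, 1 = blocked)."""
    rows, cols = len(grid), len(grid[0])

    def h(p):
        # Manhattan distance: admissible on a 4-connected grid with unit costs.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)  # 6 steps around the obstacle
```

Dijkstra is the special case with a zero heuristic; Hybrid A* extends this idea to continuous (x, y, heading) states so the resulting paths respect non-holonomic constraints.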
Perception system and functions for autonomous navigation in a natural environment
This paper presents the approach, algorithms, and processes we developed for the perception system of a cross-country autonomous robot. After a presentation of the tele-programming context we favor for intervention robots, we introduce an adaptive navigation approach well suited to the characteristics of complex natural environments. This approach led us to develop a heterogeneous perception system that manages several different terrain representations. The perception functionalities required during navigation are listed, along with the corresponding representations we consider. The main perception processes we developed are presented. They are integrated within an on-board control architecture we developed. First results of an ambitious experiment currently underway at LAAS are then presented.
Learning Ground Traversability from Simulations
Mobile ground robots operating on unstructured terrain must predict which areas of the environment they are able to pass in order to plan feasible paths. We address traversability estimation as a heightmap classification problem: we build a convolutional neural network that, given an image representing the heightmap of a terrain patch, predicts whether the robot will be able to traverse such a patch from left to right. The classifier is trained for a specific robot model (wheeled, tracked, legged, snake-like) using simulation data on procedurally generated training terrains; the trained classifier can be applied to unseen large heightmaps to yield oriented traversability maps, and then plan traversable paths. We extensively evaluate the approach in simulation on six real-world elevation datasets, and run a real-robot validation in one indoor and one outdoor environment.
Webpage: http://romarcg.xyz/traversability_estimation
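The paper's learned CNN is not reproduced here; as a minimal stand-in, the sketch below shows the same input/output contract (heightmap patch in, left-to-right traversability label out) using a simple slope threshold. The cell size and slope limit are illustrative assumptions, not values from the paper.

```python
# Hypothetical baseline for heightmap traversability labelling: a patch is
# traversable left-to-right if no adjacent-cell step along that direction
# exceeds the slope a (hypothetical) robot can climb. A trained classifier
# would replace this rule with a learned decision.

def traversable_left_to_right(patch, cell_size=0.1, max_slope=0.5):
    """patch: rows of heights in metres; cell_size in metres; max_slope = rise/run."""
    for row in patch:
        for a, b in zip(row, row[1:]):
            if abs(b - a) / cell_size > max_slope:
                return False
    return True

flat  = [[0.00, 0.01, 0.02], [0.00, 0.00, 0.01]]
ledge = [[0.00, 0.00, 0.30], [0.00, 0.00, 0.30]]
print(traversable_left_to_right(flat))   # True
print(traversable_left_to_right(ledge))  # False: 0.30 m step over a 0.1 m cell
```

Because the label is defined for one traversal direction, sliding such a classifier over a large heightmap at several rotations yields the oriented traversability maps the abstract describes.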
Reinforcement and Curriculum Learning for Off-Road Navigation of an UGV with a 3D LiDAR
This paper presents the use of deep Reinforcement Learning (RL) for the autonomous navigation of an Unmanned Ground Vehicle (UGV) with an onboard three-dimensional (3D) Light Detection and Ranging (LiDAR) sensor in off-road environments. For training, both the robotic simulator Gazebo and the Curriculum Learning paradigm are applied. Furthermore, an Actor–Critic Neural Network (NN) scheme is chosen with a suitable state and a custom reward function. To employ the 3D LiDAR data as part of the input state of the NNs, a virtual two-dimensional (2D) traversability scanner is developed. The resulting Actor NN has been successfully tested in both real and simulated experiments and favorably compared with a previous reactive navigation approach on the same UGV.
Partial funding for open access charge: Universidad de Málaga.