Attention and Anticipation in Fast Visual-Inertial Navigation
We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to
estimate its state using an on-board camera and an inertial sensor, without any
prior knowledge of the external environment. We consider the case in which the
robot can allocate limited resources to VIN, due to tight computational
constraints. Therefore, we answer the following question: under limited
resources, what are the most relevant visual cues to maximize the performance
of visual-inertial navigation? Our approach has four key ingredients. First, it
is task-driven, in that the selection of the visual cues is guided by a metric
quantifying the VIN performance. Second, it exploits the notion of
anticipation, since it uses a simplified model for forward-simulation of robot
dynamics, predicting the utility of a set of visual cues over a future time
horizon. Third, it is efficient and easy to implement, since it leads to a
greedy algorithm for the selection of the most relevant visual cues. Fourth, it
provides formal performance guarantees: we leverage submodularity to prove that
the greedy selection cannot be far from the optimal (combinatorial) selection.
Simulations and real experiments on agile drones show that our approach ensures
state-of-the-art VIN performance while maintaining a lean processing time. In
the easy scenarios, our approach outperforms appearance-based feature selection
in terms of localization errors. In the most challenging scenarios, it enables
accurate visual-inertial navigation while appearance-based feature selection
fails to track the robot's motion during aggressive maneuvers.

Comment: 20 pages, 7 figures, 2 tables
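The greedy, submodularity-based selection described above can be sketched as follows. The log-det utility and the toy landmark information matrices are illustrative stand-ins for the paper's actual VIN performance metric; `greedy_select` is a hypothetical name, not the paper's code.

```python
import numpy as np

def greedy_select(features, k, score):
    """Greedily pick k features maximizing a set function `score`.

    For a monotone submodular score, the greedy value is within a
    (1 - 1/e) factor of the optimal combinatorial selection.
    """
    selected, remaining = [], list(features)
    for _ in range(k):
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy utility: log-det of the accumulated landmark information,
# a common submodular proxy for localization performance.
rng = np.random.default_rng(0)
infos = [np.outer(v, v) for v in rng.normal(size=(8, 3))]

def logdet_score(subset):
    return float(np.linalg.slogdet(np.eye(3) + sum(infos[i] for i in subset))[1])

picked = greedy_select(range(8), 3, logdet_score)
```

Each round evaluates the marginal gain of every remaining feature, which is why the method stays cheap enough for tight computational budgets.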
OASIS: Optimal Arrangements for Sensing in SLAM
The number and arrangement of sensors on an autonomous mobile robot
dramatically influence its perception capabilities. Ensuring that sensors are
mounted in a manner that enables accurate detection, localization, and mapping
is essential for the success of downstream control tasks. However, when
designing a new robotic platform, researchers and practitioners alike usually
mimic standard configurations or maximize simple heuristics like field-of-view
(FOV) coverage to decide where to place exteroceptive sensors. In this work, we
conduct an information-theoretic investigation of this overlooked element of
mobile robotic perception in the context of simultaneous localization and
mapping (SLAM). We show how to formalize the sensor arrangement problem as a
form of subset selection under the E-optimality performance criterion. While
this formulation is NP-hard in general, we further show that a combination of
greedy sensor selection and fast convex relaxation-based post-hoc verification
enables the efficient recovery of certifiably optimal sensor designs in
practice. Results from synthetic experiments reveal that sensors placed with
OASIS outperform benchmarks in terms of mean squared error of visual SLAM
estimates
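A minimal sketch of greedy subset selection under the E-optimality criterion mentioned above: maximize the smallest eigenvalue of the summed information matrix. The candidate matrices are toy values, not OASIS's sensor models, and the convex relaxation-based optimality verification step is omitted.

```python
import numpy as np

def greedy_e_optimal(candidate_infos, k):
    """Greedily choose k candidates to maximize the minimum eigenvalue
    of the total information matrix (E-optimality)."""
    d = candidate_infos[0].shape[0]
    chosen, total = [], np.zeros((d, d))
    for _ in range(k):
        best = max((i for i in range(len(candidate_infos)) if i not in chosen),
                   key=lambda i: np.linalg.eigvalsh(total + candidate_infos[i])[0])
        chosen.append(best)
        total = total + candidate_infos[best]
    return chosen, float(np.linalg.eigvalsh(total)[0])

# Toy candidates: each sensor pose mostly informs one axis.
cands = [np.diag(v) for v in ([1.0, 0.1, 0.1], [0.1, 1.0, 0.1],
                              [0.1, 0.1, 1.0], [0.5, 0.5, 0.5])]
chosen, min_eig = greedy_e_optimal(cands, 3)
```

Note how the greedy rule favors arrangements that shore up the currently worst-observed direction, which is exactly the worst-case flavor of E-optimality.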
Learning how to combine sensory-motor functions into a robust behavior
This article describes a system, called Robel, for defining a robot controller that learns from experience very robust ways of performing a high-level task such as "navigate to". The designer specifies a collection of skills, represented as hierarchical task networks, whose primitives are sensory-motor functions. The skills provide different ways of combining these sensory-motor functions to achieve the desired task. The specified skills are assumed to be complementary and to cover different situations. The relationship between control states, defined through a set of task-dependent features, and the appropriate skills for pursuing the task is learned as a finite observable Markov decision process (MDP). This MDP provides a general policy for the task; it is independent of the environment and characterizes the abilities of the robot for the task.
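The state-to-skill learning described above can be illustrated with a simple tabular update. The states, skills, and reward below are invented for illustration; Robel's actual control states come from task-dependent features observed during execution.

```python
import random

states = ["corridor", "open_area", "cluttered"]
skills = ["wall_follow", "go_to_goal", "obstacle_avoid"]
Q = {(s, a): 0.0 for s in states for a in skills}

# Toy ground truth: each control state has one clearly suitable skill.
BEST = {"corridor": "wall_follow",
        "open_area": "go_to_goal",
        "cluttered": "obstacle_avoid"}

def reward(state, skill):
    # +1 when the skill suits the situation, a small penalty otherwise.
    return 1.0 if skill == BEST[state] else -0.1

random.seed(0)
alpha, eps = 0.5, 0.2
for _ in range(500):
    s = random.choice(states)
    if random.random() < eps:
        a = random.choice(skills)                    # explore
    else:
        a = max(skills, key=lambda sk: Q[(s, sk)])   # exploit
    # One-step update; control states are sampled independently here.
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

policy = {s: max(skills, key=lambda sk: Q[(s, sk)]) for s in states}
```

After training, `policy` maps each control state to its most appropriate skill, mirroring the role of the learned MDP policy in the article.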
A Multiagent Approach to Qualitative Navigation in Robotics
Navigation in unknown unstructured environments is still a difficult open problem in the field of robotics. In this PhD thesis we present a novel approach for robot navigation based on the combination of landmark-based navigation, a fuzzy representation of distances and angles, and multiagent coordination based on a bidding mechanism. The objective has been to build a robust navigation system with orientation sense for unstructured environments using visual information. To achieve this objective we have focused our efforts on two main threads: navigation and mapping methods, and control architectures for autonomous robots. Regarding the navigation and mapping task, we have extended the work presented by Prescott, so that it can be used with fuzzy information about the locations of landmarks in the environment. Together with this extension, we have also developed methods to compute diverting targets, needed by the robot when it gets blocked. Regarding the control architecture, we have proposed a general architecture that uses a bidding mechanism to coordinate a group of systems that control the robot. This mechanism can be used at different levels of the control architecture. In our case, we have used it to coordinate the three systems of the robot (Navigation, Pilot and Vision systems) and also to coordinate the agents that compose the Navigation system itself. Using this bidding mechanism, the action actually being executed by the robot is the most valued one at each point in time, so, provided the agents bid rationally, the dynamics of the bidding lead the robot to execute the actions necessary to reach a given target. The advantage of this mechanism is that there is no need to create a fixed hierarchy, as in the subsumption architecture; instead, the coordination changes dynamically depending on the specific situation of the robot and the characteristics of the environment.
We have obtained successful results, both in simulation and in real experimentation, showing that the mapping system is capable of building a map of an unknown environment and of using this information to move the robot from a starting point to a given target. The experimentation also showed that the bidding mechanism we designed for controlling the robot produces the overall behavior of executing the proper action at each moment in order to reach the target.
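The bidding coordination described above can be sketched in a few lines. The agent names match the thesis (Navigation, Pilot, Vision), but the bid values and actions are invented for illustration.

```python
# Each system bids for its preferred action; the highest bid wins at
# each point in time, so no fixed hierarchy is needed.
def coordinate(bids):
    """bids: agent -> (action, bid value). Return the most valued action."""
    _, (action, _) = max(bids.items(), key=lambda kv: kv[1][1])
    return action

# Example: the robot is blocked, so Navigation bids high for a diverting target.
bids = {
    "Navigation": ("divert_to_subtarget", 0.9),
    "Pilot": ("move_forward", 0.4),
    "Vision": ("track_landmark", 0.6),
}
chosen = coordinate(bids)
```

Because the winner is recomputed continuously, the effective priority ordering adapts to the robot's situation rather than being fixed at design time.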
Advances in Robot Navigation
Robot navigation includes different interrelated activities such as perception - obtaining and interpreting sensory information; exploration - the strategy that guides the robot to select the next direction to go; mapping - the construction of a spatial representation by using the sensory information perceived; localization - the strategy to estimate the robot position within the spatial map; path planning - the strategy to find a path towards a goal location being optimal or not; and path execution, where motor actions are determined and adapted to environmental changes. This book integrates results from the research work of authors all over the world, addressing the abovementioned activities and analyzing the critical implications of dealing with dynamic environments. Different solutions providing adaptive navigation are taken from nature inspiration, and diverse applications are described in the context of an important field of study: social robotics
Active SLAM: A Review On Last Decade
This article presents a comprehensive review of the Active Simultaneous
Localization and Mapping (A-SLAM) research conducted over the past decade. It
explores the formulation, applications, and methodologies employed in A-SLAM,
particularly in trajectory generation and control-action selection, drawing on
concepts from Information Theory (IT) and the Theory of Optimal Experimental
Design (TOED). This review includes both qualitative and quantitative analyses
of various approaches, deployment scenarios, configurations, path-planning
methods, and utility functions within A-SLAM research. Furthermore, this
article introduces a novel analysis of Active Collaborative SLAM (AC-SLAM),
focusing on collaborative aspects within SLAM systems. It includes a thorough
examination of collaborative parameters and approaches, supported by both
qualitative and statistical assessments. This study also identifies limitations
in the existing literature and suggests potential avenues for future research.
This survey serves as a valuable resource for researchers seeking insights into
A-SLAM methods and techniques, offering a current overview of A-SLAM
formulation.

Comment: 34 pages, 8 figures, 6 tables
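A minimal illustration of the classical TOED utility functions such reviews draw on, applied to a pose-covariance matrix. The covariance values are toy numbers chosen only to show the three criteria side by side.

```python
import numpy as np

def toed_utilities(Sigma):
    """A-, D-, and E-optimality summaries of a covariance matrix."""
    eig = np.linalg.eigvalsh(Sigma)
    return {
        "A-opt": float(np.sum(eig)),    # trace: average variance
        "D-opt": float(np.prod(eig)),   # determinant: uncertainty volume
        "E-opt": float(np.max(eig)),    # largest eigenvalue: worst-case variance
    }

# Toy 3-DoF pose covariance (x, y, heading).
Sigma = np.diag([0.04, 0.01, 0.09])
u = toed_utilities(Sigma)
```

An active SLAM utility function typically prefers the candidate action whose predicted covariance minimizes one of these scalars, trading off average against worst-case uncertainty.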
Active SLAM for autonomous underwater exploration
Exploration of a complex underwater environment without an a priori map is beyond the state of the art for autonomous underwater vehicles (AUVs). Despite several efforts regarding simultaneous localization and mapping (SLAM) and view planning, there is no exploration framework, tailored to underwater vehicles, that addresses exploration by combining mapping, active localization, and view planning in a unified way. We propose an exploration framework, based on an active SLAM strategy, that combines three main elements: a view planner, an iterative closest point (ICP)-based pose-graph SLAM algorithm, and an action selection mechanism that makes use of the joint map and state entropy reduction. To demonstrate the benefits of the active SLAM strategy, several tests were conducted with the Girona 500 AUV, both in simulation and in the real world. The article shows how the proposed framework makes it possible to plan exploratory trajectories that keep the vehicle's uncertainty bounded, thus creating more consistent maps.
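The entropy-reduction action selection described above can be sketched as follows. For brevity only the robot-state term of the joint entropy is shown, and the predicted covariances are toy values; the real system derives them from the ICP-based pose-graph SLAM state.

```python
import math
import numpy as np

def gaussian_entropy(Sigma):
    """Differential entropy of an n-D Gaussian:
    0.5 * log((2*pi*e)^n * det(Sigma))."""
    n = Sigma.shape[0]
    return 0.5 * (n * math.log(2 * math.pi * math.e)
                  + np.linalg.slogdet(Sigma)[1])

# Predicted covariances after each candidate action (toy values).
candidates = {
    "revisit_loop": np.diag([0.02, 0.02, 0.01]),
    "explore_frontier": np.diag([0.05, 0.05, 0.04]),
}
best = min(candidates, key=lambda a: gaussian_entropy(candidates[a]))
```

Here the loop-closing candidate wins because its predicted covariance, and hence entropy, is smaller; adding the map-entropy term would reward exploratory views as well, which is how the joint criterion balances the two.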
Reinforcement Learning with Frontier-Based Exploration via Autonomous Environment
Active Simultaneous Localisation and Mapping (SLAM) is a critical problem in
autonomous robotics, enabling robots to navigate to new regions while building
an accurate model of their surroundings. Visual SLAM is a popular technique
that relies on camera imagery for localization and mapping. However, existing
frontier-based exploration strategies can lead to a non-optimal path in
scenarios where there are multiple frontiers at similar distances. This issue
can impact the efficiency and accuracy of Visual SLAM, which is crucial for a
wide range of robotic applications, such as search and rescue, exploration, and
mapping. To address this issue, this research combines both an existing
Visual-Graph SLAM known as ExploreORB with reinforcement learning. The proposed
algorithm allows the robot to learn and optimize exploration routes through a
reward-based system to create an accurate map of the environment with proper
frontier selection. Frontier-based exploration is used to detect unexplored
areas, while reinforcement learning optimizes the robot's movement by assigning
rewards for optimal frontier points. Graph SLAM is then used to integrate the
robot's sensory data and build an accurate map of the environment. The proposed
algorithm aims to improve the efficiency and accuracy of ExploreORB by
optimizing the exploration process of frontiers to build a more accurate map.
To evaluate the effectiveness of the proposed approach, experiments will be
conducted in various virtual environments using Gazebo, a robot simulation
software. Results of these experiments will be compared with existing methods
to demonstrate the potential of the proposed approach as an optimal solution
for SLAM in autonomous robotics.

Comment: 23 pages, Journal
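The reward-based frontier selection described above can be sketched as a trade-off between expected information gain and travel cost. The weights and frontier fields below are illustrative, not ExploreORB's actual reward design.

```python
import math

def frontier_reward(frontier, robot_xy, w_info=1.0, w_cost=0.5):
    """Score a frontier: reward expected new area, penalize travel distance."""
    gain = frontier["unknown_cells"]              # expected newly mapped area
    dist = math.dist(robot_xy, frontier["xy"])    # travel cost proxy
    return w_info * gain - w_cost * dist

frontiers = [
    {"xy": (2.0, 0.0), "unknown_cells": 10},   # close, moderate gain
    {"xy": (8.0, 6.0), "unknown_cells": 12},   # far, slightly larger gain
]
best = max(frontiers, key=lambda f: frontier_reward(f, (0.0, 0.0)))
```

With such a reward, two frontiers at similar distances are disambiguated by their expected gain, which is the tie-breaking behavior the learned policy is meant to provide.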