Long-term experiments with an adaptive spherical view representation for navigation in changing environments
Real-world environments such as houses and offices change over time, meaning that a mobile robot's map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as to represent the local 3D geometry of the environment. A series of experiments demonstrates the persistent performance of the proposed system in real changing environments, including analysis of its long-term stability.
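The multi-store memory idea above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: `ReferenceView`, the rehearsal counts, and the `promote_after` threshold are all assumptions.

```python
class ReferenceView:
    """Sketch of a multi-store memory update for one map node: features seen
    repeatedly are 'rehearsed' into a long-term store; features that stop
    being observed decay out of the short-term store."""

    def __init__(self, features):
        self.short_term = {f: 1 for f in features}  # feature id -> rehearsal count
        self.long_term = {}

    def update(self, observed, promote_after=3):
        # rehearse re-observed features and promote stable ones to long-term
        for f in observed:
            if f in self.long_term:
                continue
            self.short_term[f] = self.short_term.get(f, 0) + 1
            if self.short_term[f] >= promote_after:
                self.long_term[f] = self.short_term.pop(f)
        # decay features that were not re-observed this visit
        for f in list(self.short_term):
            if f not in observed:
                self.short_term[f] -= 1
                if self.short_term[f] <= 0:
                    del self.short_term[f]
```

A feature that survives repeated visits becomes part of the stable reference view, while transient scene changes are forgotten, which is the behaviour the multi-store model is meant to capture.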
Two-Stage Focused Inference for Resource-Constrained Collision-Free Navigation
Long-term operation of resource-constrained robots typically requires hard decisions about which data to process and/or retain. The question then arises of how to choose which data is most useful to keep for the task at hand. As spatial scale grows, the size of the map grows without bound, and as temporal scale grows, the number of measurements grows without bound. In this work, we present the first known approach to tackle both of these issues. The approach has two stages. First, a subset of the variables (focused variables) is selected that is most useful for a particular task. Second, a task-agnostic and principled method (focused inference) is proposed to select a subset of the measurements that maximizes the information over the focused variables. The approach is then applied to the specific task of robot navigation in an obstacle-laden environment. A landmark selection method is proposed to minimize the probability of collision, and the set of measurements that best localizes those landmarks is then selected. It is shown that the two-stage approach outperforms both selecting only measurements and selecting only landmarks in terms of minimizing the probability of collision. The performance improvement is validated through detailed simulations and real experiments on a Pioneer robot.
Funding: United States Army Research Office MURI (Grant W911NF-11-1-0391); United States Office of Naval Research (Grant N00014-11-1-0688); National Science Foundation (Award IIS-1318392).
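A minimal sketch of the two-stage selection, under stated assumptions: the landmark records, the scalar per-measurement information scores, and the greedy budgeted choice are all illustrative stand-ins for the paper's actual criteria.

```python
import math

def select_focused_landmarks(landmarks, path, radius):
    """Stage 1: keep only landmarks near the planned path, i.e. the ones
    whose localization most affects the probability of collision."""
    return [l for l in landmarks
            if any(math.dist(l["pos"], p) <= radius for p in path)]

def select_measurements(measurements, focused_ids, budget):
    """Stage 2: greedily keep the `budget` measurements carrying the most
    information about the focused landmarks (scalar info proxy)."""
    scored = sorted(measurements,
                    key=lambda m: sum(m["info"].get(i, 0.0) for i in focused_ids),
                    reverse=True)
    return scored[:budget]
```

The point of the split is that stage 1 is task-aware (collision avoidance picks the focused variables) while stage 2 is task-agnostic (pure information maximization over those variables).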
Reinforcement Learning with Frontier-Based Exploration via Autonomous Environment
Active Simultaneous Localisation and Mapping (SLAM) is a critical problem in autonomous robotics, enabling robots to navigate to new regions while building an accurate model of their surroundings. Visual SLAM is a popular technique that uses visual information from cameras to build this model. However, existing frontier-based exploration strategies can lead to non-optimal paths in scenarios where there are multiple frontiers at similar distances. This issue can impact the efficiency and accuracy of Visual SLAM, which is crucial for a wide range of robotic applications, such as search and rescue, exploration, and mapping. To address this issue, this research combines an existing Visual-Graph SLAM system, ExploreORB, with reinforcement learning. The proposed algorithm allows the robot to learn and optimize exploration routes through a reward-based system so as to create an accurate map of the environment with proper frontier selection. Frontier-based exploration is used to detect unexplored areas, while reinforcement learning optimizes the robot's movement by assigning rewards to optimal frontier points. Graph SLAM is then used to integrate the robot's sensory data and build an accurate map of the environment. The proposed algorithm aims to improve the efficiency and accuracy of ExploreORB by optimizing the frontier exploration process to build a more accurate map. To evaluate the effectiveness of the proposed approach, experiments will be conducted in various virtual environments using Gazebo, a robot simulation tool, and the results will be compared with existing methods to demonstrate the potential of the proposed approach as an optimal solution for SLAM in autonomous robotics.
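The frontier reward that a learned policy would optimize can be sketched as follows. The weights and the reward terms (unknown cells minus travel cost) are assumptions for illustration, not ExploreORB's actual reward, and the greedy argmax stands in for the learned policy.

```python
import math

def frontier_reward(frontier, robot_pos, w_info=1.0, w_dist=0.5):
    """Illustrative reward: expected newly observed area minus travel cost."""
    return (w_info * frontier["unknown_cells"]
            - w_dist * math.dist(robot_pos, frontier["pos"]))

def pick_frontier(frontiers, robot_pos):
    """Greedy stand-in for the learned policy: among frontiers at similar
    distances, the information term breaks the tie instead of an arbitrary
    nearest-first choice."""
    return max(frontiers, key=lambda f: frontier_reward(f, robot_pos))
```

This is exactly the situation the abstract describes: when two frontiers are equidistant, a distance-only strategy is indifferent, while a reward combining information gain and cost prefers the more informative one.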
A Real-Time Unsupervised Neural Network for the Low-Level Control of a Mobile Robot in a Nonstationary Environment
This article introduces a real-time, unsupervised neural network that learns to control a two-degree-of-freedom mobile robot in a nonstationary environment. The neural controller, termed the neural NETwork MObile Robot Controller (NETMORC), combines associative learning and Vector Associative Map (VAM) learning to generate transformations between spatial and velocity coordinates. As a result, the controller learns the wheel velocities required to reach a target at an arbitrary distance and angle. The transformations are learned during an unsupervised training phase, during which the robot moves as a result of randomly selected wheel velocities. The robot learns the relationship between these velocities and the resulting incremental movements. Aside from being able to reach stationary or moving targets, the NETMORC structure also enables the robot to perform successfully in spite of disturbances in the environment, such as wheel slippage, or changes in the robot's plant, including changes in wheel radius, changes in inter-wheel distance, or changes in the internal time step of the system. Finally, the controller is extended to include a module that learns an internal odometric transformation, allowing the robot to reach targets when visual input is sporadic or unreliable.
Funding: Sloan Fellowship (BR-3122); Air Force Office of Scientific Research (F49620-92-J-0499).
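The unsupervised training phase described above (random wheel velocities, then learning the velocity-to-movement relationship) can be sketched with a linear least-squares fit in place of the neural associative map. The plant parameters, the babbling procedure, and the fitting method are assumptions for illustration only.

```python
import random
import numpy as np

def plant(v_l, v_r, wheel_radius=0.05, axle=0.3, dt=0.1):
    """Differential-drive plant, unknown to the learner: wheel speeds ->
    incremental forward distance and heading change."""
    ds = wheel_radius * (v_l + v_r) / 2.0 * dt
    dtheta = wheel_radius * (v_r - v_l) / axle * dt
    return ds, dtheta

def babble_and_fit(n=200, seed=0):
    """Motor babbling: drive with random wheel speeds, observe the resulting
    movements, and fit a linear inverse model (movement -> wheel speeds)
    by least squares, standing in for the VAM learning."""
    rng = random.Random(seed)
    moves, speeds = [], []
    for _ in range(n):
        vl, vr = rng.uniform(-1, 1), rng.uniform(-1, 1)
        moves.append(plant(vl, vr))
        speeds.append((vl, vr))
    W, *_ = np.linalg.lstsq(np.array(moves), np.array(speeds), rcond=None)
    return W  # (ds, dtheta) @ W ~= (v_l, v_r)
```

Because the inverse model is learned from the robot's own motion rather than from known plant parameters, re-running the babbling phase adapts it after changes in wheel radius or inter-wheel distance, which is the robustness property the abstract emphasizes.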
Simultaneous localization and map-building using active vision
An active approach to sensing can provide the focused measurement capability, over a wide field of view, that allows a correctly formulated Simultaneous Localization and Map-Building (SLAM) system to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
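Uncertainty-based measurement selection of the kind mentioned above is commonly implemented with standard EKF quantities; the sketch below picks the feature whose predicted measurement is most uncertain. The data layout and the trace criterion are illustrative assumptions, not this paper's exact rule.

```python
import numpy as np

def innovation_covariance(H, P, R):
    """Predicted measurement uncertainty S = H P H^T + R for one feature,
    where P is the current state covariance and H the measurement Jacobian."""
    return H @ P @ H.T + R

def pick_feature_to_fixate(features, P):
    """Fixate the feature with the largest predicted innovation covariance:
    measuring it is expected to yield the largest information gain."""
    return max(features,
               key=lambda f: np.trace(innovation_covariance(f["H"], P, f["R"])))
```

An active head can then be steered to that feature, which is what makes the "focused measurement capability over a wide field of view" pay off.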
Informative Path Planning for Active Field Mapping under Localization Uncertainty
Information gathering algorithms play a key role in unlocking the potential of robots for efficient data collection in a wide range of applications. However, most existing strategies neglect the fundamental problem of robot pose uncertainty, which is an implicit requirement for creating robust, high-quality maps. To address this issue, we introduce an informative planning framework for active mapping that explicitly accounts for the pose uncertainty in both the mapping and planning tasks. Our strategy exploits a Gaussian Process (GP) model to capture a target environmental field given the uncertainty on its inputs. For planning, we formulate a new utility function that couples the localization and field mapping objectives in GP-based mapping scenarios in a principled way, without relying on any manually tuned parameters. Extensive simulations show that our approach outperforms existing strategies, with reductions in mean pose uncertainty and map error. We also present a proof of concept in an indoor temperature mapping scenario.
Comment: 8 pages, 7 figures; revised submission to Robotics & Automation Letters (and IEEE International Conference on Robotics and Automation).
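One common way to give a GP "uncertainty on its inputs", offered here as an illustration rather than the paper's exact model, is moment matching: the expected RBF kernel between two noisy 2-D inputs has an inflated effective lengthscale and a damped amplitude.

```python
import numpy as np

def expected_rbf(m1, m2, s1, s2, lengthscale=1.0):
    """Expected RBF kernel between two uncertain 2-D inputs x_i ~ N(m_i, s_i^2 I):
    pose noise adds s1^2 + s2^2 to the squared lengthscale and scales the
    amplitude down, so noisy poses contribute weaker, broader correlations."""
    eff = lengthscale**2 + s1**2 + s2**2
    d2 = float(np.sum((np.asarray(m1) - np.asarray(m2)) ** 2))
    return (lengthscale**2 / eff) * np.exp(-0.5 * d2 / eff)
```

With zero pose noise this reduces to the standard RBF kernel; as pose uncertainty grows, each sample is trusted less at its nominal location and smeared over a wider area, which is the qualitative behaviour an uncertainty-aware field map needs.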
An adaptive spherical view representation for navigation in changing environments
Real-world environments such as houses and offices change over time, meaning that a mobile robot's map will become out of date. In previous work we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry, we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.
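A much-simplified stand-in for the heading estimate described above: given feature bearings from the stored spherical reference view and the current view (assumed already matched, in radians), the heading change is the circular mean of the bearing differences. The full multi-view-geometry estimate uses 3D geometry; this 1-D version only illustrates the idea.

```python
import math

def heading_offset(ref_bearings, cur_bearings):
    """Circular mean of per-feature bearing differences between the stored
    spherical reference view and the current view; robust to angle wrap-around
    because the average is taken on the unit circle."""
    dx = sum(math.cos(c - r) for r, c in zip(ref_bearings, cur_bearings))
    dy = sum(math.sin(c - r) for r, c in zip(ref_bearings, cur_bearings))
    return math.atan2(dy, dx)
```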