Sensor Network Based Collision-Free Navigation and Map Building for Mobile Robots
Safe robot navigation is a fundamental research field for autonomous robots
including ground mobile robots and flying robots. The primary objective of a
safe robot navigation algorithm is to guide an autonomous robot from its
initial position to a target or along a desired path with obstacle avoidance.
With the development of information and sensor technology, implementations
that combine robotics with sensor networks have become a focus of recent
research. One such implementation is sensor network based robot navigation.
Another important navigation problem in robotics is safe area search and map
building. In this report, a global collision-free path planning algorithm for
ground mobile robots in dynamic environments is presented first. Exploiting
the advantages of a sensor network, this path planning algorithm is then
extended into a sensor network based navigation algorithm for ground mobile
robots. The 2D range finder sensor
network is used in the presented method to detect static and dynamic obstacles.
The sensor network can guide each ground mobile robot in the detected safe area
to the target. Furthermore, the presented navigation algorithm is extended into
3D environments. With the measurements of the sensor network, any flying robot
in the workspace is navigated by the presented algorithm from the initial
position to the target. In addition, another navigation problem, safe area
search and map building for ground mobile robots, is studied and two
algorithms are presented. In the first method, we consider a ground
mobile robot equipped with a 2D range finder sensor searching a bounded 2D area
without any collision and building a complete 2D map of the area. The first
map building algorithm is then extended to a second algorithm for 3D map
building.
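The report's planners are described only at a high level. As a generic illustration of grid-based collision-free path planning, not the report's sensor network algorithm, here is a minimal A* sketch on a 2D occupancy grid; the function name and grid representation are assumptions for illustration:

```python
import heapq
import itertools

def astar_grid(grid, start, goal):
    """Collision-free A* on a 2D occupancy grid (True = obstacle).

    Illustrative sketch only: cells are (row, col) tuples, moves are
    4-connected with unit cost, and the Manhattan distance serves as
    an admissible heuristic.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()  # tie-breaker so heap entries always compare
    open_set = [(h(start), next(tie), start, None)]
    g_cost = {start: 0}
    came_from = {}
    while open_set:
        _, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue  # already finalized with a shorter path
        came_from[cur] = parent
        if cur == goal:  # reconstruct the collision-free path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]):
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), nxt, cur))
    return None  # no collision-free path exists
```

Swapping the 4-neighborhood for a 3D 6- or 26-neighborhood is the natural analogue of the report's extension from ground robots to flying robots.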
Simultaneous Localization and Mapping by Cooperative Robots
The process of simultaneously localizing a mobile robot in an unknown environment while building a map of that environment, known as Simultaneous Localization and Mapping (SLAM), has been a central research topic in robotics in recent years. SLAM in dynamic and complex environments remains an open research problem. If properly designed, a team of robots could significantly increase the speed of mapping and make the system more robust, since overlapping information can be used to verify the proper functioning of individual robots, and the failure of a single robot does not hinder the overall mission. The purpose of this project is to implement SLAM in a dynamic indoor environment using multiple ground robots. This task requires the combination of inertial and visual sensors as well as active range-finders such as a Kinect infrared sensor. The first stage of this project focused on gaining familiarity with the robotic platforms and software to be used, by developing methods for data acquisition, sensor fusion, and map building. In addition, a camera was used to detect moving objects and remove them from the map. Future steps include combining local maps from single robots into a global map and gaining familiarity with localization given the environment map, depth information, and on-board sensor measurements, ultimately leading to the implementation of cooperative SLAM.
Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder
In this paper, we present a hierarchical path planning framework called SG-RL
(subgoal graphs-reinforcement learning), to plan rational paths for agents
maneuvering in continuous and uncertain environments. By "rational", we mean
(1) efficient path planning that eliminates first-move lags, and (2)
collision-free, smooth paths that satisfy the agents' kinematic constraints.
SG-RL works in a
two-level manner. At the first level, SG-RL uses a geometric path-planning
method, i.e., Simple Subgoal Graphs (SSG), to efficiently find optimal abstract
paths, also called subgoal sequences. At the second level, SG-RL uses an RL
method, i.e., Least-Squares Policy Iteration (LSPI), to learn near-optimal
motion-planning policies which can generate kinematically feasible and
collision-free trajectories between adjacent subgoals. The first advantage of
the proposed method is that SSG mitigates the sparse-reward and local-minimum
limitations faced by RL agents; thus, LSPI can be used to generate paths in
complex environments. The second advantage is that, when the environment
changes slightly (e.g., unexpected obstacles appear), SG-RL does not need to
reconstruct subgoal graphs or replan subgoal sequences with SSG, since LSPI
can exploit its generalization ability to handle such changes. Simulation
experiments in representative scenarios
demonstrate that, compared with existing methods, SG-RL can work well on
large-scale maps with relatively low action-switching frequencies and shorter
path lengths, and SG-RL can deal with small changes in environments. We further
demonstrate that the design of reward functions and the types of training
environments are important factors for learning feasible policies. Comment: 20 pages
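SG-RL's two-level structure, an abstract subgoal sequence refined by a learned local policy, can be sketched generically. The LSPI controller is replaced here by a toy greedy step, so this shows only the control flow, not the paper's method; all names are illustrative assumptions:

```python
def plan_hierarchically(subgoals, local_policy, state):
    """Two-level planning sketch in the spirit of SG-RL.

    `subgoals` stands in for an SSG subgoal sequence and `local_policy`
    for the learned LSPI controller; both are simplified assumptions.
    The upper level iterates over subgoals, the lower level generates
    motion between adjacent subgoals.
    """
    trajectory = [state]
    for sg in subgoals:
        while state != sg:
            state = local_policy(state, sg)
            trajectory.append(state)
    return trajectory

def greedy_step(state, subgoal):
    """Toy local policy: move one axis-aligned grid step toward the
    current subgoal (no kinematics, no learning)."""
    x, y = state
    gx, gy = subgoal
    if x != gx:
        x += 1 if gx > x else -1
    elif y != gy:
        y += 1 if gy > y else -1
    return (x, y)
```

Because the local policy only ever looks at the current state and the next subgoal, a small environment change affects one segment at a time, which mirrors the replanning-avoidance argument above.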
A Hierarchical Extension of the D* Algorithm
In this paper a contribution to the practice of path planning using a new
hierarchical extension of the D* algorithm is introduced. A hierarchical graph
is stratified into several abstraction levels and used to model environments
for path planning. The hierarchical D* algorithm uses a down-top strategy and
a set of pre-calculated trajectories in order to improve performance. This
allows optimality and, in particular, lower computational time. It is
experimentally shown that hierarchical search algorithms and on-line path
planning algorithms based on topological abstractions can be combined
successfully.
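The combination of abstract graph search with pre-calculated trajectories can be illustrated by a plain Dijkstra search over a coarse region graph followed by edge refinement. This is a simplified stand-in for the hierarchical D* algorithm itself; the graph and trajectory table are assumed inputs:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph given as {node: {nbr: cost}}.
    Assumes the goal is reachable from the start."""
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

def refine(abstract_path, precomputed):
    """Expand each abstract edge with a pre-calculated trajectory,
    mirroring the paper's use of stored trajectories between levels."""
    detailed = [abstract_path[0]]
    for a, b in zip(abstract_path, abstract_path[1:]):
        detailed.extend(precomputed[(a, b)])
    return detailed
```

The computational saving comes from searching the small abstract graph while the expensive fine-grained motion is looked up rather than re-planned.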
Experimental analysis of sample-based maps for long-term SLAM
This paper presents a system for long-term SLAM (simultaneous localization and mapping) by mobile service robots and its experimental evaluation in a real dynamic environment. To deal with the stability-plasticity dilemma (the trade-off between adaptation to new patterns and preservation of old patterns), the environment is represented at multiple timescales simultaneously (five in our experiments). A sample-based representation is
proposed, where older memories fade at different rates depending on the timescale, and robust statistics are used to interpret the samples. The dynamics of this representation are analysed in a five-week experiment, measuring the relative influence of short- and long-term memories over time and further demonstrating the robustness of the approach.
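One way to picture a multi-timescale, sample-based representation is as a set of fixed-size sample buffers per map cell, with the median as the robust statistic. The buffer sizes and the use of displacement as the fading mechanism are assumptions for illustration, not the paper's exact fading scheme:

```python
from collections import deque
from statistics import median

class MultiTimescaleCell:
    """Sketch of a sample-based occupancy memory at several timescales.

    Each timescale is a bounded sample buffer: short buffers adapt
    quickly to new patterns (plasticity), long buffers preserve older
    observations (stability). Older samples fade by being displaced.
    """
    def __init__(self, buffer_sizes=(2, 8, 32)):
        self.buffers = [deque(maxlen=n) for n in buffer_sizes]

    def observe(self, occupancy):
        # Every new sample enters all timescales simultaneously.
        for buf in self.buffers:
            buf.append(occupancy)

    def estimate(self):
        # Robust per-timescale estimates; a fusion rule would combine
        # them into a single map value.
        return [median(buf) for buf in self.buffers]
```

After a sudden change (e.g. a moved piece of furniture), the short-timescale estimate flips quickly while the long-timescale estimate still reflects the earlier state, which is exactly the short- versus long-term memory influence the experiment above measures.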
DS-SLAM: A Semantic Visual SLAM towards Dynamic Environments
Simultaneous Localization and Mapping (SLAM) is considered to be a
fundamental capability for intelligent mobile robots. Over the past decades,
many impressive SLAM systems have been developed and have achieved good
performance under certain circumstances. However, some problems are still not
well solved, for example, how to tackle moving objects in dynamic environments
and how to make robots truly understand their surroundings and accomplish
advanced tasks. In this paper, a robust semantic visual SLAM for dynamic
environments named DS-SLAM is proposed. Five threads run in parallel in
DS-SLAM: tracking, semantic segmentation, local mapping, loop closing, and
dense semantic map creation. DS-SLAM combines a semantic segmentation network
with a moving consistency check to reduce the impact of dynamic objects, and
thus localization accuracy is greatly improved in dynamic environments.
Meanwhile, a dense semantic octo-tree map is produced, which can be employed
for high-level tasks. We conduct experiments both on the TUM RGB-D dataset
and in a real-world environment. The results demonstrate that the absolute
trajectory accuracy of DS-SLAM can be improved by one order of magnitude
compared with ORB-SLAM2, making it one of the state-of-the-art SLAM systems
in highly dynamic environments. The code is available at
https://github.com/ivipsourcecode/DS-SLAM. Comment: 7 pages, accepted at the
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS 2018).
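The interaction between semantic segmentation and the moving consistency check can be sketched as a keypoint filter. The segmentation mask and per-pixel consistency flags are assumed to be precomputed, and this simplified rule is an illustration, not DS-SLAM's exact pipeline:

```python
def filter_dynamic_keypoints(keypoints, semantic_mask, moving_flags):
    """Sketch of DS-SLAM-style culling of features on dynamic objects.

    `semantic_mask[y][x]` is True where segmentation labels a
    potentially movable class (e.g. a person), and `moving_flags[y][x]`
    is the outcome of a moving-consistency check; both are assumed to
    be computed upstream. A keypoint is discarded only when it lies on
    a movable segment that the consistency check confirms is moving,
    so static instances of movable classes still contribute features.
    """
    kept = []
    for (x, y) in keypoints:
        if semantic_mask[y][x] and moving_flags[y][x]:
            continue  # feature lies on a confirmed dynamic object
        kept.append((x, y))
    return kept
```

Removing these features before pose estimation is what keeps dynamic objects from corrupting the trajectory, and the same mask can exclude them from the dense map.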
Knowledge Representation for Robots through Human-Robot Interaction
The representation of the knowledge needed by a robot to perform complex
tasks is restricted by the limitations of perception. One possible way of
overcoming this limitation and designing "knowledgeable" robots is to rely on
interaction with the user. We propose a multi-modal interaction framework
that allows knowledge about the environment where the robot operates to be
acquired effectively. In particular, in this paper we present a rich
representation framework that can be built automatically from a metric map
annotated with indications provided by the user. Such a representation then
allows the robot to ground complex referential expressions for motion
commands and to devise topological navigation plans to reach the target
locations. Comment: Knowledge Representation and Reasoning in Robotics
Workshop at ICLP 201