Learning to Prevent Monocular SLAM Failure using Reinforcement Learning
Monocular SLAM refers to using a single camera to estimate robot ego-motion
while building a map of the environment. While Monocular SLAM is a well-studied
problem, automating it by integrating it with trajectory-planning frameworks is
particularly challenging. This paper presents a novel formulation based on
Reinforcement Learning (RL) that generates fail-safe trajectories in which the
SLAM-generated outputs do not deviate largely from their true values. In
essence, the RL framework successfully learns the otherwise
complex relation between perceptual inputs and motor actions and uses this
knowledge to generate trajectories that do not cause failure of SLAM. We show
systematically in simulations how the quality of the SLAM dramatically improves
when trajectories are computed using RL. Our method scales effectively across
Monocular SLAM frameworks in both simulation and real-world experiments with a
mobile robot.
Comment: Accepted at the 11th Indian Conference on Computer Vision, Graphics
and Image Processing (ICVGIP) 2018. More information can be found at the
project page at
https://robotics.iiit.ac.in/people/vignesh.prasad/SLAMSafePlanner.html and the
supplementary video at
https://www.youtube.com/watch?v=420QmM_Z8v
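The core idea of the abstract above — rewarding trajectories that keep SLAM estimates close to their true values — can be sketched as a reward function. This is a minimal illustration, not the paper's actual formulation; the function name, weights, and the assumption that ground-truth poses are available (e.g., in simulation) are all hypothetical:

```python
import numpy as np

def slam_safety_reward(estimated_pose, true_pose, tracking_lost,
                       deviation_weight=1.0, failure_penalty=100.0):
    """Hypothetical RL reward in the spirit of the abstract: penalize
    deviation of the SLAM pose estimate from ground truth (available in
    simulation) and heavily penalize outright tracking failure."""
    if tracking_lost:
        # SLAM failure dominates any deviation-based signal.
        return -failure_penalty
    deviation = np.linalg.norm(np.asarray(estimated_pose, dtype=float)
                               - np.asarray(true_pose, dtype=float))
    return -deviation_weight * deviation
```

An RL agent maximizing this signal would learn to prefer motions that keep the estimate accurate and avoid ones that break tracking.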
Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder
In this paper, we present a hierarchical path planning framework called SG-RL
(subgoal graphs-reinforcement learning), to plan rational paths for agents
maneuvering in continuous and uncertain environments. By "rational", we mean
(1) efficient path planning to eliminate first-move lags; (2) collision-free
and smooth for agents with kinematic constraints satisfied. SG-RL works in a
two-level manner. At the first level, SG-RL uses a geometric path-planning
method, i.e., Simple Subgoal Graphs (SSG), to efficiently find optimal abstract
paths, also called subgoal sequences. At the second level, SG-RL uses an RL
method, i.e., Least-Squares Policy Iteration (LSPI), to learn near-optimal
motion-planning policies which can generate kinematically feasible and
collision-free trajectories between adjacent subgoals. The first advantage of
the proposed method is that SSG mitigates the sparse-reward and local-minimum
problems faced by RL agents; thus, LSPI can be used to generate paths in
complex environments. The second advantage is that, when the environment
changes slightly (e.g., unexpected obstacles appear), SG-RL does not need to
reconstruct subgoal graphs and replan subgoal sequences using SSG, since LSPI
can deal with uncertainties by exploiting its generalization ability to handle
changes in environments. Simulation experiments in representative scenarios
demonstrate that, compared with existing methods, SG-RL can work well on
large-scale maps with relatively low action-switching frequencies and shorter
path lengths, and SG-RL can deal with small changes in environments. We further
demonstrate that the design of reward functions and the types of training
environments are important factors for learning feasible policies.
Comment: 20 pages
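The two-level structure described above can be sketched as a small planning loop. The interfaces here are hypothetical stand-ins: `find_subgoals` plays the role of the geometric SSG planner and `local_policy` the role of the learned LSPI motion policy; neither matches the paper's actual APIs:

```python
def plan_sg_rl(start, goal, find_subgoals, local_policy):
    """Two-level SG-RL-style planning (hypothetical interfaces).

    find_subgoals(start, goal) -> list of subgoals including endpoints
                                  (stand-in for Simple Subgoal Graphs);
    local_policy(a, b)         -> trajectory segment between adjacent
                                  subgoals (stand-in for an LSPI policy).
    """
    # Level 1: compute the abstract path (subgoal sequence).
    subgoals = find_subgoals(start, goal)
    # Level 2: fill in kinematically feasible motion between subgoals.
    trajectory = []
    for a, b in zip(subgoals, subgoals[1:]):
        trajectory.extend(local_policy(a, b))
    return trajectory
```

Because the local policy generalizes, small environment changes only perturb level 2; the subgoal sequence from level 1 can be reused, which is the replanning advantage the abstract highlights.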
CoverNav: Cover Following Navigation Planning in Unstructured Outdoor Environment with Deep Reinforcement Learning
Autonomous navigation in offroad environments has been extensively studied in
the robotics field. However, navigation in covert situations where an
autonomous vehicle needs to remain hidden from outside observers remains an
underexplored area. In this paper, we propose a novel Deep Reinforcement
Learning (DRL) based algorithm, called CoverNav, for identifying covert and
navigable trajectories with minimal cost in offroad terrains and jungle
environments in the presence of observers. CoverNav focuses on unmanned ground
vehicles seeking shelter and taking cover while safely navigating to a
predefined destination. Our proposed DRL method computes a local cost map that
helps distinguish which path will grant the maximal covertness while
maintaining a low-cost trajectory, using an elevation map generated from 3D
point cloud data, the robot's pose, and directed goal information. CoverNav
helps robot agents learn to prefer low-elevation terrain through a reward
function that penalizes them proportionately for traversing high elevations. If an
observer is spotted, CoverNav enables the robot to select natural obstacles
(e.g., rocks, houses, disabled vehicles, trees, etc.) and use them as shelters
to hide behind. We evaluate CoverNav using the Unity simulation environment and
show that it guarantees dynamically feasible velocities in the terrain when
fed an elevation map generated by another DRL-based navigation algorithm.
Additionally, we evaluate CoverNav's effectiveness in achieving a maximum goal
distance of 12 meters and its success rate in different elevation scenarios
with and without cover objects. We observe competitive performance comparable
to state-of-the-art (SOTA) methods without compromising accuracy.
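The reward shaping described above — progress toward the goal, a proportional elevation penalty, and a bonus for being behind cover when observed — can be sketched as follows. The function name, weights, and boolean inputs are illustrative assumptions, not the paper's actual reward:

```python
def covernav_reward(elevation, goal_progress, observer_visible, behind_cover,
                    elev_weight=0.5, cover_bonus=1.0, exposure_penalty=2.0):
    """Hypothetical shaping in the spirit of CoverNav: reward progress
    toward the goal, penalize elevation proportionately, and penalize
    exposure to an observer unless the agent is behind a natural obstacle."""
    r = goal_progress - elev_weight * elevation
    if observer_visible:
        # Being behind cover turns a detected observer into a bonus;
        # being exposed incurs a penalty instead.
        r += cover_bonus if behind_cover else -exposure_penalty
    return r
```

Under this shaping, an agent learns both to hug low terrain and to route through cover objects whenever an observer is in view.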
Limited Visibility and Uncertainty Aware Motion Planning for Automated Driving
Adverse weather conditions and occlusions in urban environments result in
impaired perception. These uncertainties are handled in different modules of an
automated vehicle, ranging from the sensor level through situation prediction
to motion planning. This paper focuses on motion planning given an uncertain
environment model with occlusions. We present a method to remain collision-free
under the worst-case evolution of the given scene. We define criteria that
measure the available margins to a collision while considering visibility and
interactions, and consequently integrate conditions that apply these criteria
into an optimization-based motion planner. We show the generality of our method
by validating it in several distinct urban scenarios.
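A common way to operationalize "collision-free for the worst-case evolution" with occlusions is to posit a phantom agent appearing at the occlusion edge at its maximum speed and check the resulting margin. The sketch below is a simplified 1-D illustration of that idea under assumed kinematics; it is not the paper's actual criterion:

```python
def worst_case_margin(ego_pos, ego_speed, occlusion_edge, hidden_speed_max,
                      time_horizon, dt=0.1):
    """Hypothetical worst-case margin (1-D along the ego path): assume a
    hidden agent appears at the occlusion edge and closes in at its maximum
    speed; return the minimum ego-phantom distance over the horizon."""
    steps = int(round(time_horizon / dt))
    min_dist = float("inf")
    for i in range(steps + 1):
        t = i * dt
        ego = ego_pos + ego_speed * t                     # ego advances along its path
        phantom = occlusion_edge - hidden_speed_max * t   # phantom approaches the ego
        min_dist = min(min_dist, abs(phantom - ego))
    return min_dist
```

A planner can then require this margin to stay above a safety threshold for every candidate trajectory, which is the spirit of the collision-margin criteria the abstract describes.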