Combining Optimal Control and Learning for Visual Navigation in Novel Environments
Model-based control is a popular paradigm for robot navigation because it can
leverage a known dynamics model to efficiently plan robust robot trajectories.
However, it is challenging to use model-based methods in settings where the
environment is a priori unknown and can only be observed partially through
on-board sensors. In this work, we address this shortcoming by
coupling model-based control with learning-based perception. The learning-based
perception module produces a series of waypoints that guide the robot to the
goal via a collision-free path. These waypoints are used by a model-based
planner to generate a smooth and dynamically feasible trajectory that is
executed on the physical system using feedback control. Our experiments in
simulated real-world cluttered environments and on an actual ground vehicle
demonstrate that the proposed approach can reach goal locations more reliably
and efficiently in novel environments as compared to purely geometric
mapping-based or end-to-end learning-based alternatives. Our approach does not
rely on detailed explicit 3D maps of the environment, works well with low frame
rates, and generalizes well from simulation to the real world. Videos
describing our approach and experiments are available on the project website.
Comment: Project website: https://vtolani95.github.io/WayPtNav
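The planning step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the polynomial order, time horizon, and waypoint values are all assumptions, standing in for whatever smooth, dynamically feasible parameterization the model-based planner actually uses.

```python
import numpy as np

def cubic_segment(p0, v0, p1, v1, T, n=50):
    """Fit a cubic polynomial p(t) with p(0)=p0, p'(0)=v0, p(T)=p1, p'(T)=v1,
    and sample it at n points -- one axis of a smooth segment from the robot's
    current state to a waypoint predicted by the perception module."""
    A = np.array([[1.0, 0.0, 0.0,   0.0],
                  [0.0, 1.0, 0.0,   0.0],
                  [1.0, T,   T**2,  T**3],
                  [0.0, 1.0, 2.0*T, 3.0*T**2]])
    a = np.linalg.solve(A, np.array([p0, v0, p1, v1], dtype=float))
    t = np.linspace(0.0, T, n)
    return a[0] + a[1]*t + a[2]*t**2 + a[3]*t**3

# Hypothetical waypoint: reach (2.0, 1.0) at rest within 4 s,
# starting from the origin with forward velocity 0.5 m/s.
xs = cubic_segment(0.0, 0.5, 2.0, 0.0, T=4.0)
ys = cubic_segment(0.0, 0.0, 1.0, 0.0, T=4.0)
```

Matching position and velocity at both endpoints is what makes consecutive segments stitch together smoothly before a feedback controller tracks them.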
CIRL: Controllable Imitative Reinforcement Learning for Vision-based Self-driving
Autonomous urban driving with complex multi-agent dynamics is
under-explored due to the difficulty of learning an optimal driving policy. The
traditional modular pipeline relies heavily on hand-designed rules and a
pre-processing perception system, while supervised learning-based models are
limited by the availability of extensive human driving experience. We present a
general and principled Controllable Imitative Reinforcement Learning (CIRL)
approach which successfully makes the driving agent achieve higher success
rates based on only vision inputs in a high-fidelity car simulator. To
alleviate the low exploration efficiency for large continuous action space that
often prohibits the use of classical RL on challenging real tasks, our CIRL
explores over a reasonably constrained action space guided by encoded
experiences that imitate human demonstrations, building upon Deep Deterministic
Policy Gradient (DDPG). Moreover, we propose to specialize adaptive policies
and steering-angle reward designs for different control signals (i.e. follow,
straight, turn right, turn left) based on the shared representations to improve
the model's capability in tackling diverse cases. Extensive experiments on the
CARLA driving benchmark demonstrate that CIRL substantially outperforms all
previous methods in terms of the percentage of successfully completed episodes
on a variety of goal-directed driving tasks. We also show its superior
generalization capability in unseen environments. To our knowledge, this is the
first successful case of a driving policy learned through reinforcement
learning in a high-fidelity simulator that performs better than supervised
imitation learning.
Comment: To appear in ECCV 201
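The two ideas above, per-command policy heads over shared features and exploration constrained around imitative actions, can be sketched roughly as follows. Everything here is a placeholder: the random projections stand in for the shared CNN representation, and the head shapes, clipping band, and inputs are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# One policy head per high-level command over a shared representation
# (a fixed random projection stands in for the shared vision features).
COMMANDS = ["follow", "straight", "turn_right", "turn_left"]
W_shared = rng.normal(size=(8, 16))
heads = {c: rng.normal(size=(16, 2)) for c in COMMANDS}  # steering, throttle

def act(obs, command, demo_action, margin=0.3):
    """Select the command-specific head, then constrain the exploratory
    action to a band around the demonstration-like action, in the spirit of
    CIRL's guided exploration over a restricted action space."""
    feat = np.tanh(obs @ W_shared)
    raw = np.tanh(feat @ heads[command])
    return np.clip(raw, demo_action - margin, demo_action + margin)

# Hypothetical observation and imitative reference action.
a = act(rng.normal(size=8), "turn_left", demo_action=np.array([0.2, 0.5]))
```

Clipping toward the imitative action keeps DDPG-style exploration inside a region where demonstrations suggest the behavior is sensible, which is the mechanism the abstract credits for tractable exploration in a large continuous action space.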
Probabilistic Prediction of Interactive Driving Behavior via Hierarchical Inverse Reinforcement Learning
Autonomous vehicles (AVs) are on the road. To safely and efficiently interact
with other road participants, AVs have to accurately predict the behavior of
surrounding vehicles and plan accordingly. Such prediction should be
probabilistic, to address the uncertainties in human behavior. Such prediction
should also be interactive, since the distribution over all possible
trajectories of the predicted vehicle depends not only on historical
information, but also on future plans of other vehicles that interact with it.
To achieve such interaction-aware predictions, we propose a probabilistic
prediction approach based on hierarchical inverse reinforcement learning (IRL).
First, we explicitly consider the hierarchical trajectory-generation process of
human drivers involving both discrete and continuous driving decisions. Based
on this, the distribution over all future trajectories of the predicted vehicle
is formulated as a mixture of distributions partitioned by the discrete
decisions. Then we apply IRL hierarchically to learn the distributions from
real human demonstrations. A case study for the ramp-merging driving scenario
is provided. The quantitative results show that the proposed approach can
accurately predict both the discrete driving decisions, such as yield or pass,
and the continuous trajectories.
Comment: ITSC201
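The mixture formulation above can be sketched as follows. This is a minimal illustration of the probability structure only; the logits, trajectory costs, and candidate sets are hypothetical, and the real approach learns the cost functions from demonstrations via IRL.

```python
import numpy as np

def mixture_prediction(decision_logits, costs_per_decision):
    """Hierarchical prediction: P(traj) = sum_d P(d) * P(traj | d), where d
    ranges over discrete decisions (e.g. yield vs. pass) and each
    P(traj | d) is a maximum-entropy (Boltzmann) distribution over that
    decision's candidate trajectories, P(traj | d) proportional to exp(-cost)."""
    p_d = np.exp(decision_logits - decision_logits.max())
    p_d /= p_d.sum()
    p_traj_given_d = []
    for costs in costs_per_decision:
        w = np.exp(-(costs - costs.min()))
        p_traj_given_d.append(w / w.sum())
    return p_d, p_traj_given_d

# Hypothetical learned scores: the first decision is slightly favored.
p_d, p_t = mixture_prediction(np.array([1.0, 0.0]),
                              [np.array([2.0, 1.0, 3.0]),
                               np.array([0.5, 0.5])])
```

Partitioning by the discrete decision first is what lets each continuous distribution stay unimodal and easy to learn, while the mixture still covers multimodal outcomes like yield-versus-pass.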
Pedestrian Dominance Modeling for Socially-Aware Robot Navigation
We present a Pedestrian Dominance Model (PDM) to identify the dominance
characteristics of pedestrians for robot navigation. Through a perception study
on a simulated dataset of pedestrians, PDM models the perceived dominance
levels of pedestrians with varying motion behaviors corresponding to
trajectory, speed, and personal space. At runtime, we use PDM to identify the
dominance levels of pedestrians to facilitate socially-aware navigation for the
robots. PDM can predict dominance levels from trajectories with ~85% accuracy.
Prior studies in the psychology literature indicate that, when interacting with
humans, people are more comfortable around those who exhibit complementary
movement behaviors. Our algorithm leverages this by enabling robots to
exhibit complementary responses to pedestrian dominance. We also present an
application of PDM for generating dominance-based collision-avoidance behaviors
in the navigation of autonomous vehicles among pedestrians. We demonstrate the
benefits of our algorithm for robots navigating among tens of pedestrians in
simulated environments.
Comment: To appear in ICRA 201
Learning to Navigate: Exploiting Deep Networks to Inform Sample-Based Planning During Vision-Based Navigation
Recent applications of deep learning to navigation have generated end-to-end
navigation solutions whereby visual sensor input is mapped to control signals
or to motion primitives. The resulting visual navigation strategies work very
well at collision avoidance and have performance that matches traditional
reactive navigation algorithms while operating in real-time. It is accepted
that these solutions cannot provide the same level of performance as a global
planner. However, it is less clear how such end-to-end systems should be
integrated into a full navigation pipeline. We evaluate the typical end-to-end
solution within a full navigation pipeline in order to expose its weaknesses.
Doing so illuminates how to better integrate deep learning methods into the
navigation pipeline. In particular, we show that they are an efficient means to
provide informed samples for sample-based planners. Controlled simulations with
comparison against traditional planners show that the number of samples can be
reduced by an order of magnitude while preserving navigation performance.
Implementation on a mobile robot matches the simulated performance outcomes.
Comment: 7 pages, 6 figures
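The informed-sampling idea above can be sketched as follows. The network's output is stood in for by a fixed "predicted path", and the bias ratio, noise scale, and workspace bounds are assumptions; the point is only the shape of the scheme: concentrate samples near the prediction while keeping a uniform fallback.

```python
import numpy as np

rng = np.random.default_rng(1)

def informed_samples(predicted_path, n, bias=0.9, sigma=0.3, bounds=(0.0, 10.0)):
    """Draw planner samples: with probability `bias`, perturb a point on the
    network-predicted path; otherwise sample uniformly, so the sample-based
    planner retains coverage of the whole workspace."""
    samples = np.empty((n, 2))
    for i in range(n):
        if rng.random() < bias:
            base = predicted_path[rng.integers(len(predicted_path))]
            samples[i] = base + rng.normal(scale=sigma, size=2)
        else:
            samples[i] = rng.uniform(bounds[0], bounds[1], size=2)
    return samples

# Hypothetical predicted route across a 10 m x 10 m workspace.
path = np.stack([np.linspace(0, 10, 20), np.linspace(0, 5, 20)], axis=1)
S = informed_samples(path, n=200)
```

Because most samples land near the predicted route, the planner needs far fewer of them to find a good path, which is the order-of-magnitude reduction the abstract reports.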
Deep Convolutional Neural Network-Based Autonomous Drone Navigation
This paper presents a novel approach for aerial drone autonomous navigation
along predetermined paths using only visual input from an onboard camera and
without reliance on a Global Positioning System (GPS). It is based on using a
deep Convolutional Neural Network (CNN) combined with a regressor to output the
drone steering commands. Furthermore, multiple auxiliary navigation paths that
form a navigation envelope are used for data augmentation to make the system
adaptable to real-life deployment scenarios. The approach is suitable for
automating drone navigation in applications that involve regular trips or
visits to the same locations, such as environmental and desertification monitoring,
parcel/aid delivery, and drone-based wireless internet delivery. In such cases,
the proposed algorithm replaces human operators, enhances the accuracy of
GPS-based map navigation, alleviates problems related to GPS spoofing, and
enables navigation in GPS-denied environments. Our system is tested in two scenarios
using the Unreal Engine-based AirSim plugin for drone simulation, with
promising results: an average cross-track distance of less than 1.4 meters and
a mean minimum waypoint distance of less than 1 meter.
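The cross-track metric reported above can be computed as sketched below. The straight reference path and the flown positions are hypothetical; the abstract does not specify how the path is represented, so nearest-point distance against a densely sampled path is an assumption.

```python
import numpy as np

def mean_cross_track(positions, path):
    """Mean distance from each flown position to the nearest point on a
    densely sampled reference path -- the flavor of cross-track error the
    abstract reports (average below 1.4 m)."""
    d = np.linalg.norm(positions[:, None, :] - path[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Hypothetical straight reference path along the x-axis, and three fixes.
path = np.stack([np.linspace(0.0, 100.0, 1000), np.zeros(1000)], axis=1)
flown = np.array([[10.0, 0.5], [50.0, -1.0], [90.0, 0.0]])
err = mean_cross_track(flown, path)
```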
Deep Imitative Models for Flexible Inference, Planning, and Control
Imitation Learning (IL) is an appealing approach to learn desirable
autonomous behavior. However, directing IL to achieve arbitrary goals is
difficult. In contrast, planning-based algorithms use dynamics models and
reward functions to achieve goals. Yet, reward functions that evoke desirable
behavior are often difficult to specify. In this paper, we propose Imitative
Models to combine the benefits of IL and goal-directed planning. Imitative
Models are probabilistic predictive models of desirable behavior able to plan
interpretable expert-like trajectories to achieve specified goals. We derive
families of flexible goal objectives, including constrained goal regions,
unconstrained goal sets, and energy-based goals. We show that our method can
use these objectives to successfully direct behavior. Our method substantially
outperforms six IL approaches and a planning-based approach in a dynamic
simulated autonomous driving task, and is efficiently learned from expert
demonstrations without online data collection. We also show our approach is
robust to poorly specified goals, such as goals on the wrong side of the road.
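The scoring rule implied above, expert-likeness from the imitative model plus a goal likelihood, can be sketched as follows. The candidate set, model scores, and Gaussian goal term are illustrative assumptions; the paper optimizes over trajectories rather than picking from a fixed list.

```python
import numpy as np

def imitative_plan(candidates, log_q, goal, goal_sigma=1.0):
    """Pick argmax over trajectories s of [ log q(s) + log p(G | s) ]:
    log q(s) is the imitative model's score for expert-like behavior, and
    the goal term here is a Gaussian likelihood on the final state."""
    ends = candidates[:, -1, :]
    log_goal = -np.sum((ends - goal) ** 2, axis=1) / (2 * goal_sigma ** 2)
    return int(np.argmax(log_q + log_goal))

# Three hypothetical candidate trajectories; with equal model scores,
# the one ending nearest the goal wins.
cands = np.array([[[0, 0], [1, 0], [2, 0]],
                  [[0, 0], [1, 1], [2, 2]],
                  [[0, 0], [0, 1], [0, 2]]], dtype=float)
best = imitative_plan(cands, log_q=np.zeros(3), goal=np.array([2.0, 2.0]))
```

Keeping log q in the objective is what provides the robustness mentioned above: a goal on the wrong side of the road scores well on the goal term but poorly on expert-likeness, so the combined objective rejects it.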
An Efficient Reachability-Based Framework for Provably Safe Autonomous Navigation in Unknown Environments
Real-world autonomous vehicles often operate in a priori unknown
environments. Since most of these systems are safety-critical, it is important
to ensure they operate safely in the face of environment uncertainty, such as
unseen obstacles. Current safety analysis tools enable autonomous systems to
reason about safety given full information about the state of the environment a
priori. However, these tools do not scale well to scenarios where the
environment is being sensed in real time, such as during navigation tasks. In
this work, we propose a novel, real-time safety analysis method based on
Hamilton-Jacobi reachability that provides strong safety guarantees despite
environment uncertainty. Our safety method is planner-agnostic and provides
guarantees for a variety of mapping sensors. We demonstrate our approach in
simulation and in hardware to provide safety guarantees around a
state-of-the-art vision-based, learning-based planner.
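A planner-agnostic safety layer of the kind described above is often structured as a least-restrictive filter, sketched below. The 1-D value function and "brake" controller are toy stand-ins: computing the real Hamilton-Jacobi value function and its optimal safe control is the substance of the method and is not reproduced here.

```python
def safety_filter(x, planner_action, value_fn, safe_action_fn, eps=0.1):
    """Least-restrictive safety filter: follow the (possibly learned)
    planner while the reachability value function says the state is safely
    inside the safe set, and switch to the safe controller near its boundary."""
    if value_fn(x) > eps:
        return planner_action
    return safe_action_fn(x)

# Toy 1-D stand-in (hypothetical): keep x > 0; signed distance to the
# boundary plays the role of the Hamilton-Jacobi value function.
value = lambda x: x
brake = lambda x: 1.0  # safe controller pushes away from the boundary

a_far = safety_filter(2.00, -1.0, value, brake)   # far from boundary: planner acts
a_near = safety_filter(0.05, -1.0, value, brake)  # near boundary: override
```

Because the filter only inspects the current state and the value function, it wraps around any planner, which is what makes such guarantees planner-agnostic.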
Occupancy Map Prediction Using Generative and Fully Convolutional Networks for Vehicle Navigation
Fast, collision-free motion through unknown environments remains a
challenging problem for robotic systems. In these situations, the robot's
ability to reason about its future motion is often severely limited by sensor
field of view (FOV). By contrast, biological systems routinely make decisions
by taking into consideration what might exist beyond their FOV based on prior
experience. In this paper, we present an approach for predicting occupancy map
representations of sensor data for future robot motions using deep neural
networks. We evaluate several deep network architectures, including purely
generative and adversarial models. Testing in both simulated and real
environments, we demonstrate performance both qualitatively and quantitatively,
with an SSIM similarity measure of up to 0.899. We show that it is possible to
make predictions about occupied space beyond the robot's physical FOV from
simulated training data. In the future, this method will allow robots to
navigate through unknown environments in a faster, safer manner.
Comment: 7 pages
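The SSIM score used above compares predicted and ground-truth occupancy grids. A simplified single-window version (the standard metric uses a sliding window and specific constants; those details are omitted here) can be sketched as:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two occupancy grids with values in [0, 1]:
    a luminance term on the means times a contrast/structure term on the
    variances and covariance. Identical inputs score 1.0."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2))

# Hypothetical grids: a random map versus a lightly corrupted copy.
grid = np.random.default_rng(0).random((32, 32))
noisy = np.clip(grid + 0.05 * np.random.default_rng(1).normal(size=(32, 32)), 0, 1)
score = ssim_global(grid, noisy)   # high, but below the perfect score of 1.0
```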
Artificial Intelligence-Based Techniques for Emerging Robotics Communication: A Survey and Future Perspectives
This paper reviews the current development of artificial intelligence (AI)
techniques for the application area of robot communication. The study of the
control and operation of multiple robots collaboratively toward a common goal
is fast growing. Communication among members of a robot team and even including
humans is becoming essential in many real-world applications. The survey
focuses on AI techniques for robot communication that enhance the
communication capability of a multi-robot team, enabling more complex
activities, better-informed decisions, coordinated action, and more efficient
task performance.
Comment: 11 pages, 6 figures