Object-oriented Targets for Visual Navigation using Rich Semantic Representations
When searching for an object, humans navigate through a scene using semantic
information and spatial relationships. We look for an object using our
knowledge of its attributes and relationships with other objects to infer its
probable location. In this paper, we propose to tackle the visual navigation
problem using rich semantic representations of the observed scene and
object-oriented targets to train an agent. We show that both allow the agent
to generalize to new targets and unseen scenes in a short amount of training
time.
Comment: Presented at NIPS workshop (ViGIL)
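As a rough illustration of how such a target-conditioned agent might combine
the two signals, the following PyTorch sketch fuses a semantic encoding of the
observed scene with a learned embedding of the target object. All names, layer
sizes, and the action space are illustrative assumptions, not the authors'
architecture.

```python
# Hypothetical sketch: a navigation policy conditioned on a semantic scene
# representation and an object-oriented target. Sizes are assumptions.
import torch
import torch.nn as nn

class TargetConditionedPolicy(nn.Module):
    def __init__(self, num_classes=80, num_targets=20, num_actions=4, hidden=256):
        super().__init__()
        # Encode a semantic segmentation map (one channel per object class)
        # into a compact scene feature vector.
        self.scene_encoder = nn.Sequential(
            nn.Conv2d(num_classes, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Embed the target object as a learned vector, so a new target only
        # needs a new embedding rather than a retrained policy.
        self.target_embedding = nn.Embedding(num_targets, 64)
        self.policy_head = nn.Sequential(
            nn.Linear(64 + 64, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, semantic_map, target_id):
        scene = self.scene_encoder(semantic_map)   # (B, 64)
        target = self.target_embedding(target_id)  # (B, 64)
        # Action logits conditioned on both the scene and the target.
        return self.policy_head(torch.cat([scene, target], dim=1))

policy = TargetConditionedPolicy()
logits = policy(torch.randn(1, 80, 84, 84), torch.tensor([3]))
```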
Autonomous Navigation in Complex Environments with Deep Multimodal Fusion Network
Autonomous navigation in complex environments is a crucial task in
time-sensitive scenarios such as disaster response or search and rescue.
However, complex environments pose significant challenges for autonomous
platforms to navigate due to their constrained narrow passages, unstable
pathways with debris and obstacles, irregular geological structures, and poor
lighting conditions. In this work, we propose a multimodal fusion approach to
address the problem of autonomous navigation in complex environments such as
collapsed cities or natural caves. We first simulate the
complex environments in a physics-based simulation engine and collect a
large-scale dataset for training. We then propose a Navigation Multimodal
Fusion Network (NMFNet), which has three branches to effectively handle three
visual modalities: laser, RGB images, and point cloud data. Extensive
experimental results show that our NMFNet outperforms the recent state of the
art by a fair margin while achieving real-time performance. We further show that
the use of multiple modalities is essential for autonomous navigation in
complex environments. Finally, we successfully deploy our network to both
simulated and real mobile robots.
Comment: Accepted to IROS 202
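The sketch below shows one plausible way to structure such a three-branch
fusion network in PyTorch: separate encoders for 2D laser scans, RGB images,
and point clouds, fused into a control command. Branch layouts, feature sizes,
and the two-value output are assumptions for illustration, not NMFNet itself.

```python
# A minimal three-branch multimodal fusion sketch (assumed design, not the
# published NMFNet architecture).
import torch
import torch.nn as nn

class ThreeBranchFusionNet(nn.Module):
    def __init__(self, feat=128):
        super().__init__()
        # Laser branch: 1D convolutions over a 360-beam range scan.
        self.laser = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, stride=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, feat),
        )
        # RGB branch: a small 2D CNN over camera images.
        self.rgb = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat),
        )
        # Point-cloud branch: a PointNet-style shared MLP over N unordered
        # (x, y, z) points, pooled with a symmetric max.
        self.points = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat),
        )
        # Fusion head: concatenate branch features, regress a control command
        # (e.g., linear and angular velocity).
        self.head = nn.Sequential(
            nn.Linear(3 * feat, 256), nn.ReLU(), nn.Linear(256, 2),
        )

    def forward(self, laser, rgb, cloud):
        f_laser = self.laser(laser)                 # (B, feat)
        f_rgb = self.rgb(rgb)                       # (B, feat)
        f_cloud = self.points(cloud).max(dim=1)[0]  # (B, feat)
        return self.head(torch.cat([f_laser, f_rgb, f_cloud], dim=1))

net = ThreeBranchFusionNet()
cmd = net(torch.randn(1, 1, 360), torch.randn(1, 3, 128, 128),
          torch.randn(1, 1024, 3))
```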
Autonomous Navigation with Mobile Robots using Deep Learning and the Robot Operating System
Autonomous navigation is a long-standing field of robotics research; it
provides an essential capability for mobile robots to execute a series of
tasks in the same environments where humans perform their everyday
activities. In this chapter, we
present a set of algorithms to train and deploy deep networks for autonomous
navigation of mobile robots using the Robot Operating System (ROS). We describe
three main steps to tackle this problem: i) collecting data in simulation
environments using ROS and Gazebo; ii) designing a deep network for autonomous
navigation; and iii) deploying the learned policy on mobile robots in both
simulation and the real world. Theoretically, we present deep learning
architectures for robust navigation in normal environments (e.g., man-made
houses, roads) and complex environments (e.g., collapsed cities, or natural
caves). We further show that the use of visual modalities such as RGB, Lidar,
and point cloud is essential to improve the autonomy of mobile robots. Our
project website and demonstration video can be found at
https://sites.google.com/site/autonomousnavigationros.
Comment: 18 pages. arXiv admin note: substantial text overlap with arXiv:2007.1594
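Step iii) maps naturally onto a small ROS node. The following rospy sketch
subscribes to a camera topic, runs a trained policy, and publishes velocity
commands; the topic names, the policy file path, and the two-value output
convention are assumptions to adapt to a concrete robot.

```python
#!/usr/bin/env python
# Minimal sketch of deploying a learned navigation policy as a ROS node.
# Assumes rospy, cv_bridge, and a policy saved with torch.jit.save; topic
# names and "policy.pt" are hypothetical.
import rospy
import torch
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist

class PolicyNode:
    def __init__(self):
        self.bridge = CvBridge()
        self.policy = torch.jit.load("policy.pt").eval()
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/camera/rgb/image_raw", Image, self.on_image,
                         queue_size=1)

    def on_image(self, msg):
        # Convert the ROS image to a (1, 3, H, W) float tensor in [0, 1].
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="rgb8")
        x = torch.from_numpy(frame).float().permute(2, 0, 1).unsqueeze(0) / 255.0
        with torch.no_grad():
            linear, angular = self.policy(x)[0].tolist()
        # Publish the predicted command as a standard Twist message.
        cmd = Twist()
        cmd.linear.x = linear
        cmd.angular.z = angular
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("navigation_policy")
    PolicyNode()
    rospy.spin()
```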