Online Deep Learning for Improved Trajectory Tracking of Unmanned Aerial Vehicles Using Expert Knowledge
This work presents an online learning-based control method for improved
trajectory tracking of unmanned aerial vehicles using both deep learning and
expert knowledge. The proposed method does not require the exact model of the
system to be controlled, and it is robust against variations in system dynamics
as well as operational uncertainties. The learning is divided into two phases:
offline (pre-)training and online (post-)training. In the former, a
conventional controller performs a set of trajectories and, based on the
input-output dataset, the deep neural network (DNN)-based controller is
trained. In the latter, the trained DNN, which mimics the conventional
controller, controls the system. Unlike existing approaches in the literature,
the network continues to be trained online on sets of trajectories that were
not used during offline training. Thanks to the rule base, which contains
the expert knowledge, the proposed framework learns the system dynamics and
operational uncertainties in real time. The experimental results show that the
proposed online learning-based approach achieves better trajectory tracking
performance than a network trained only offline.
Comment: corrected version accepted for ICRA 201
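The two-phase scheme can be caricatured by shrinking the DNN to a linear policy (a toy sketch, not the paper's architecture; the PD gains, drift, and learning rate below are invented): the policy is first fit offline to a conventional controller's input-output data, then adapted online by gradient steps as the required gain drifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conventional controller: a PD law u = kp*e + kd*de.
kp, kd = 2.0, 0.5
X = rng.normal(size=(200, 2))          # features: [error, error rate]
y = kp * X[:, 0] + kd * X[:, 1]        # controller's recorded outputs

# Offline (pre-)training: fit the policy to the input-output dataset.
w = np.linalg.lstsq(X, y, rcond=None)[0]

# Online (post-)training: the required gain drifts (e.g. a payload change),
# so keep adapting the weights from streaming data with gradient steps.
kp_drifted, lr = 3.0, 0.05
for _ in range(500):
    x = rng.normal(size=2)
    target = kp_drifted * x[0] + kd * x[1]
    w -= lr * (w @ x - target) * x     # SGD step on the squared error
# w now tracks the drifted controller: w ~ [3.0, 0.5]
```

The offline fit alone would keep commanding the stale gain of 2.0; the online updates are what absorb the change in dynamics.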
DPC-Net: Deep Pose Correction for Visual Localization
We present a novel method to fuse the power of deep networks with the
computational efficiency of geometric and probabilistic localization
algorithms. In contrast to other methods that completely replace a classical
visual estimator with a deep network, we propose an approach that uses a
convolutional neural network to learn difficult-to-model corrections to the
estimator from ground-truth training data. To this end, we derive a novel loss
function for learning SE(3) corrections based on a matrix Lie groups approach,
with a natural formulation for balancing translation and rotation errors. We
use this loss to train a Deep Pose Correction network (DPC-Net) that predicts
corrections for a particular estimator, sensor and environment. Using the KITTI
odometry dataset, we demonstrate significant improvements to the accuracy of a
computationally-efficient sparse stereo visual odometry pipeline, rendering
it as accurate as a modern computationally-intensive dense estimator. Further,
we show how DPC-Net can be used to mitigate the effect of poorly calibrated
lens distortion parameters.
Comment: In IEEE Robotics and Automation Letters (RA-L) and presented at the
IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane,
Australia, May 21-25, 201
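The flavor of a loss built from matrix Lie groups can be illustrated in a few lines (a sketch only: the paper works on full SE(3) and learns the translation/rotation balancing, whereas the fixed weight `w_rot` here is an invented stand-in).

```python
import numpy as np

def so3_log(R):
    # Rotation-matrix logarithm: returns the axis-angle vector phi with
    # R = exp([phi]_x); valid away from the pi-rotation singularity.
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-8:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def pose_correction_loss(t_pred, R_pred, t_true, R_true, w_rot=10.0):
    # Squared translation error plus a weighted squared norm of the SO(3)
    # log of the relative rotation; measuring rotation error in the tangent
    # space is what makes the two terms commensurable.
    r_err = so3_log(R_pred.T @ R_true)
    return float(np.sum((t_pred - t_true) ** 2) + w_rot * np.dot(r_err, r_err))
```

For a predicted pose that is off by a small rotation of angle a about one axis, the rotation term reduces to `w_rot * a**2`, so the weight directly trades degrees of rotation against meters of translation.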
How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change
Direct visual localization has recently enjoyed a resurgence in popularity
with the increasing availability of cheap mobile computing power. The
competitive accuracy and robustness of these algorithms compared to
state-of-the-art feature-based methods, as well as their natural ability to
yield dense maps, make them an appealing choice for a variety of mobile
robotics applications. However, direct methods remain brittle in the face of
appearance change due to their underlying assumption of photometric
consistency, which is commonly violated in practice. In this paper, we propose
to mitigate this problem by training deep convolutional encoder-decoder models
to transform images of a scene such that they correspond to a previously-seen
canonical appearance. We validate our method in multiple environments and
illumination conditions using high-fidelity synthetic RGB-D datasets, and
integrate the trained models into a direct visual localization pipeline,
yielding improvements in visual odometry (VO) accuracy through time-varying
illumination conditions, as well as improved metric relocalization performance
under illumination change, where conventional methods normally fail. We further
provide a preliminary investigation of transfer learning from synthetic to real
environments in a localization context. An open-source implementation of our
method using PyTorch is available at https://github.com/utiasSTARS/cat-net.
Comment: In IEEE Robotics and Automation Letters (RA-L) and presented at the
IEEE International Conference on Robotics and Automation (ICRA'18), Brisbane,
Australia, May 21-25, 201
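The idea of restoring a canonical appearance can be caricatured by shrinking the encoder-decoder to a global gain and bias (purely illustrative; the synthetic `canonical`/`observed` arrays and the affine illumination model are assumptions, not the paper's data or network).

```python
import numpy as np

rng = np.random.default_rng(1)

# A scene's canonical appearance and the same scene under dimmer, offset
# illumination (synthetic 8x8 "images" with values in [0, 1]).
canonical = rng.uniform(0.2, 0.8, size=(8, 8))
observed = 0.5 * canonical + 0.1       # photometric consistency is violated

# The encoder-decoder reduced to two scalars: fit a global gain and bias
# that map the observed appearance back to the canonical one.
A = np.stack([observed.ravel(), np.ones(observed.size)], axis=1)
gain, bias = np.linalg.lstsq(A, canonical.ravel(), rcond=None)[0]
restored = gain * observed + bias      # recovers the canonical image
```

Once `restored` matches `canonical`, a direct method's photometric-consistency assumption holds again; the paper's contribution is learning a far richer, spatially varying version of this transformation.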
Multi-Robot Transfer Learning: A Dynamical System Perspective
Multi-robot transfer learning allows a robot to use data generated by a
second, similar robot to improve its own behavior. The potential advantages
include reduced training time and reduced exposure to the unavoidable risks of
the training phase. Transfer learning algorithms aim to find an optimal transfer
map between different robots. In this paper, we investigate, through a
theoretical study of single-input single-output (SISO) systems, the properties
of such optimal transfer maps. We first show that the optimal transfer learning
map is, in general, a dynamic system. The main contribution of the paper is to
provide an algorithm for determining the properties of this optimal dynamic map
including its order and regressors (i.e., the variables it depends on). The
proposed algorithm does not require detailed knowledge of the robots' dynamics,
but relies on basic system properties easily obtainable through simple
experimental tests. We validate the proposed algorithm experimentally through
an example of transfer learning between two different quadrotor platforms.
Experimental results show that an optimal dynamic map, with correct properties
obtained from our proposed algorithm, achieves a 60-70% reduction in transfer
learning error compared to cases where the data is transferred directly or via
an optimal static map.
Comment: 7 pages, 6 figures, accepted at the 2017 IEEE/RSJ International
Conference on Intelligent Robots and System
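The static-versus-dynamic distinction can be reproduced numerically with two first-order SISO systems (an illustrative sketch with invented poles 0.9 and 0.5, not the paper's algorithm for identifying the map's order and regressors).

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.normal(size=300)

# Two SISO "robots": stable first-order systems with different poles,
# driven by the same input sequence.
y1 = np.zeros(301)
y2 = np.zeros(301)
for k in range(300):
    y1[k + 1] = 0.9 * y1[k] + u[k]
    y2[k + 1] = 0.5 * y2[k] + u[k]

# Static transfer map: best scalar fit y2 ~ a * y1.
a = (y1[1:] @ y2[1:]) / (y1[1:] @ y1[1:])
static_err = np.mean((y2[1:] - a * y1[1:]) ** 2)

# Dynamic transfer map with past values of both signals as regressors:
# y2[k+1] ~ t0*y1[k+1] + t1*y1[k] + t2*y2[k]. Eliminating u[k] from the
# two recursions shows this form is exact with theta = (1, -0.9, 0.5).
Phi = np.stack([y1[1:], y1[:-1], y2[:-1]], axis=1)
theta = np.linalg.lstsq(Phi, y2[1:], rcond=None)[0]
dyn_err = np.mean((y2[1:] - Phi @ theta) ** 2)
```

No static scalar (or any memoryless function) can relate the two trajectories exactly, while a first-order dynamic map drives the residual to numerical zero, which is the paper's point that the optimal transfer map is in general a dynamic system.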
Knowledge Transfer Between Robots with Similar Dynamics for High-Accuracy Impromptu Trajectory Tracking
In this paper, we propose an online learning approach that enables the
inverse dynamics model learned for a source robot to be transferred to a target
robot (e.g., from one quadrotor to another quadrotor with different mass or
aerodynamic properties). The goal is to leverage knowledge from the source
robot such that the target robot achieves high-accuracy trajectory tracking on
arbitrary trajectories from the first attempt with minimal data recollection
and training. Most existing approaches for multi-robot knowledge transfer are
based on post-analysis of datasets collected from both robots. In this work, we
study the feasibility of impromptu transfer of models across robots by learning
an error prediction module online. In particular, we analytically derive the
form of the mapping to be learned by the online module for exact tracking,
propose an approach for characterizing similarity between robots, and use these
results to analyze the stability of the overall system. The proposed approach
is illustrated in simulation and verified experimentally on two different
quadrotors performing impromptu trajectory tracking tasks, where the quadrotors
are required to accurately track arbitrary hand-drawn trajectories from the
first attempt.
Comment: European Control Conference (ECC) 201
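The online error-prediction idea can be sketched with inverse dynamics reduced to a single mass gain (a toy model; `m_source`, `m_target`, and the linear residual are assumptions, not the paper's quadrotor models or its stability analysis).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy inverse dynamics of two quadrotor-like systems differing only in
# mass: the input needed for a desired acceleration is u = m * a_des.
m_source, m_target = 1.0, 1.3

def source_model(a):
    return m_source * a                # learned offline on the source robot

# Online error-prediction module: learn a residual gain w on the target
# robot so that source_model(a) + w * a supplies what the target needs,
# instead of recollecting data and retraining the full model.
w, lr = 0.0, 0.1
for _ in range(200):
    a = rng.normal()
    residual = m_target * a - source_model(a)
    w -= lr * (w * a - residual) * a   # SGD step on the residual error
# w converges to m_target - m_source
```

Because the online module only has to capture the (small) difference between similar robots, it needs far less data than learning the target's inverse dynamics from scratch.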