Online Deep Learning for Improved Trajectory Tracking of Unmanned Aerial Vehicles Using Expert Knowledge
This work presents an online learning-based control method for improved
trajectory tracking of unmanned aerial vehicles using both deep learning and
expert knowledge. The proposed method does not require the exact model of the
system to be controlled, and it is robust against variations in system dynamics
as well as operational uncertainties. The learning is divided into two phases:
offline (pre-)training and online (post-)training. In the former, a
conventional controller performs a set of trajectories and, based on the
input-output dataset, the deep neural network (DNN)-based controller is
trained. In the latter, the trained DNN, which mimics the conventional
controller, controls the system. Unlike existing approaches in the literature,
the network continues to be trained online on sets of trajectories that were
not used in the offline training phase of the DNN. Thanks to a rule base
encoding the expert knowledge, the proposed framework learns the system
dynamics and operational uncertainties in real time. The experimental results
show that the proposed online learning-based approach yields better trajectory
tracking performance than a network trained only offline.
Comment: corrected version accepted for ICRA 201
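As a rough illustration of the two-phase scheme described in the abstract, here is a minimal sketch of offline imitation pre-training followed by online fine-tuning. All names and details here (ControllerNet, the layer sizes, the `rule_base_ok` gate standing in for the expert-knowledge rule base) are hypothetical assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class ControllerNet(nn.Module):
    """Hypothetical DNN controller mapping a tracking state to actuator commands."""
    def __init__(self, state_dim=12, ctrl_dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, ctrl_dim),
        )

    def forward(self, x):
        return self.net(x)

def pretrain_offline(model, states, controls, epochs=100, lr=1e-3):
    """Phase 1: fit the DNN to input-output pairs logged while a
    conventional controller flew a set of training trajectories."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(states), controls)
        loss.backward()
        opt.step()
    return model

def online_update(model, opt, state, target_ctrl, rule_base_ok=True):
    """Phase 2: one gradient step on streaming data while the DNN is in
    control. `rule_base_ok` is a placeholder for the expert rule base,
    assumed here to decide whether the current sample is safe to learn from."""
    if not rule_base_ok:
        return  # rule base vetoes this update
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(state), target_ctrl)
    loss.backward()
    opt.step()
```

In this sketch the online phase simply continues supervised updates on trajectories unseen during pre-training, which is the distinction the abstract draws from purely offline-trained networks.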
How deep is deep enough? -- Quantifying class separability in the hidden layers of deep neural networks
Deep neural networks typically outperform more traditional machine learning
models in their ability to classify complex data, and yet it is not clear how the
individual hidden layers of a deep network contribute to the overall
classification performance. We thus introduce a Generalized Discrimination
Value (GDV) that measures, in a non-invasive manner, how well different data
classes separate in each given network layer. The GDV can be used for the
automatic tuning of hyper-parameters, such as the width profile and the total
depth of a network. Moreover, the layer-dependent GDV(L) provides new insights
into the data transformations that self-organize during training: In the case
of multi-layer perceptrons trained with error backpropagation, we find that
classification of highly complex data sets requires a temporal 'reduction'
of class separability, marked by a characteristic 'energy barrier' in the
initial part of the GDV(L) curve. Even more surprisingly, for a given data set,
the GDV(L) runs through a fixed 'master curve', independent of the
total number of network layers. Furthermore, applying the GDV to Deep Belief
Networks reveals that unsupervised training with the Contrastive Divergence
method can also systematically increase class separability over tens of
layers, even though the system does not 'know' the desired class labels. These
results indicate that the GDV may become a useful tool to open the black box of
deep learning.
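As a hedged sketch of how a layer-wise separability score of this kind could be computed: z-score the layer activations, then compare the mean intra-class to the mean inter-class Euclidean distance, with a more negative value indicating better separation. The scaling constants below follow a common convention and may differ from the paper's exact GDV definition:

```python
import numpy as np

def gdv_like(activations, labels):
    """Separability score for one layer: mean intra-class distance minus
    mean inter-class distance on z-scored activations. More negative means
    better-separated classes. This is an illustrative approximation, not
    necessarily the paper's exact GDV."""
    X = np.asarray(activations, dtype=float)            # (n_samples, n_units)
    labels = np.asarray(labels)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # z-score each unit
    X *= 0.5                                            # per-dimension scaling (assumed convention)
    classes = np.unique(labels)

    def mean_pairwise(A, B=None):
        if B is None:                                   # intra-class: distinct pairs only
            d = np.linalg.norm(A[:, None] - A[None, :], axis=-1)
            n = len(A)
            return d.sum() / (n * (n - 1)) if n > 1 else 0.0
        d = np.linalg.norm(A[:, None] - B[None, :], axis=-1)
        return d.mean()

    intra = np.mean([mean_pairwise(X[labels == c]) for c in classes])
    pairs = [(a, b) for i, a in enumerate(classes) for b in classes[i + 1:]]
    inter = np.mean([mean_pairwise(X[labels == a], X[labels == b]) for a, b in pairs])
    D = X.shape[1]
    return (intra - inter) / np.sqrt(D)                 # normalize by dimensionality
```

Applying such a function to the activations of each hidden layer in turn would yield a layer-dependent profile like the GDV(L) curve discussed in the abstract.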