Uncertainty Aware Learning from Demonstrations in Multiple Contexts using Bayesian Neural Networks
Diversity of environments is a key challenge that causes learned robotic
controllers to fail when evaluation conditions differ from the training
conditions. Training from demonstrations collected in various conditions can
mitigate---but not completely prevent---such failures. Learned controllers such
as neural networks typically lack a notion of uncertainty that would allow one
to diagnose a mismatch between training and testing conditions and potentially
intervene. In this work, we propose to use Bayesian Neural Networks, which have
such a notion of uncertainty. We show that uncertainty can be leveraged to
consistently detect situations in high-dimensional simulated and real robotic
domains in which the performance of the learned controller would be sub-par.
We also show that such an uncertainty-based approach supports an informed
decision about when to invoke a fallback strategy. One such strategy is to
request more data. We empirically show that providing data only when requested
results in increased data efficiency.
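As a rough illustration of the idea, the sketch below uses Monte Carlo dropout, one common practical approximation of a Bayesian Neural Network, to turn predictive variance into a fallback decision. The network architecture, the uncertainty threshold, and the request_demonstration fallback are hypothetical placeholders, not the paper's implementation.

```python
# Illustrative sketch only: MC dropout stands in here for the Bayesian Neural
# Network used in the paper; all names and thresholds are assumptions.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Small controller network with dropout kept active at test time."""
    def __init__(self, obs_dim, act_dim, hidden=128, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

def predict_with_uncertainty(model, obs, n_samples=30):
    """Run several stochastic forward passes; return mean action and the
    predictive standard deviation as a proxy for epistemic uncertainty."""
    model.train()  # keep dropout active during inference
    with torch.no_grad():
        samples = torch.stack([model(obs) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

def act_or_fallback(model, obs, uncertainty_threshold=0.2):
    """Execute the learned controller only when uncertainty is low;
    otherwise invoke a fallback such as requesting another demonstration."""
    action, sigma = predict_with_uncertainty(model, obs)
    if sigma.mean().item() > uncertainty_threshold:
        return None, "request_demonstration"  # fallback strategy
    return action, "execute"
```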
AJILE Movement Prediction: Multimodal Deep Learning for Natural Human Neural Recordings and Video
Developing useful interfaces between brains and machines is a grand challenge
of neuroengineering. An effective interface must not only interpret neural
signals but also predict the human's intention to perform an action in the
near future; prediction becomes even more challenging outside well-controlled
laboratory experiments. This paper describes our approach to detecting and
predicting natural human arm movements, a key challenge in brain-computer
interfacing that has never before been attempted.
We introduce the novel Annotated Joints in Long-term ECoG (AJILE) dataset;
AJILE includes automatically annotated poses of seven upper-body joints for four
human subjects over 670 total hours (more than 72 million frames), along with
the corresponding simultaneously acquired intracranial neural recordings. The
size and scope of AJILE greatly exceed those of all previous datasets combining
movement and electrocorticography (ECoG), making it possible to take a deep learning
approach to movement prediction. We propose a multimodal model that combines
deep convolutional neural networks (CNNs) with long short-term memory (LSTM)
blocks, leveraging both ECoG and video modalities. We demonstrate that our
models are able to detect movements and predict future movements up to 800 msec
before movement initiation. Further, our multimodal movement prediction models
exhibit resilience to simulated ablation of input neural signals. We believe a
multimodal approach to natural neural decoding that takes context into account
is critical in advancing bioelectronic technologies and human neuroscience
- …
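For intuition, the sketch below wires up a multimodal CNN + LSTM classifier in the spirit of the model described above: per-timestep CNN encoders for video frames and ECoG windows, feature fusion, and an LSTM over time. All layer sizes, the fusion scheme, and the input shapes are assumptions, not the authors' architecture.

```python
# Illustrative sketch only: a multimodal CNN + LSTM movement classifier.
# Shapes, channel counts, and the fusion scheme are assumptions.
import torch
import torch.nn as nn

class MultimodalMovementPredictor(nn.Module):
    def __init__(self, n_ecog_channels=64, lstm_hidden=128, n_classes=2):
        super().__init__()
        # Per-frame video encoder (small CNN over grayscale frames).
        self.video_cnn = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 1-D CNN over the ECoG channels within each time window.
        self.ecog_cnn = nn.Sequential(
            nn.Conv1d(n_ecog_channels, 64, 7, stride=2), nn.ReLU(),
            nn.Conv1d(64, 32, 7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # LSTM over the fused per-timestep features (32 video + 32 ECoG).
        self.lstm = nn.LSTM(input_size=64, hidden_size=lstm_hidden,
                            batch_first=True)
        self.head = nn.Linear(lstm_hidden, n_classes)  # e.g. move vs. rest

    def forward(self, video, ecog):
        # video: (batch, time, 1, H, W); ecog: (batch, time, channels, samples)
        b, t = video.shape[:2]
        v = self.video_cnn(video.flatten(0, 1)).view(b, t, -1)
        e = self.ecog_cnn(ecog.flatten(0, 1)).view(b, t, -1)
        fused, _ = self.lstm(torch.cat([v, e], dim=-1))
        return self.head(fused[:, -1])  # predict from the last timestep
```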