Cautious NMPC with Gaussian Process Dynamics for Autonomous Miniature Race Cars
This paper presents an adaptive high performance control method for
autonomous miniature race cars. Racing dynamics are notoriously hard to model
from first principles, which is addressed by means of a cautious nonlinear
model predictive control (NMPC) approach that learns to improve its dynamics
model from data and safely increases racing performance. The approach makes use
of a Gaussian Process (GP) and takes residual model uncertainty into account
through a chance constrained formulation. We present a sparse GP approximation
with dynamically adjusting inducing inputs, enabling a real-time implementable
controller. The formulation is demonstrated in simulations, which show
significant improvement with respect to both lap time and constraint
satisfaction compared to an NMPC without model learning.
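The core idea in this abstract can be illustrated with a toy sketch: a Gaussian Process learns the residual between a nominal model and the true dynamics, and the predictive standard deviation tightens a state constraint in a chance-constrained fashion. The dynamics, constraint, and quantile value below are illustrative assumptions, not the paper's actual race-car model; scikit-learn's `GaussianProcessRegressor` stands in for the paper's sparse GP approximation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Nominal (first-principles) model: x_next = a*x + b*u. The true dynamics
# contain an unmodeled residual, which the GP learns from observed data.
a, b = 0.9, 0.5
true_residual = lambda x, u: 0.1 * np.sin(3 * x)   # unknown to the controller

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 2))               # (state, input) samples
y = true_residual(X[:, 0], X[:, 1]) + 0.01 * rng.standard_normal(60)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(1e-4),
                              normalize_y=True).fit(X, y)

# Cautious one-step prediction: mean residual correction plus a
# chance-constraint back-off proportional to the predictive std.
z = np.array([[0.3, 0.2]])                         # query (state, input)
mu, sigma = gp.predict(z, return_std=True)
x_pred = a * z[0, 0] + b * z[0, 1] + mu[0]

x_max = 1.0                                        # state constraint x <= x_max
beta = 1.645                                       # ~95% one-sided Gaussian quantile
x_max_tightened = x_max - beta * sigma[0]          # back-off grows with uncertainty
```

Inside an NMPC loop, `x_max_tightened` would replace `x_max` at each prediction step, so the controller races harder only where the learned model is confident.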
Theoretical Interpretations and Applications of Radial Basis Function Networks
Medical applications have usually used Radial Basis Function Networks (RBFNs) simply as Artificial Neural Networks. However, RBFNs are knowledge-based networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, as well as a brief survey of dynamic learning algorithms. RBFNs' interpretations can suggest applications that are particularly interesting in medical domains.
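The shared structure behind all these interpretations is a layer of radial basis functions followed by a linear readout. A minimal sketch, assuming Gaussian bases on fixed centers and a least-squares fit for the output weights (one of the simplest RBFN training schemes, corresponding to the kernel-estimator view):

```python
import numpy as np

# Gaussian basis activations for inputs x against fixed hidden-unit centers.
def rbf_features(x, centers, width=0.5):
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))              # 1-D toy regression task
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

centers = np.linspace(-3, 3, 15)[:, None]          # evenly spaced centers
Phi = rbf_features(X, centers)                     # hidden-layer design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # linear output weights

y_hat = Phi @ w
mse = np.mean((y_hat - y) ** 2)                    # small: bases cover the input range
```

Swapping how the centers, widths, and weights are chosen is exactly what distinguishes the interpretations the survey covers: e.g. choosing one center per training point yields the instance-based/kernel-estimator view, while regularizing `w` yields the Regularization Network view.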
Challenging Neural Dialogue Models with Natural Data: Memory Networks Fail on Incremental Phenomena
Natural, spontaneous dialogue proceeds incrementally on a word-by-word basis;
and it contains many sorts of disfluency such as mid-utterance/sentence
hesitations, interruptions, and self-corrections. But training data for machine
learning approaches to dialogue processing is often either cleaned-up or wholly
synthetic in order to avoid such phenomena. The question then arises of how
well systems trained on such clean data generalise to real spontaneous
dialogue, or indeed whether they are trainable at all on naturally occurring
dialogue data. To answer this question, we created a new corpus called bAbI+ by
systematically adding natural spontaneous incremental dialogue phenomena such
as restarts and self-corrections to Facebook AI Research's bAbI dialogues
dataset. We then explore the performance of a state-of-the-art retrieval model,
MemN2N, on this more natural dataset. Results show that the semantic accuracy
of the MemN2N model drops drastically; and that although it is in principle
able to learn to process the constructions in bAbI+, it needs an impractical
amount of training data to do so. Finally, we go on to show that an
incremental, semantic parser -- DyLan -- shows 100% semantic accuracy on both
bAbI and bAbI+, highlighting the generalisation properties of linguistically
informed dialogue models. Comment: 9 pages, 3 figures, 2 tables. Accepted as a full paper for SemDial
201
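The corpus-construction step described above (systematically injecting incremental phenomena into clean dialogues) can be sketched as simple token-level transformations. The templates below are hypothetical illustrations of hesitations and self-corrections, not the actual rules used to build bAbI+:

```python
import random

# Insert a filled pause ("uhm") at a random mid-utterance position.
def add_hesitation(tokens, rng):
    i = rng.randrange(1, len(tokens))
    return tokens[:i] + ["uhm"] + tokens[i:]

# Self-correction: say a wrong word, then repair it mid-utterance.
def add_self_correction(tokens, rng, lexicon):
    i = rng.randrange(len(tokens))
    wrong = rng.choice(lexicon)
    return tokens[:i] + [wrong, "no", "sorry,", "I", "meant", tokens[i]] + tokens[i + 1:]

rng = random.Random(0)
utt = "i would like an italian restaurant".split()
noisy_1 = add_hesitation(utt, rng)
noisy_2 = add_self_correction(utt, rng, ["french", "indian"])
```

The paper's finding is that a retrieval model like MemN2N, trained on the clean utterances, degrades sharply on such transformed data, while an incremental semantic parser processes both word-by-word and is unaffected.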