Benchmarking Cerebellar Control
Cerebellar models have long been advocated as viable models
for robot dynamics control. Building on increasing insight into
the biological cerebellum, many models have been greatly refined,
and some computational models have emerged with properties that
are useful for robot dynamics control.
On the application side, however, the picture is very different.
Not only is there no robot on the market that uses anything
remotely connected with cerebellar control, but even in research
labs most testbeds for cerebellar models are restricted to toy
problems. Such applications hardly ever exceed the complexity of
a 2 DoF simulated robot arm, a task that is hardly representative
of the field of robotics and bears little relation to realistic
applications.
To bring the amalgamation of the two fields forward, we advocate
the use of a set of robotics benchmarks on which existing and new
computational cerebellar models can be comparatively tested.
It is clear that the traditional approach to robot dynamics control
loses ground as robotic structures grow more complex; there is a
need for adaptive methods that perform for such robots as
traditional control methods do for traditional robots.
In this paper we lay out the successes and problems in the fields
of cerebellar modelling and of robot dynamics control. By analyzing
the common ground, we suggest a set of benchmarks that may serve
as typical robot applications for cerebellar models.
Multi-Source Neural Variational Inference
Learning from multiple sources of information is an important problem in
machine-learning research. The key challenges are learning representations and
formulating inference methods that take into account the complementarity and
redundancy of various information sources. In this paper we formulate a
variational autoencoder based multi-source learning framework in which each
encoder is conditioned on a different information source. This allows us to
relate the sources via the shared latent variables by computing divergence
measures between individual sources' posterior approximations. We explore a
variety of options to learn these encoders and to integrate the beliefs they
compute into a consistent posterior approximation. We visualise learned beliefs
on a toy dataset and evaluate our methods for learning shared representations
and structured output prediction, showing trade-offs of learning separate
encoders for each information source. Furthermore, we demonstrate how conflict
detection and redundancy can increase robustness of inference in a multi-source
setting.
Comment: AAAI 2019 (Association for the Advancement of Artificial Intelligence)
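The mechanics described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes each source-conditioned encoder outputs a diagonal-Gaussian posterior over the shared latents, uses a KL divergence between two such beliefs as one possible "divergence measure" for conflict detection, and integrates the beliefs with a precision-weighted product of Gaussians (one common way to form a consistent posterior approximation). All variable names are hypothetical.

```python
import numpy as np

def kl_diag_gauss(mu1, var1, mu2, var2):
    """KL( N(mu1, var1) || N(mu2, var2) ) for diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var2 / var1)
                        + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def product_of_gaussians(mus, vars_):
    """Precision-weighted fusion of diagonal-Gaussian beliefs
    into a single consistent posterior approximation."""
    precisions = [1.0 / v for v in vars_]
    var = 1.0 / sum(precisions)
    mu = var * sum(p * m for p, m in zip(precisions, mus))
    return mu, var

# Hypothetical posterior parameters produced by two encoders,
# each conditioned on a different information source.
mu_a, var_a = np.array([0.0, 1.0]), np.array([1.0, 0.5])
mu_b, var_b = np.array([0.2, 0.8]), np.array([0.5, 0.5])

# A large divergence between the two beliefs signals conflicting sources.
conflict = kl_diag_gauss(mu_a, var_a, mu_b, var_b)

# Fuse the beliefs: more precise (lower-variance) sources get more weight.
mu_z, var_z = product_of_gaussians([mu_a, mu_b], [var_a, var_b])
print(conflict, mu_z, var_z)
```

Note that the fused variance is always smaller than either source's variance, which is the precision-weighting view of redundancy: agreeing sources sharpen the shared posterior, while the KL term flags when they disagree.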