An information-theoretic on-line update principle for perception-action coupling
Inspired by findings of sensorimotor coupling in humans and animals, there
has recently been a growing interest in the interaction between action and
perception in robotic systems [Bogh et al., 2016]. Here we consider perception
and action as two serial information channels with limited
information-processing capacity. We follow [Genewein et al., 2015] and
formulate a constrained optimization problem that maximizes utility under
limited information-processing capacity in the two channels. As a solution we
obtain an optimal perceptual channel and an optimal action channel that are
coupled such that perceptual information is optimized with respect to
downstream processing in the action module. The main novelty of this study is
that we propose an online optimization procedure to find bounded-optimal
perception and action channels in parameterized serial perception-action
systems. In particular, we implement the perceptual channel as a multi-layer
neural network and the action channel as a multinomial distribution. We
illustrate our method in a NAO robot simulator with a simplified cup lifting
task.
Comment: 8 pages, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
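The coupled optimization of the two channels can be illustrated in a discrete toy setting with Blahut-Arimoto-style alternating updates, following the free-energy formulation of Genewein et al. (2015). This is only a sketch: the paper's contribution is an online procedure for parameterized channels (a neural-network percept channel), whereas here both channels are tabular and the function name is illustrative.

```python
import numpy as np

def serial_channel_update(U, p_w, beta1, beta2, n_x, iters=200, seed=0):
    """Alternating updates for a serial perception channel p(x|w) and
    action channel p(a|x) that jointly maximize
        E[U] - I(W;X)/beta1 - I(X;A)/beta2
    over discrete distributions (Blahut-Arimoto style)."""
    rng = np.random.default_rng(seed)
    n_w, n_a = U.shape
    eps = 1e-12
    p_x_w = rng.dirichlet(np.ones(n_x), size=n_w)   # perception channel p(x|w)
    p_a_x = rng.dirichlet(np.ones(n_a), size=n_x)   # action channel p(a|x)
    for _ in range(iters):
        p_x = p_w @ p_x_w + eps                     # percept marginal
        p_wx = p_x_w * p_w[:, None] / p_x[None, :]  # Bayes posterior p(w|x)
        p_a = p_x @ p_a_x + eps                     # action marginal
        # action stage: p(a|x) proportional to p(a) exp(beta2 * E_{w|x}[U(w,a)])
        log_a = np.log(p_a)[None, :] + beta2 * (p_wx.T @ U)
        p_a_x = np.exp(log_a - log_a.max(axis=1, keepdims=True))
        p_a_x /= p_a_x.sum(axis=1, keepdims=True)
        # value of percept x in world w: expected utility minus action-channel cost
        kl = (p_a_x * np.log(p_a_x / p_a[None, :] + eps)).sum(axis=1, keepdims=True)
        F = p_a_x @ U.T - kl / beta2                # shape (n_x, n_w)
        # perception stage: p(x|w) proportional to p(x) exp(beta1 * F(x,w))
        log_x = np.log(p_x)[None, :] + beta1 * F.T
        p_x_w = np.exp(log_x - log_x.max(axis=1, keepdims=True))
        p_x_w /= p_x_w.sum(axis=1, keepdims=True)
    return p_x_w, p_a_x

# toy task: the action should match the latent world state
U = np.eye(4)
p_w = np.full(4, 0.25)
p_x_w, p_a_x = serial_channel_update(U, p_w, beta1=10.0, beta2=10.0, n_x=4)
```

With large inverse temperatures beta1 and beta2 the channels become nearly deterministic; lowering them trades expected utility for reduced information processing in each stage, which is exactly the coupling the abstract describes.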
A Tutorial on Sparse Gaussian Processes and Variational Inference
Gaussian processes (GPs) provide a framework for Bayesian inference that can
offer principled uncertainty estimates for a large range of problems. For
example, if we consider regression problems with Gaussian likelihoods, a GP
model enjoys a posterior in closed form. However, identifying the posterior GP
scales cubically with the number of training examples and requires storing all
examples in memory. In order to overcome these obstacles, sparse GPs have been
proposed that approximate the true posterior GP with pseudo-training examples.
Importantly, the number of pseudo-training examples is user-defined and enables
control over computational and memory complexity. In the general case, sparse
GPs do not enjoy closed-form solutions and one has to resort to approximate
inference. In this context, a convenient choice for approximate inference is
variational inference (VI), where the problem of Bayesian inference is cast as
an optimization problem -- namely, to maximize a lower bound of the log
marginal likelihood. This paves the way for a powerful and versatile framework,
where pseudo-training examples are treated as optimization arguments of the
approximate posterior that are jointly identified together with hyperparameters
of the generative model (i.e. prior and likelihood). The framework can
naturally handle a wide scope of supervised learning problems, ranging from
regression with heteroscedastic and non-Gaussian likelihoods to classification
problems with discrete labels, as well as multilabel problems. The purpose of
this tutorial is to provide access to the basic matter for readers without
prior knowledge of either GPs or VI. A proper exposition of the subject also
enables access to more recent advances (such as importance-weighted VI as well
as interdomain, multioutput and deep GPs) that can serve as inspiration for new
research ideas.
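One concrete instance of the framework described above is the collapsed variational lower bound for regression with a Gaussian likelihood (Titsias, 2009), which many sparse-GP treatments use as a starting point. The sketch below (a toy with fixed RBF kernel hyperparameters, NumPy only) computes both the exact log marginal likelihood and the sparse bound; when the pseudo-inputs Z coincide with the training inputs the bound is tight.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel; hyperparameters held fixed for simplicity."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def exact_lml(X, y, noise=0.1):
    """Exact log marginal likelihood: O(n^3) time, O(n^2) memory."""
    n = len(X)
    L = np.linalg.cholesky(rbf(X, X) + noise * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

def sparse_lower_bound(X, y, Z, noise=0.1, jitter=1e-8):
    """Collapsed variational bound with m = len(Z) pseudo-inputs:
        log N(y | 0, Qnn + noise*I) - tr(Knn - Qnn) / (2*noise),
    where Qnn = Knm Kmm^{-1} Kmn (the Nystrom approximation of Knn)."""
    n, m = len(X), len(Z)
    Kmm = rbf(Z, Z) + jitter * np.eye(m)
    Kmn = rbf(Z, X)
    Qnn = Kmn.T @ np.linalg.solve(Kmm, Kmn)
    L = np.linalg.cholesky(Qnn + noise * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    log_like = -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)
    trace_term = np.trace(rbf(X, X)) - np.trace(Qnn)
    return log_like - 0.5 * trace_term / noise
```

In practice the pseudo-inputs Z (and the kernel hyperparameters) are the optimization arguments the abstract mentions: maximizing this bound with respect to them jointly identifies the approximate posterior and the generative model.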
Uncertainty in Neural Networks: Approximately Bayesian Ensembling
Understanding the uncertainty of a neural network's (NN) predictions is
essential for many purposes. The Bayesian framework provides a principled
approach to this; however, applying it to NNs is challenging due to the large
numbers of parameters and data. Ensembling NNs provides an easily
implementable, scalable method for uncertainty quantification, but it has
been criticised for not being Bayesian. This work proposes one modification to
the usual process that, we argue, does result in approximate Bayesian inference:
regularising parameters about values drawn from a distribution which can be set
equal to the prior. A theoretical analysis of the procedure in a simplified
setting suggests the recovered posterior is centred correctly but tends to have
an underestimated marginal variance, and overestimated correlation. However,
two conditions can lead to exact recovery. We argue that these conditions are
partially present in NNs. Empirical evaluations demonstrate it has an advantage
over standard ensembling, and is competitive with variational methods.
Funding: the lead author was funded through EPSRC (EP/N509620/1) and partially accommodated by the Alan Turing Institute.
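The core modification, regularising each ensemble member towards an independent draw from the prior rather than towards zero, can be sketched for Bayesian linear regression, where the exact posterior is available for comparison. This is a toy illustration under assumed Gaussian priors and known noise variance, not the paper's NN experiments.

```python
import numpy as np

def anchored_fit(X, y, anchor, noise_var, prior_var=1.0):
    """One ensemble member: least squares regularised towards `anchor`
    (a draw from the prior) instead of towards zero."""
    d = X.shape[1]
    A = X.T @ X / noise_var + np.eye(d) / prior_var
    b = X.T @ y / noise_var + anchor / prior_var
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.normal(size=50)
noise_var = 0.09                                    # assumed known noise level
anchors = rng.normal(size=(500, 3))                 # draws from the N(0, I) prior
ensemble = np.stack([anchored_fit(X, y, a, noise_var) for a in anchors])

# exact Bayesian posterior mean for comparison
A = X.T @ X / noise_var + np.eye(3)
posterior_mean = np.linalg.solve(A, X.T @ y / noise_var)
```

In this linear-Gaussian toy the ensemble mean matches the exact posterior mean in expectation, while the ensemble spread comes out narrower than the true posterior covariance, consistent with the abstract's observation that the recovered posterior is centred correctly but tends to underestimate marginal variance.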
Multiplexed broadband beam steering system utilizing high speed MEMS mirrors
We present a beam steering system based on micro-electromechanical systems
technology that features high speed steering of multiple laser beams over a
broad wavelength range. By utilizing high speed micromirrors with a broadband
metallic coating, our system has the flexibility to simultaneously incorporate
a wide range of wavelengths and multiple beams. We demonstrate reconfiguration
of two independent beams at different wavelengths (780 and 635 nm) across a
common 5x5 array with 4 µs settling time. Full simulation of the optical system
provides insights on the scalability of the system. Such a system can provide a
versatile tool for applications where fast laser multiplexing is necessary.
Comment: 11 pages, 6 figures, submitted