Regression with Linear Factored Functions
Many applications that use empirically estimated functions face a curse of
dimensionality, because the integrals over most function classes must be
approximated by sampling. This paper introduces a novel regression algorithm
that learns linear factored functions (LFF). This class of functions has
structural properties that allow certain integrals to be solved analytically
and point-wise products to be computed in closed form. Applications like
belief propagation and reinforcement learning can exploit these properties to
break the curse and speed up computation. We derive a regularized greedy
optimization scheme that learns factored basis functions during training. The
novel regression algorithm performs competitively with Gaussian processes on
benchmark tasks, and the learned LFF models are very compact, with 4-9
factored basis functions on average.
Comment: Under review as conference paper at ECML/PKDD 201
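The key structural property the abstract describes can be sketched in a few lines: because each basis function factorizes over input dimensions, a D-dimensional integral collapses into a product of 1-D integrals. This is a minimal illustration, not the paper's algorithm; the quadratic per-dimension factors and all coefficients are hypothetical choices for demonstration.

```python
import numpy as np

# A linear factored function (LFF) is a weighted sum of factored basis
# functions, f(x) = sum_k w_k * prod_d phi_kd(x_d); the per-dimension
# factors phi_kd used here are hypothetical quadratics on [0, 1].
rng = np.random.default_rng(0)
K, D = 3, 4                     # number of basis functions, input dimensions
w = rng.normal(size=K)          # linear weights
c = rng.uniform(size=(K, D))    # factor coefficients: phi_kd(t) = 1 + c_kd * t**2

def f(X):
    """Evaluate the LFF on a batch of points X with shape (N, D)."""
    return sum(w[k] * np.prod(1.0 + c[k] * X**2, axis=1) for k in range(K))

# The factored structure turns the D-dimensional integral over [0,1]^D into a
# product of 1-D integrals: int_0^1 (1 + c t^2) dt = 1 + c/3.
analytic = sum(w[k] * np.prod(1.0 + c[k] / 3.0) for k in range(K))

# Monte Carlo estimate of the same integral, which would be the only option
# for a function class without this structure.
mc = f(rng.uniform(size=(200_000, D))).mean()
```

The analytic integral costs O(K*D) operations regardless of dimension, which is the property belief propagation and reinforcement learning can exploit.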
Factored expectation propagation for input-output FHMM models in systems biology
We consider the problem of jointly modelling metabolic signals and gene
expression in systems biology applications. We propose an approach based on
input-output factorial hidden Markov models together with a structured
variational inference scheme to infer the structure and states of the model.
We start from the classical free-form structured variational mean-field
approach and use expectation propagation to approximate the expectations
needed in the variational loop. We show that this corresponds to a factored
expectation-constrained approximate inference scheme. We validate our model
through extensive simulations and demonstrate its applicability on a
real-world bacterial data set.
Scalable approximate inference methods for Bayesian deep learning
This thesis proposes multiple methods for approximate inference in deep Bayesian neural networks split across three parts.
The first part develops a scalable Laplace approximation based on a block-diagonal Kronecker factored approximation of the Hessian. This approximation accounts for parameter correlations, overcoming the overly restrictive independence assumption of diagonal methods, while avoiding the quadratic scaling in the number of parameters of the full Laplace approximation. The chapter further extends the method to online learning, where datasets are observed one at a time. As the experiments demonstrate, modelling correlations between the parameters leads to improved performance over the diagonal approximation in uncertainty estimation and continual learning; in the latter setting in particular, the improvements can be substantial.
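The computational benefit of the Kronecker factored curvature can be illustrated with a toy layer. The sketch below assumes the common Kronecker-factored form for a linear layer, where the curvature factorizes into a factor A computed from the layer's inputs and a factor G from the backpropagated output gradients; the data and layer sizes are made up for illustration. The key identity is that the Kronecker product inverts factor-wise, so the Laplace posterior covariance never requires inverting the full parameter-by-parameter matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical small layer: 3 inputs, 2 outputs -> 6 weights.
n_in, n_out = 3, 2

# Kronecker factors of the layer's curvature: A from the layer inputs,
# G from the backpropagated output gradients (damped to be positive definite).
X = rng.normal(size=(100, n_in))
Dy = rng.normal(size=(100, n_out))
A = X.T @ X / 100 + 0.1 * np.eye(n_in)
G = Dy.T @ Dy / 100 + 0.1 * np.eye(n_out)

# Full curvature over the 6 weights in its Kronecker-factored form.
H = np.kron(A, G)

# Factor-wise inversion: (A kron G)^-1 = A^-1 kron G^-1. Inverting two small
# matrices replaces one P x P inverse, where P = n_in * n_out can be millions.
Sigma = np.kron(np.linalg.inv(A), np.linalg.inv(G))
```

For a layer with a million weights, the full inverse is infeasible while the two factor inverses stay cheap, which is the scaling argument the paragraph makes.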
The second part explores two parameter-efficient approaches for variational inference in neural networks: one based on factorised binary distributions over the weights, the other extending ideas from sparse Gaussian processes to neural network weight matrices. The former encounters similar underfitting issues as mean-field Gaussian approaches, which can be alleviated by a MAP-style method in a hierarchical model. The latter, based on an extension of Matheron's rule to matrix normal distributions, achieves uncertainty estimation performance comparable to ensembles with the accuracy of a deterministic network, while using only 25% of the parameters of a single ResNet-50.
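The classical Matheron's rule that the second part extends can be shown in its simplest bivariate form: a sample from a Gaussian conditional is obtained by drawing from the joint prior and applying a deterministic shift, with no conditional density ever computed. The covariance values below are arbitrary illustrative numbers; the thesis's contribution is the extension of this rule to matrix normal distributions over weight matrices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Joint zero-mean Gaussian over (x, y) with covariance S.
S = np.array([[2.0, 0.8],
              [0.8, 1.0]])
L = np.linalg.cholesky(S)
y0 = 1.5                                       # value we condition on

# Matheron's rule: draw (xb, yb) from the joint prior, then shift:
# a sample of x | y = y0 is  xb + S_xy / S_yy * (y0 - yb).
prior = (L @ rng.normal(size=(2, 100_000))).T  # rows are joint samples (x, y)
xb, yb = prior[:, 0], prior[:, 1]
x_cond = xb + S[0, 1] / S[1, 1] * (y0 - yb)

# Analytic conditional moments for comparison.
mean_true = S[0, 1] / S[1, 1] * y0             # conditional mean
var_true = S[0, 0] - S[0, 1] ** 2 / S[1, 1]    # conditional variance
```

Because the shift is a cheap linear map of prior samples, the same trick scales to structured distributions where direct conditional sampling would be expensive.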
The third part introduces TyXe, a probabilistic programming library built on top of Pyro to facilitate turning PyTorch neural networks into Bayesian ones. In contrast to existing frameworks, TyXe avoids introducing a layer abstraction, allowing it to support arbitrary architectures. This is demonstrated in a range of applications, from image classification with torchvision ResNets and node labelling with DGL graph neural networks to incorporating uncertainty into neural radiance fields with PyTorch3d.
Learning to Represent Haptic Feedback for Partially-Observable Tasks
The sense of touch, the earliest sensory system to develop in the human
body [1], plays a critical part in our daily interaction with the environment.
Many manipulation interactions require incorporating haptic feedback in order
to successfully complete a task. However, manually designing a feedback
mechanism can be extremely challenging. In this work, we consider manipulation
tasks that need to incorporate tactile sensor feedback in order to modify a
provided nominal plan. To handle partial observability, we present a new
framework that models the task as a partially observable Markov decision
process (POMDP) and learns a representation of haptic feedback that can serve
as the state of the POMDP. The model, which is parametrized by deep recurrent
neural networks, uses variational Bayes methods to optimize the approximate
posterior. Finally, we build on deep Q-learning to select the optimal action
in each state without access to a simulator. We test our model on a PR2 robot
on multiple instances of the task of turning a knob until it clicks.
Comment: IEEE International Conference on Robotics and Automation (ICRA), 201
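The final step in the abstract, selecting actions by Q-learning over the learned state, rests on the standard temporal-difference update. The sketch below is a tabular toy, not the paper's deep, recurrent model: the chain MDP stands in for the learned haptic state representation, and its states, actions, and reward ("the knob clicked") are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy chain MDP: states 0..3, actions {0: stay, 1: advance}; reward 1 only on
# reaching the terminal state 3 (e.g. "the knob clicked"). The paper learns
# the state with a recurrent model and uses a deep Q-network; this tabular
# version shows only the Q-learning core.
n_states, n_actions, gamma, alpha = 4, 2, 0.9, 0.5
Q = np.zeros((n_states, n_actions))

for _ in range(2000):
    s = 0
    while s != 3:
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < 0.2 else int(Q[s].argmax())
        s_next = min(s + 1, 3) if a == 1 else s
        r = 1.0 if s_next == 3 else 0.0
        # Temporal-difference update toward r + gamma * max_a' Q(s', a').
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

policy = Q.argmax(axis=1)  # greedy policy after learning
```

At convergence the greedy policy advances in every non-terminal state, since staying only discounts the same future return by gamma.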