Orthogonally Decoupled Variational Gaussian Processes
Gaussian processes (GPs) provide a powerful non-parametric framework for
reasoning over functions. Despite this appealing theory, their superlinear
computational and memory complexities have presented a long-standing challenge.
State-of-the-art sparse variational inference methods trade modeling accuracy
against complexity. However, the complexities of these methods still scale
superlinearly in the number of basis functions, implying that sparse GP
methods are able to learn from large datasets only when a small model is used.
Recently, a decoupled approach was proposed that removes the unnecessary
coupling between the complexities of modeling the mean and the covariance
functions of a GP. It achieves a linear complexity in the number of mean
parameters, so an expressive posterior mean function can be modeled. While
promising, this approach suffers from optimization difficulties due to
ill-conditioning and non-convexity. In this work, we propose an alternative
decoupled parametrization. It adopts an orthogonal basis in the mean function
to model the residuals that cannot be learned by the standard coupled approach.
Therefore, our method extends, rather than replaces, the coupled approach to
achieve strictly better performance. This construction admits a straightforward
natural gradient update rule, so the structure of the information manifold that
is lost during decoupling can be leveraged to speed up learning. Empirically,
our algorithm demonstrates significantly faster convergence in multiple
experiments.
Comment: Appearing in NIPS 2018
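As a rough illustration of the parametrization (not the authors' code), the posterior mean below combines the standard coupled basis with an extra mean-only basis that is first orthogonalized against it, so setting the extra weights to zero recovers the coupled model. A minimal NumPy sketch of the predictive equations, assuming an RBF kernel; all names and shapes are illustrative:

```python
import numpy as np

def rbf(X, Y, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between row-sets X (n, d) and Y (m, d).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def decoupled_predict(Xs, Z_beta, Z_gamma, a, b, S, jitter=1e-6):
    # Z_beta : inducing inputs shared by mean and covariance   (M_b, d)
    # Z_gamma: extra mean-only inducing inputs                 (M_g, d)
    # a, b   : mean weights on the gamma and beta bases
    # S      : variational covariance over the beta basis      (M_b, M_b)
    Kbb = rbf(Z_beta, Z_beta) + jitter * np.eye(len(Z_beta))
    Ksb = rbf(Xs, Z_beta)
    Ksg = rbf(Xs, Z_gamma)
    Kbg = rbf(Z_beta, Z_gamma)
    # Orthogonalized gamma features: the component of k_gamma(x)
    # lying outside the span of the beta basis.
    Phi = Ksg - Ksb @ np.linalg.solve(Kbb, Kbg)
    mean = Phi @ a + Ksb @ b
    # The covariance keeps the standard coupled (SVGP) form on the beta basis.
    B = np.linalg.solve(Kbb, Ksb.T)                      # (M_b, n)
    var = (rbf(Xs, Xs).diagonal()
           - np.einsum("ij,ji->i", Ksb, B)
           + np.einsum("ji,jk,ki->i", B, S, B))
    return mean, var

# Toy usage: 1-D inputs, random variational parameters.
rng = np.random.default_rng(0)
Xs = np.linspace(-3, 3, 50)[:, None]
Zb, Zg = rng.uniform(-3, 3, (5, 1)), rng.uniform(-3, 3, (20, 1))
L = rng.normal(size=(5, 5))
m, v = decoupled_predict(Xs, Zb, Zg, rng.normal(size=20),
                         rng.normal(size=5), L @ L.T)
```

Note how the mean-only basis Z_gamma can be made much larger than the covariance basis Z_beta: the expensive solves involve only the small Kbb, which is the linear-in-mean-parameters property the abstract describes.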
Meta Reinforcement Learning with Latent Variable Gaussian Processes
Learning from small data sets is critical in many practical applications
where data collection is time consuming or expensive, e.g., robotics, animal
experiments or drug design. Meta learning is one way to increase the data
efficiency of learning algorithms by generalizing learned concepts from a set
of training tasks to unseen, but related, tasks. Often, this relationship
between tasks is hard-coded or otherwise relies on human expertise. In
this paper, we frame meta learning as a hierarchical latent variable model and
infer the relationship between tasks automatically from data. We apply our
framework in a model-based reinforcement learning setting and show that our
meta-learning model effectively generalizes to novel tasks by identifying how
new tasks relate to prior ones from minimal data. This results in up to a 60%
reduction in the average interaction time needed to solve tasks compared to
strong baselines.
Comment: 11 pages, 7 figures
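One way to read the hierarchical model: each task p carries a latent variable h_p that is appended to the GP's state-action inputs, so a new task is handled by inferring its h_p from a few of its own transitions rather than by hand-coding task relationships. A toy NumPy sketch of the conditioning step, assuming an RBF kernel; names and shapes are illustrative, not the paper's implementation:

```python
import numpy as np

def rbf(X, Y, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between row-sets X (n, d) and Y (m, d).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def augment(X, task_ids, H):
    # Append each task's latent vector h_p to its (state, action) rows.
    return np.concatenate([X, H[task_ids]], axis=1)

def dynamics_posterior_mean(Xtr, ytr, task_tr, Xte, task_te, H, noise=1e-2):
    # GP posterior mean over next-state targets, with inputs augmented
    # by per-task latent variables (the hierarchical latent-variable idea).
    A = augment(Xtr, task_tr, H)
    B = augment(Xte, task_te, H)
    K = rbf(A, A) + noise * np.eye(len(A))
    return rbf(B, A) @ np.linalg.solve(K, ytr)

# Toy usage: two training tasks, one unseen query task whose latent h
# would in practice be inferred from a handful of its own transitions.
rng = np.random.default_rng(0)
Xtr = rng.normal(size=(40, 3))            # (state, action) features
ytr = np.sin(Xtr).sum(axis=1)             # fake next-state targets
task_tr = np.repeat([0, 1], 20)           # task index per training row
H = rng.normal(size=(3, 2))               # latent h_p for tasks 0, 1, 2
mu = dynamics_posterior_mean(Xtr, ytr, task_tr,
                             rng.normal(size=(5, 3)), np.full(5, 2), H)
```

Because all tasks share one GP and differ only through h_p, data from the training tasks directly informs predictions on the new task once its latent is pinned down, which is where the reported sample-efficiency gain comes from.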