Multi-view Regularized Gaussian Processes
Gaussian processes (GPs) have been proven to be powerful tools in various
areas of machine learning. However, there are very few applications of GPs in
the scenario of multi-view learning. In this paper, we present a new GP model
for multi-view learning. Unlike existing methods, it combines multiple views by
regularizing marginal likelihood with the consistency among the posterior
distributions of latent functions from different views. Moreover, we give a
general point selection scheme for multi-view learning and improve the proposed
model by this criterion. Experimental results on multiple real-world data sets
verify the effectiveness of the proposed model and demonstrate the performance
improvement gained by employing this novel point selection scheme.
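The abstract's core idea, combining per-view marginal likelihoods with a penalty on disagreement between the views' posteriors, can be sketched as follows. This is a minimal illustration, not the paper's exact model: the kernel, the use of posterior means at training inputs, and the squared-difference form of the consistency regularizer are assumptions made for concreteness.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # Squared-exponential kernel between the rows of X1 and X2.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior_mean(X, y, noise=0.1, lengthscale=1.0):
    # Posterior mean of a GP regressor, evaluated at the training inputs.
    K = rbf_kernel(X, X, lengthscale)
    alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)
    return K @ alpha

def neg_log_marginal_likelihood(X, y, noise=0.1, lengthscale=1.0):
    # Standard GP-regression negative log marginal likelihood.
    Ky = rbf_kernel(X, X, lengthscale) + noise * np.eye(len(X))
    L = np.linalg.cholesky(Ky)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() \
        + 0.5 * len(y) * np.log(2 * np.pi)

def multiview_objective(views, y, lam=1.0):
    # Sum of per-view NLMLs plus a (hypothetical) consistency penalty
    # on pairwise disagreement between the views' posterior means.
    nll = sum(neg_log_marginal_likelihood(X, y) for X in views)
    means = [gp_posterior_mean(X, y) for X in views]
    consistency = sum(
        np.sum((means[i] - means[j]) ** 2)
        for i in range(len(means)) for j in range(i + 1, len(means))
    )
    return nll + lam * consistency
```

Minimizing this objective over kernel hyperparameters would trade data fit within each view against agreement across views, which is the regularization idea the abstract describes.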
Expert Gate: Lifelong Learning with a Network of Experts
In this paper we introduce a model of lifelong learning based on a Network
of Experts. New tasks/experts are learned and added to the model
sequentially, building on what was learned before. To ensure scalability of
this process, data from previous tasks cannot be stored and hence is not
available when learning a new task. A critical issue in such a context, not
addressed in the literature so far, is the decision of which expert to
deploy at test time. We introduce a set of gating autoencoders that learn a
representation for the task at hand, and, at test time, automatically forward
the test sample to the relevant expert. This also brings memory efficiency as
only one expert network has to be loaded into memory at any given time.
Further, the autoencoders inherently capture the relatedness of one task to
another, based on which the most relevant prior model to be used for training a
new expert, with finetuning or learning-without-forgetting, can be selected. We
evaluate our method on image classification and video prediction problems.
Comment: CVPR 2017 paper
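The gating mechanism the abstract describes, one autoencoder per task, with the test sample routed to the expert whose autoencoder reconstructs it best, can be sketched as below. For a self-contained example the autoencoders here are undercomplete *linear* autoencoders fit in closed form via SVD; that is an assumption for illustration, not the neural autoencoders used in the paper, and all names are hypothetical.

```python
import numpy as np

class LinearAutoencoder:
    # Undercomplete linear autoencoder fit in closed form via SVD (i.e. PCA).
    def __init__(self, n_components):
        self.n_components = n_components

    def fit(self, X):
        # Learn a rank-n_components reconstruction of the task's data.
        self.mean_ = X.mean(axis=0)
        Xc = X - self.mean_
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        self.W_ = Vt[: self.n_components]  # principal directions
        return self

    def reconstruction_error(self, x):
        # Squared error between x and its projection onto the learned subspace.
        xc = x - self.mean_
        recon = (xc @ self.W_.T) @ self.W_
        return float(np.sum((xc - recon) ** 2))

def gate(autoencoders, x):
    # Forward the sample to the expert whose task autoencoder
    # reconstructs it with the lowest error.
    errors = [ae.reconstruction_error(x) for ae in autoencoders]
    return int(np.argmin(errors))
```

Because only the selected expert's network then needs to be loaded, this routing step is what gives the memory efficiency the abstract mentions; the per-task reconstruction errors also give a rough measure of task relatedness.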