Nonparametric Bayesian Mixed-effect Model: a Sparse Gaussian Process Approach
Multi-task learning models using Gaussian processes (GP) have been developed
and successfully applied in a variety of domains. The main difficulty with this
approach is the computational cost of inference using the union of examples
from all tasks. Therefore, sparse solutions that avoid using the entire dataset
directly, and instead use a set of informative "representatives", are desirable.
The paper investigates this problem for the grouped mixed-effect GP model where
each individual response is given by a fixed effect, taken from one of a set of
unknown groups, plus a random individual effect function that captures
variations among individuals. Such models have been widely used in previous
work but no sparse solutions have been developed. The paper presents the first
sparse solution for such problems, showing how the sparse approximation can be
obtained by maximizing a variational lower bound on the marginal likelihood,
generalizing ideas from single-task Gaussian processes to handle the
mixed-effect model as well as grouping. Experiments on artificial and real
data validate the approach, showing that it recovers the performance of
inference with the full sample, outperforms baseline methods, and outperforms
state-of-the-art sparse solutions for other multi-task GP formulations.
Comment: Preliminary version appeared in ECML201
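As a concrete reference point, here is a minimal sketch of the single-task collapsed variational bound that the paper generalizes (a Titsias-style bound over inducing inputs). The RBF kernel, the function names, and the parameters `Z` and `noise_var` are illustrative assumptions, not the paper's grouped mixed-effect formulation; maximizing this bound over `Z` and the kernel hyperparameters yields the sparse approximation.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_elbo(X, y, Z, noise_var=0.1, lengthscale=1.0, variance=1.0):
    """Collapsed variational lower bound on the GP marginal likelihood
    for inducing inputs Z (single-task sketch, not the paper's model)."""
    n, m = X.shape[0], Z.shape[0]
    Kmm = rbf(Z, Z, lengthscale, variance) + 1e-6 * np.eye(m)  # jitter
    Kmn = rbf(Z, X, lengthscale, variance)
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, Kmn)            # A = L^{-1} Kmn
    Qnn = A.T @ A                          # Nystrom approximation of Knn
    cov = Qnn + noise_var * np.eye(n)      # log N(y | 0, Qnn + sigma^2 I)
    Lc = np.linalg.cholesky(cov)
    alpha = np.linalg.solve(Lc, y)
    log_marg = (-0.5 * alpha @ alpha
                - np.log(np.diag(Lc)).sum()
                - 0.5 * n * np.log(2 * np.pi))
    # Trace term penalizes variance the inducing points fail to explain;
    # tr(Knn) = n * variance for an RBF kernel.
    trace_term = 0.5 * (n * variance - np.trace(Qnn)) / noise_var
    return log_marg - trace_term
```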
Minimizing Negative Transfer of Knowledge in Multivariate Gaussian Processes: A Scalable and Regularized Approach
Recently there has been an increasing interest in the multivariate Gaussian
process (MGP), which extends the Gaussian process (GP) to deal with multiple
outputs. One approach to construct the MGP and account for non-trivial
commonalities amongst outputs employs a convolution process (CP). The CP is
based on the idea of sharing latent functions across several convolutions.
Despite the elegance of the CP construction, it poses challenges that have yet
to be tackled. First, even with a moderate number of outputs, model building is
computationally prohibitive due to the sharp increase in computational demands
and in the number of parameters to be estimated. Second, negative transfer of
knowledge may occur when some outputs do not share commonalities. In this
paper we address these issues. We propose a regularized pairwise modeling
approach for the MGP established using CP. The key feature of our approach is
to distribute the estimation of the full multivariate model into a group of
bivariate GPs that are built individually. Interestingly, pairwise modeling
turns out to possess unique characteristics that allow us to tackle the
challenge of negative transfer by penalizing the latent function that
facilitates information sharing in each bivariate model. Predictions are then
made through combining predictions from the bivariate models within a Bayesian
framework. The proposed method has excellent scalability when the number of
outputs is large and minimizes the negative transfer of knowledge between
uncorrelated outputs. Statistical guarantees for the proposed method are
established, and its advantageous features are demonstrated through numerical
studies.
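To illustrate the final combination step, here is a minimal sketch assuming a product-of-Gaussians (precision-weighted) fusion of the bivariate submodels' predictive distributions. This is one simple instance of "combining predictions within a Bayesian framework"; the paper's actual combination rule may differ, and all names here are illustrative.

```python
import numpy as np

def combine_pairwise_predictions(means, variances):
    """Fuse per-submodel Gaussian predictions for one target output.

    means, variances: arrays of shape (num_submodels, num_test_points),
    one row per bivariate GP involving the target output. Fuses them by
    precision weighting (product of Gaussians), an assumed simplification.
    """
    precisions = 1.0 / variances
    fused_var = 1.0 / precisions.sum(axis=0)            # combined precision
    fused_mean = fused_var * (precisions * means).sum(axis=0)
    return fused_mean, fused_var
```

Because each bivariate model is estimated independently, the expensive full MGP fit is replaced by many small fits that can run in parallel, which is what gives the approach its scalability in the number of outputs.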
Weakly-supervised Multi-output Regression via Correlated Gaussian Processes
Multi-output regression seeks to infer multiple latent functions using data
from multiple groups/sources while accounting for potential between-group
similarities. In this paper, we consider multi-output regression under a
weakly-supervised setting where a subset of data points from multiple groups
are unlabeled. We use dependent Gaussian processes for multiple outputs
constructed by convolutions with shared latent processes. We introduce
hyperpriors for the multinomial probabilities of the unobserved labels and
optimize the hyperparameters, which we show improves estimation. We derive two
variational bounds: (i) a modified variational bound for fast and stable
convergence in model inference, and (ii) a scalable variational bound that is
amenable to stochastic optimization. Experiments on synthetic and real-world
data show that the proposed model outperforms state-of-the-art models, with
more accurate estimation of multiple latent functions and unobserved labels.
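For intuition about how unobserved labels can be handled, the following sketch shows one responsibility-style update: posterior group-membership probabilities for unlabeled points under a multinomial prior (the quantity the paper places a hyperprior on). It is an assumed E-step-like simplification, not the paper's variational bounds, and all names are hypothetical.

```python
import numpy as np

def label_responsibilities(log_liks, log_prior):
    """Posterior group-membership probabilities for unlabeled points.

    log_liks: (num_points, num_groups) predictive log-likelihood of each
    point under each group's latent function; log_prior: (num_groups,)
    log multinomial prior over group membership.
    """
    logits = log_liks + log_prior[None, :]
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)
```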
Computationally Efficient Convolved Multiple Output Gaussian Processes
Recently there has been an increasing interest in methods that deal with multiple outputs. This has been motivated partly by frameworks like multitask learning, multisensor networks, or structured output data. From a Gaussian process perspective, the problem reduces to specifying an appropriate covariance function that, whilst being positive semi-definite, captures the dependencies between all the data points and across all the outputs. One approach to account for non-trivial correlations between outputs employs convolution processes. Under a latent function interpretation of the convolution transform, we establish dependencies between output variables. The main drawbacks of this approach are the associated computational and storage demands. In this paper we address these issues. We present different sparse approximations for dependent output Gaussian processes constructed through the convolution formalism, exploiting the conditional independencies present naturally in the model. This leads to a form of the covariance similar in spirit to the so-called PITC and FITC approximations for a single output. We show experimental results with synthetic and real data; in particular, we present results on pollution prediction, school exam score prediction, and gene expression data.
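To make the convolution construction concrete, here is a sketch of the closed-form cross-covariance between two outputs obtained by convolving a shared RBF latent process with Gaussian smoothing kernels in one input dimension. This is standard convolution-process algebra rather than the paper's code, and the parameter names are illustrative.

```python
import numpy as np

def cp_cross_cov(x, xp, S, sigma2, ell_u2=1.0, v_u=1.0):
    """Cross-covariance k_{dd'}(x, x') for outputs d, d' built by convolving
    one shared latent GP with Gaussian smoothing kernels (1-D sketch).

    x, xp: 1-D arrays of input locations for the two outputs.
    S: (2,) smoothing-kernel amplitudes; sigma2: (2,) smoothing-kernel
    variances; ell_u2, v_u: latent RBF lengthscale^2 and variance.
    Convolving Gaussians sums their variances, giving a closed form:
    k_{dd'} ~ N(x - x'; 0, sigma2_d + sigma2_d' + ell_u2).
    """
    total = sigma2[0] + sigma2[1] + ell_u2
    diff2 = (x[:, None] - xp[None, :]) ** 2
    norm = v_u * np.sqrt(2 * np.pi * ell_u2) / np.sqrt(2 * np.pi * total)
    return S[0] * S[1] * norm * np.exp(-0.5 * diff2 / total)
```

The sparse (FITC/PITC-style) approximations discussed in the paper then replace the full covariance built from blocks like this with a cheaper structure conditioned on the shared latent function.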