Computationally Efficient Convolved Multiple Output Gaussian Processes
Recently there has been an increasing interest in methods that deal with multiple outputs. This has been motivated partly by frameworks like multitask learning, multisensor networks, or structured output data. From a Gaussian processes perspective, the problem reduces to specifying an appropriate covariance function that, whilst being positive semi-definite, captures the dependencies between all the data points and across all the outputs. One approach to account for non-trivial correlations between outputs employs convolution processes. Under a latent function interpretation of the convolution transform we establish dependencies between output variables. The main drawbacks of this approach are the associated computational and storage demands. In this paper we address these issues. We present different sparse approximations for dependent output Gaussian processes constructed through the convolution formalism. We exploit the conditional independencies present naturally in the model. This leads to a form of the covariance similar in spirit to the so-called PITC and FITC approximations for a single output. We show experimental results with synthetic and real data; in particular, we show results in pollution prediction, school exam score prediction, and gene expression data.
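The convolution construction the abstract describes admits a closed form when both the latent process covariance and the smoothing kernels are Gaussian: convolving Gaussians adds their variances, so the cross-covariance between any two outputs is again Gaussian and the joint covariance is positive semi-definite by construction. A minimal numpy sketch of that closed form (all parameter values here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def convolved_cov(x, xp, S_d, S_dp, v_d, v_dp, ell2_u):
    """Closed-form cross-covariance between outputs d and d' when each
    output is a Gaussian-kernel convolution of one shared latent GP whose
    covariance is an RBF with variance (squared lengthscale) ell2_u."""
    v = v_d + v_dp + ell2_u                 # variances add under convolution
    scale = S_d * S_dp * np.sqrt(ell2_u / v)
    return scale * np.exp(-0.5 * (x - xp) ** 2 / v)

# Joint covariance of two outputs observed on a shared 1-D input grid
x = np.linspace(0.0, 5.0, 30)
params = [(1.0, 0.10), (0.7, 0.40)]         # (sensitivity S_d, smoothing var v_d)
ell2_u = 0.25
n, D = len(x), len(params)
K = np.zeros((n * D, n * D))
for d, (Sd, vd) in enumerate(params):
    for dp, (Sdp, vdp) in enumerate(params):
        K[d*n:(d+1)*n, dp*n:(dp+1)*n] = convolved_cov(
            x[:, None], x[None, :], Sd, Sdp, vd, vdp, ell2_u)

# The convolution construction guarantees positive semi-definiteness
assert np.linalg.eigvalsh(K).min() > -1e-9
```

The paper's sparse approximations then replace this dense joint matrix with cheaper structured forms; the sketch only shows the exact covariance being approximated.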
Fast Pixel Space Convolution for CMB Surveys with Asymmetric Beams and Complex Scan Strategies: FEBeCoP
Precise measurement of the angular power spectrum of the Cosmic Microwave
Background (CMB) temperature and polarization anisotropy can tightly constrain
many cosmological models and parameters. However, accurate measurements can
only be realized in practice provided all major systematic effects have been
taken into account. Beam asymmetry, coupled with the scan strategy, is a major
source of systematic error in scanning CMB experiments such as Planck, the
focus of our current interest. We envision Monte Carlo methods to rigorously
study and account for the systematic effect of beams in CMB analysis. Toward
that goal, we have developed a fast pixel space convolution method that can
simulate sky maps observed by a scanning instrument, taking into account real
beam shapes and scan strategy. The essence is to pre-compute the "effective
beams" using a computer code, "Fast Effective Beam Convolution in Pixel space"
(FEBeCoP), that we have developed for the Planck mission. The code computes
effective beams given the focal plane beam characteristics of the Planck
instrument and the full history of actual satellite pointing, and performs very
fast convolution of sky signals using the effective beams. In this paper, we
describe the algorithm and the computational scheme that has been implemented.
We also outline a few applications of the effective beams in the precision
analysis of Planck data, for characterizing the CMB anisotropy and for
detecting and measuring properties of point sources.
Comment: 26 pages, 15 figures. New subsection on beam/PSF statistics, new and better figures, more explicit algebra for polarized beams, added explanatory text at many places following referees' comments. [Accepted for publication in ApJS]
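The core idea in the abstract above is that the expensive part, combining the beam shape with the full pointing history, is done once per pixel to produce an "effective beam", after which convolving any sky signal is a cheap sparse product. A toy 1-D periodic sketch of that two-stage scheme (the beam profile, scan history, and "flip" orientation model are all simplifying assumptions, standing in for real 2-D beam rotations):

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 64
sky = rng.standard_normal(npix)            # toy 1-D periodic sky map

# Asymmetric beam sampled at integer pixel offsets, normalized to unit sum
offsets = np.arange(-3, 4)
beam = np.array([0.02, 0.08, 0.20, 0.35, 0.20, 0.10, 0.05])
beam = beam / beam.sum()

# Scan history: each pixel is hit several times, each hit with its own
# beam orientation (here just "forward" vs "flipped", a stand-in for an
# arbitrary rotation of a 2-D beam along the scan direction).
hits = [rng.integers(0, 2, size=rng.integers(1, 6)) for _ in range(npix)]

# Stage 1 (done once per scan history): pre-compute the effective beam
# per pixel as the hit-weighted average of the oriented beam profiles.
B_eff = np.empty((npix, len(offsets)))
for p, orientations in enumerate(hits):
    oriented = [beam if o == 0 else beam[::-1] for o in orientations]
    B_eff[p] = np.mean(oriented, axis=0)

# Stage 2 (fast, repeatable for many skies): each observed pixel is a
# sparse dot product of its effective beam with neighbouring sky pixels.
observed = np.array([B_eff[p] @ sky[(p + offsets) % npix]
                     for p in range(npix)])

# Sanity check: a normalized effective beam leaves a constant sky unchanged.
flat_obs = np.array([B_eff[p] @ np.ones(npix)[(p + offsets) % npix]
                     for p in range(npix)])
assert np.allclose(flat_obs, 1.0)
```

In the real pipeline the effective beams live on the sphere and must also handle polarization, but the cost structure is the same: one expensive pre-computation, then many fast convolutions.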
Infinite Mixtures of Multivariate Gaussian Processes
This paper presents a new model called infinite mixtures of multivariate
Gaussian processes, which can be used to learn vector-valued functions and
applied to multitask learning. As an extension of the single multivariate
Gaussian process, the mixture model has the advantages of modeling multimodal
data and alleviating the computationally cubic complexity of the multivariate
Gaussian process. A Dirichlet process prior is adopted to allow the (possibly
infinite) number of mixture components to be automatically inferred from
training data, and Markov chain Monte Carlo sampling techniques are used for
parameter and latent variable inference. Preliminary experimental results on
multivariate regression show the feasibility of the proposed model.
Comment: Proceedings of the International Conference on Machine Learning and Cybernetics, 2013, pages 1011-101
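The key property the abstract relies on, a Dirichlet process prior letting the number of mixture components be inferred rather than fixed, is easiest to see through its marginal over partitions, the Chinese restaurant process. A short sketch of that partition prior (in the full model each component would carry its own multivariate GP; the concentration value here is an arbitrary choice for illustration):

```python
import numpy as np

def crp_assignments(n, alpha, rng):
    """Sample component assignments from a Chinese restaurant process,
    the marginal over partitions induced by a Dirichlet process prior."""
    counts = []                      # data points currently in each component
    z = np.empty(n, dtype=int)
    for i in range(n):
        # Join an existing component with probability proportional to its
        # size, or open a new one with probability proportional to alpha.
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(0)         # instantiate a new mixture component
        counts[k] += 1
        z[i] = k
    return z

rng = np.random.default_rng(1)
z = crp_assignments(200, alpha=2.0, rng=rng)
print(z.max() + 1)   # number of components instantiated, inferred not fixed
```

The expected number of components grows roughly like alpha * log(n), which is what lets the model stay parsimonious while never capping the component count in advance.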
Fast Kernel Approximations for Latent Force Models and Convolved Multiple-Output Gaussian Processes
A latent force model is a Gaussian process with a covariance function inspired by a differential operator. Such a covariance function is obtained by performing convolution integrals between the Green's functions associated with the differential operators and the covariance functions associated with the latent functions. In the classical formulation of latent force models, the covariance functions are obtained analytically by solving a double integral, leading to expressions that involve numerical evaluation of different types of error functions. As a consequence, computing the covariance matrix is considerably expensive, because it requires the evaluation of one or more of these error functions. In this paper, we use random Fourier features to approximate the solution of these double integrals, obtaining simpler analytical expressions for such covariance functions. We show experimental results using ordinary differential operators and provide an extension to build general kernel functions for convolved multiple-output Gaussian processes.
Comment: 10 pages, 4 figures, accepted by UAI 201
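The random Fourier feature idea the abstract builds on is Bochner's theorem in Monte Carlo form: a stationary kernel is the expectation of cosine features whose frequencies are drawn from the kernel's spectral density. A minimal sketch for the plain RBF kernel (the latent-force kernels in the paper are more involved, and the lengthscale and feature count here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
ell = 1.0                      # RBF lengthscale
D = 2000                       # number of random features

# Spectral sampling for the RBF kernel: frequencies w ~ N(0, 1/ell^2),
# phases b ~ Uniform(0, 2*pi)
w = rng.normal(0.0, 1.0 / ell, size=D)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(x):
    """Random Fourier feature map: k(x, x') ≈ phi(x) @ phi(x')."""
    return np.sqrt(2.0 / D) * np.cos(w * x + b)

x, xp = 0.3, 1.1
exact = np.exp(-0.5 * (x - xp) ** 2 / ell**2)
approx = phi(x) @ phi(xp)
assert abs(exact - approx) < 0.1   # Monte Carlo error shrinks as 1/sqrt(D)
```

Replacing the kernel's double convolution integral with these finite feature maps is what removes the error-function evaluations from the covariance computation.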
Variational Inference of Joint Models using Multivariate Gaussian Convolution Processes
We present a non-parametric prognostic framework for individualized event
prediction based on joint modeling of both longitudinal and time-to-event data.
Our approach exploits a multivariate Gaussian convolution process (MGCP) to
model the evolution of longitudinal signals and a Cox model to map
time-to-event data with longitudinal data modeled through the MGCP. Taking
advantage of the unique structure imposed by convolved processes, we provide a
variational inference framework to simultaneously estimate parameters in the
joint MGCP-Cox model. This significantly reduces computational complexity and
safeguards against model overfitting. Experiments on synthetic and real-world data show that the proposed framework outperforms state-of-the-art approaches built on two-stage inference and strong parametric assumptions.
Efficient Bayesian Nonparametric Modelling of Structured Point Processes
This paper presents a Bayesian generative model for dependent Cox point
processes, alongside an efficient inference scheme which scales as if the point
processes were modelled independently. We can handle missing data naturally,
infer latent structure, and cope with large numbers of observed processes. A
further novel contribution enables the model to work effectively in higher
dimensional spaces. Using this method, we achieve vastly improved predictive
performance on both 2D and 1D real data, validating our structured approach.
Comment: Presented at UAI 2014. Bibtex: @inproceedings{structcoxpp14_UAI, Author = {Tom Gunter and Chris Lloyd and Michael A. Osborne and Stephen J. Roberts}, Title = {Efficient Bayesian Nonparametric Modelling of Structured Point Processes}, Booktitle = {Uncertainty in Artificial Intelligence (UAI)}, Year = {2014}}
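A Cox point process, the object the abstract above models, is a Poisson process whose intensity is itself random; a common concrete instance is the log-Gaussian Cox process, where the log-intensity is a GP draw. A small generative sketch for a single 1-D process using Lewis-Shedler thinning (the lengthscale and domain are arbitrary choices; the paper's contribution, dependent processes with scalable inference, is not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent log-intensity: one draw from a GP with RBF covariance on a grid
grid = np.linspace(0.0, 1.0, 200)
K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / 0.1**2)
g = rng.multivariate_normal(np.zeros(len(grid)), K + 1e-8 * np.eye(len(grid)))
lam = np.exp(g)                          # random intensity of the Cox process

# Thinning (Lewis-Shedler): simulate a homogeneous Poisson process at the
# maximum rate, then keep each point with probability lam(x) / lam_max.
lam_max = lam.max()
n_prop = rng.poisson(lam_max * 1.0)      # domain [0, 1] has length 1.0
proposals = rng.uniform(0.0, 1.0, size=n_prop)
keep = rng.uniform(0.0, lam_max, size=n_prop) < np.interp(proposals, grid, lam)
events = np.sort(proposals[keep])

assert np.all((events >= 0.0) & (events <= 1.0))
```

The structured model in the paper shares latent structure across many such intensities, which is what lets inference scale as if the processes were independent.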