Gaussian processes for machine learning
Final Degree Projects in Mathematics, Faculty of Mathematics, Universitat de Barcelona, Year: 2017, Advisor: Jordi Vitrià i Marca. I would like to thank my advisor, Jordi Vitrià Marca, for so much encouragement, support and feedback. Jordi was patient while I spent the first few months chasing half-baked ideas, and then gently suggested a series of notions that worked. It was wonderful working with someone who is always willing to help and whose dedication to science is inspiring.
On the Geometry of Message Passing Algorithms for Gaussian Reciprocal Processes
Reciprocal processes are acausal generalizations of Markov processes
introduced by Bernstein in 1932. In the literature, a significant amount of
attention has been focused on developing dynamical models for reciprocal
processes. Recently, probabilistic graphical models for reciprocal processes
have been provided. This opens the way to the application of efficient
inference algorithms in the machine learning literature to solve the smoothing
problem for reciprocal processes. Such algorithms are known to converge if the
underlying graph is a tree. This is not the case for a reciprocal process,
whose associated graphical model is a single-loop network. The contribution of
this paper is twofold. First, we introduce belief propagation for Gaussian
reciprocal processes. Second, we establish a link between convergence analysis
of belief propagation for Gaussian reciprocal processes and stability theory
for differentially positive systems.

Comment: 15 pages; typos corrected. This paper introduces belief propagation
for Gaussian reciprocal processes and extends the convergence analysis in
arXiv:1603.04419 to the Gaussian case.
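The single-loop situation the abstract describes can be illustrated with a minimal sketch of loopy Gaussian belief propagation in information form (this is not the paper's algorithm, and the precision matrix and potential below are invented for illustration). On a sufficiently well-conditioned model, such as this diagonally dominant 4-node cycle, the BP means converge to the exact means J⁻¹h, even though loopy BP variances are generally not exact:

```python
import numpy as np

def loopy_gaussian_bp(J, h, iters=200):
    """Synchronous Gaussian belief propagation in information form.

    J: precision matrix (its nonzeros define the graph), h: potential vector.
    Returns the approximate marginal means.
    """
    n = len(h)
    nbrs = [[j for j in range(n) if j != i and J[i, j] != 0] for i in range(n)]
    Jm = {(i, j): 0.0 for i in range(n) for j in nbrs[i]}  # precision messages
    hm = {(i, j): 0.0 for i in range(n) for j in nbrs[i]}  # potential messages
    for _ in range(iters):
        newJ, newh = {}, {}
        for i in range(n):
            for j in nbrs[i]:
                # Combine node i's local term with all incoming messages except j's.
                Jhat = J[i, i] + sum(Jm[(k, i)] for k in nbrs[i] if k != j)
                hhat = h[i] + sum(hm[(k, i)] for k in nbrs[i] if k != j)
                newJ[(i, j)] = -J[j, i] * J[i, j] / Jhat
                newh[(i, j)] = -J[j, i] * hhat / Jhat
        Jm, hm = newJ, newh
    # Beliefs: local term plus all incoming messages.
    return np.array([
        (h[i] + sum(hm[(k, i)] for k in nbrs[i])) /
        (J[i, i] + sum(Jm[(k, i)] for k in nbrs[i]))
        for i in range(n)
    ])

# Single-loop example: 4-node cycle with a diagonally dominant precision matrix.
J = np.array([[2.0, -0.5, 0.0, -0.5],
              [-0.5, 2.0, -0.5, 0.0],
              [0.0, -0.5, 2.0, -0.5],
              [-0.5, 0.0, -0.5, 2.0]])
h = np.array([1.0, 0.0, -1.0, 0.5])
bp_means = loopy_gaussian_bp(J, h)
exact = np.linalg.solve(J, h)
print(np.allclose(bp_means, exact, atol=1e-8))  # True: means agree on this loop
```

The interest of the convergence analysis is precisely that such agreement is not guaranteed on loopy graphs in general; diagonal dominance is one simple sufficient condition under which the iteration contracts.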
Digital communication receivers using Gaussian processes for machine learning
We propose Gaussian processes (GPs) as a novel nonlinear receiver for digital communication systems. The GP framework can be used to solve both classification (GPC) and regression (GPR) problems. The minimum mean squared error (MMSE) solution is the expectation of the transmitted symbol given the information at the receiver, which is a nonlinear function of the received symbols for discrete inputs. GPR can be presented as a nonlinear MMSE estimator and is thus capable of achieving optimal performance from the MMSE viewpoint. Also, the design of digital communication receivers can be viewed as a detection problem, for which GPC is especially suited, as it assigns posterior probabilities to each transmitted symbol. We explore the suitability of GPs as nonlinear digital communication receivers. GPs are Bayesian machine learning tools that formulate a likelihood function for their hyperparameters, which can then be set optimally. GPs outperform state-of-the-art nonlinear machine learning approaches that prespecify their hyperparameters or rely on cross-validation. We illustrate the advantages of GPs as digital communication receivers for linear and nonlinear channel models with short training sequences, and compare them to state-of-the-art nonlinear machine learning tools, such as support vector machines.
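The GPR-as-nonlinear-MMSE idea can be sketched in a few lines. The nonlinear channel, noise level, kernel lengthscale, and training size below are all invented for illustration, not taken from the paper: BPSK symbols are distorted by a saturating channel, a GP regression posterior mean is learned from a short training sequence as the symbol estimate, and a hard decision is taken with the sign:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel between two 1-D sample vectors.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def channel(s, noise=0.2):
    # Hypothetical nonlinear channel: saturation plus additive Gaussian noise.
    return np.tanh(1.5 * s) + noise * rng.standard_normal(s.shape)

# Short training sequence of BPSK symbols, as in the abstract's setting.
s_train = rng.choice([-1.0, 1.0], size=30)
y_train = channel(s_train)

# GP regression posterior mean = nonlinear MMSE-style estimate of the symbol.
K = rbf(y_train, y_train) + 0.2 ** 2 * np.eye(len(y_train))
alpha = np.linalg.solve(K, s_train)

s_test = rng.choice([-1.0, 1.0], size=500)
y_test = channel(s_test)
s_hat = rbf(y_test, y_train) @ alpha   # posterior mean at received samples
decisions = np.sign(s_hat)             # hard BPSK decision
ber = np.mean(decisions != s_test)
print(f"bit error rate: {ber:.3f}")
```

A GPC receiver would differ in returning a posterior probability per symbol rather than a point estimate, which is what makes it natural for the detection view of the problem.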
Scalable Hierarchical Gaussian Process Models for Regression and Pattern Classification
Gaussian processes, which are distributions over functions, are powerful nonparametric tools for the two major machine learning tasks: regression and classification. Both tasks are concerned with learning input-output mappings from example input-output pairs. In Gaussian process (GP) regression and classification, such mappings are modeled by Gaussian processes. In GP regression, the likelihood is Gaussian for continuous outputs, and hence closed-form solutions for prediction and model selection can be obtained. In GP classification, the likelihood is non-Gaussian for discrete/categorical outputs, hence closed-form solutions are not available and one must resort to approximate inference methods.
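The closed-form solution mentioned for GP regression is the standard Gaussian posterior, sketched below with an RBF kernel on a toy 1-D dataset (the kernel, its hyperparameters, and the data are chosen here only for illustration):

```python
import numpy as np

def gp_posterior(X, y, Xs, ls=1.0, sigma_f=1.0, sigma_n=0.1):
    """Closed-form GP regression posterior with a Gaussian likelihood.

    mean = K_s K_y^{-1} y,  cov = K_ss - K_s K_y^{-1} K_s^T,
    where K_y = K + sigma_n^2 I.
    """
    def k(a, b):
        return sigma_f ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    Ky = k(X, X) + sigma_n ** 2 * np.eye(len(X))   # noisy-train covariance
    Ks = k(Xs, X)                                  # test-train covariance
    mean = Ks @ np.linalg.solve(Ky, y)
    cov = k(Xs, Xs) - Ks @ np.linalg.solve(Ky, Ks.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# Toy data: 8 noiseless observations of sin on [0, 2*pi].
X = np.linspace(0, 2 * np.pi, 8)
y = np.sin(X)
Xs = np.linspace(0, 2 * np.pi, 50)
mean, std = gp_posterior(X, y, Xs)
print(np.max(np.abs(mean - np.sin(Xs))))  # worst-case posterior-mean error
```

For GP classification no such closed form exists, because the non-Gaussian likelihood (e.g. a probit or logistic link) breaks the conjugacy that the two solve calls above rely on; approximations such as the Laplace approximation or expectation propagation are used instead.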