    Probabilistic Modelling of Uncertainty with Bayesian Nonparametric Machine Learning

    This thesis addresses the use of probabilistic predictive modelling and machine learning for quantifying uncertainties. Predictive modelling makes inferences about a process from observations obtained through computational modelling, simulation, or experimentation. This is often achieved using statistical machine learning models, which predict the outcome as a function of variable predictors given process observations. Towards this end, Bayesian nonparametric regression is used: a highly flexible, probabilistic type of statistical model that provides a natural framework in which uncertainties can be included. The contributions of this thesis are threefold. Firstly, a novel approach to quantifying parametric uncertainty in the Gaussian process latent variable model is presented, which is shown to improve predictive performance compared with the commonly used variational expectation-maximisation approach. Secondly, an emulator based on manifold learning (local tangent space alignment) is developed for problems whose outputs lie on a high-dimensional manifold. Using this, a framework is proposed for solving the forward problem of uncertainty quantification and is applied to two fluid dynamics simulations. Finally, an enriched clustering model for generalised mixtures of Gaussian process experts is presented, which improves clustering, scaling with the number of covariates, and prediction compared with the alternative model. This is then applied to a study of Alzheimer's disease, with the aim of improving prediction of disease progression.
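
    As background, a minimal sketch of the basic building block behind such models may help: exact Gaussian process regression with a predictive mean and variance. This is a generic illustration, not the thesis's GP-LVM or its parametric-uncertainty scheme; the kernel choice, hyperparameters, and function names (rbf_kernel, gp_predict) are our own.

```python
# Minimal, illustrative sketch of exact GP regression:
# predictive mean and variance under an RBF kernel.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(a, b) = s^2 exp(-||a-b||^2 / (2 l^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, X_star, noise=1e-2):
    """Exact GP posterior mean and pointwise variance at test inputs X_star."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # alpha = K^{-1} y
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf_kernel(X_star, X_star)) - (v * v).sum(axis=0)
    return mean, var

X = np.linspace(0, 1, 20)[:, None]
y = np.sin(4 * X[:, 0]) + 0.1 * np.random.randn(20)
mu, var = gp_predict(X, y, np.linspace(0, 1, 50)[:, None])  # mean +/- sqrt(var)
```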

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. Particular emphasis is placed on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and on their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks can perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated across a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone texts or as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
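
    To make the tensor train idea concrete, the sketch below implements a greedy TT-SVD in the spirit of the standard algorithm: sweep truncated SVDs over successive unfoldings of a dense tensor, carrying the remainder forward. The fixed max_rank truncation and the naming are our simplifications.

```python
# Sketch of TT-SVD: factor a dense tensor into a tensor train
# by sweeping rank-truncated SVDs over reshaped unfoldings.
import numpy as np

def tt_svd(T, max_rank):
    """Greedy TT decomposition: returns cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims, cores, r_prev = T.shape, [], 1
    C = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(S))                      # rank truncation
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = S[:r, None] * Vt[:r]                       # carry remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

T = np.random.rand(4, 5, 6, 7)
cores = tt_svd(T, max_rank=3)
print([g.shape for g in cores])  # [(1, 4, 3), (3, 5, 3), (3, 6, 3), (3, 7, 1)]
```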

    Clustering based on Mixtures of Sparse Gaussian Processes

    Creating low-dimensional representations of a high-dimensional data set is an important component of many machine learning applications, and how to cluster data using their low-dimensional embedded space remains a challenging problem. In this article, we propose a joint formulation for both clustering and dimensionality reduction. When a probabilistic model is desired, one possible solution is to use mixture models in which both the cluster indicator and the low-dimensional space are learned. Our algorithm is based on a mixture of sparse Gaussian processes, called Sparse Gaussian Process Mixture Clustering (SGP-MIC). The main advantages of our approach over existing methods are that its probabilistic nature offers more flexibility than deterministic alternatives, that non-linear generalisations of the model are straightforward to construct, and that the sparse model together with an efficient variational EM approximation speeds up the algorithm.
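
    As a rough illustration of the soft-clustering mechanics behind such models, the sketch below shows the E-step of a generic mixture model with simple Gaussian component likelihoods. SGP-MIC would replace these with sparse-GP likelihoods inside its variational EM loop, so this is a deliberate simplification of ours, not the paper's algorithm.

```python
# Generic mixture-model E-step: soft responsibilities r[n, k].
# Gaussian component likelihoods stand in for sparse-GP ones here.
import numpy as np
from scipy.stats import norm

def e_step(y, weights, means, stds):
    """Responsibilities r[n, k] = p(cluster k | y_n)."""
    log_p = np.log(weights) + norm.logpdf(y[:, None], means, stds)
    log_p -= log_p.max(axis=1, keepdims=True)   # numerical stability
    r = np.exp(log_p)
    return r / r.sum(axis=1, keepdims=True)

y = np.concatenate([np.random.randn(50), 5 + np.random.randn(50)])
r = e_step(y, np.array([0.5, 0.5]), np.array([0.0, 5.0]), np.array([1.0, 1.0]))
labels = r.argmax(axis=1)   # hard assignments, if needed
```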

    A survey on Bayesian nonparametric learning

    Bayesian (machine) learning has long played a significant role in machine learning due to its particular ability to embrace uncertainty, encode prior knowledge, and endow interpretability. On the back of Bayesian learning's great success, Bayesian nonparametric learning (BNL) has emerged as a force for further advances in this field due to its greater modelling flexibility and representation power. Instead of playing with the fixed-dimensional probability distributions of Bayesian learning, BNL creates a new "game" with infinite-dimensional stochastic processes. BNL has long been recognised as a research subject in statistics, and, to date, several state-of-the-art pilot studies have demonstrated that BNL has a great deal of potential to solve real-world machine-learning tasks. However, despite these promising results, BNL has not created a huge wave in the machine-learning community. Esotericism may account for this: the books and surveys on BNL written by statisticians are complicated and filled with theory and proofs, each certainly meaningful, but liable to scare away new researchers, especially those with computer science backgrounds. Hence, the aim of this article is to provide a plain-spoken yet comprehensive theoretical survey of BNL in terms that researchers in the machine-learning community can understand. It is hoped this survey will serve as a starting point for understanding and exploiting the benefits of BNL in our current scholarly endeavours. To achieve this goal, we have collated the extant studies in this field and aligned them with the steps of a standard BNL procedure: from selecting the appropriate stochastic processes, through their manipulation, to executing the model inference algorithms. At each step, past efforts are thoroughly summarised and discussed. In addition, we review the common methods for implementing BNL in various machine-learning tasks, along with its diverse real-world applications, as examples to motivate future studies.
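
    A concrete entry point to BNL's "infinite-dimensional" objects is the stick-breaking construction of the Dirichlet process. The sketch below draws a truncated sample from DP(alpha, H); truncating at a finite level K is a practical approximation, and the function names are ours.

```python
# Truncated stick-breaking construction of a Dirichlet process DP(alpha, H):
# G = sum_k w_k * delta(theta_k), with w_k = beta_k * prod_{j<k} (1 - beta_j).
import numpy as np

def stick_breaking(alpha, K, base_sampler, rng=None):
    """Draw weights and atoms of a K-truncated DP sample."""
    if rng is None:
        rng = np.random.default_rng()
    betas = rng.beta(1.0, alpha, size=K)                 # stick proportions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining                          # broken-off pieces
    atoms = base_sampler(K, rng)                         # theta_k ~ H
    return weights, atoms

w, atoms = stick_breaking(alpha=2.0, K=100,
                          base_sampler=lambda K, rng: rng.normal(size=K))
print(w.sum())   # close to 1 for large K
```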

    Copula models in machine learning

    The introduction of copulas, which allow separating the dependence structure of a multivariate distribution from its marginal behaviour, was a major advance in dependence modelling. Copulas brought new theoretical insights to the concept of dependence and enabled the construction of a variety of new multivariate distributions. Despite their popularity in statistics and financial modelling, copulas remained largely unknown in the machine learning community until recently. This thesis investigates the use of copula models, in particular Gaussian copulas, for solving various machine learning problems, and makes contributions in the domains of dependence detection between datasets, compression based on side information, and variable selection. Our first contribution is the introduction of a copula mixture model to perform dependency-seeking clustering for co-occurring samples from different data sources. The model takes advantage of the great flexibility offered by the copula framework to extend mixtures of Canonical Correlation Analyzers to multivariate data with arbitrary continuous marginal densities. We formulate our model as a non-parametric Bayesian mixture and provide an efficient Markov chain Monte Carlo inference algorithm for it. Experiments on real and synthetic data demonstrate that the increased flexibility of the copula mixture significantly improves the quality of the clustering and the interpretability of the results. The second contribution is a reformulation of the information bottleneck (IB) problem in terms of a copula, using the equivalence between mutual information and negative copula entropy. Focusing on the Gaussian copula, we extend the analytical IB solution available for the multivariate Gaussian case to meta-Gaussian distributions, which retain a Gaussian dependence structure but allow arbitrary marginal densities. The resulting approach extends the range of applicability of IB to non-Gaussian continuous data and is less sensitive to outliers than the original IB formulation. Our third and final contribution is the development of a novel sparse compression technique based on the IB principle, which takes side information into account. We achieve this by introducing a sparse variant of IB that compresses the data by preserving the information in only a few selected input dimensions. By assuming a Gaussian copula we can capture arbitrary non-Gaussian marginals, continuous or discrete. We use our model to select a subset of biomarkers relevant to the evolution of malignant melanoma and show that our sparse selection provides reliable predictors.
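
    For readers new to copulas, the sketch below shows the core trick with a Gaussian copula: correlate variables in a latent Gaussian space, map them to uniforms with the normal CDF, and push them through arbitrary inverse marginal CDFs. The particular marginals (exponential and Student-t) are our illustrative choices, not the thesis's models.

```python
# Sampling from a Gaussian copula: Gaussian dependence, arbitrary marginals.
import numpy as np
from scipy import stats

def gaussian_copula_sample(corr, marginals, n, rng=None):
    """Draw n samples with Gaussian dependence (corr) and the given marginals."""
    if rng is None:
        rng = np.random.default_rng()
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, len(corr))) @ L.T    # correlated Gaussians
    u = stats.norm.cdf(z)                            # uniform marginals
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
x = gaussian_copula_sample(corr, [stats.expon(), stats.t(df=3)], n=1000)
```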

    Covariate dependent random measures with applications in biostatistics

    In Bayesian nonparametrics, the specification of stochastic processes that are suitable for practical purposes and whose realisations are discrete probability measures plays a crucial role. Recently, real-world applications have motivated the extension of these stochastic processes to incorporate covariate information in the realisations, with the aim of constructing infinite mixture models having weights and/or component-specific parameters that depend on covariates. This work presents four modelling strategies motivated by practical problems involving stochastic processes over covariate dependent random measures. After presenting the main concepts in Bayesian nonparametrics and reviewing the relevant literature, we develop two Bayesian models which are extensions of augmented response mixture models. In particular, we construct a semi-parametric non-linear regression model for zero-inflated discrete distributions and propose techniques to perform variable selection in cluster-specific regression models. The third contribution presents a generalisation of the Dirichlet process for random probability measures to include covariate information via Beta regression. Properties of this new stochastic process are discussed, and two illustrations are presented for dealing with spatially correlated observations and grouped longitudinal data. The last part of the thesis proposes a modelling strategy for time-evolving correlated binary vectors which relies on latent variables. The distribution of these latent variables is assumed to be a convolution of Gaussian kernels with covariate dependent random probability measures. These four modelling strategies are motivated by datasets from medical studies involving lower urinary tract symptoms and acute lymphoblastic leukaemia, as well as from publicly available data about primary school evaluations in London.
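
    To illustrate how covariate information can enter the weights of an infinite mixture, the sketch below makes the stick proportions functions of a covariate through a logistic link on a linear predictor. This is a generic simplification of ours for illustration, not the thesis's Beta-regression construction.

```python
# Covariate-dependent stick-breaking: mixture weights w_k(x) vary with x
# because each stick proportion v_k(x) is a function of the covariate.
import numpy as np

def dependent_weights(x, coefs, intercepts):
    """Weights w_k(x) = v_k(x) * prod_{j<k} (1 - v_j(x)), rows indexed by x."""
    v = 1.0 / (1.0 + np.exp(-(np.outer(x, coefs) + intercepts)))  # (n, K)
    ones = np.ones((len(x), 1))
    remaining = np.hstack([ones, np.cumprod(1.0 - v[:, :-1], axis=1)])
    # rows sum to 1 - prod_k (1 - v_k(x)); the deficit shrinks as K grows
    return v * remaining

x = np.linspace(-2, 2, 5)
w = dependent_weights(x, coefs=np.array([1.0, -0.5, 0.2]),
                      intercepts=np.array([0.0, 0.5, -0.5]))
```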