
    Gaussian Process Structural Equation Models with Latent Variables

    In a variety of disciplines such as social sciences, psychology, medicine and economics, the recorded data are considered to be noisy measurements of latent variables connected by some causal structure. This corresponds to a family of graphical models known as the structural equation model with latent variables. While linear non-Gaussian variants have been well studied, inference in nonparametric structural equation models is still underdeveloped. We introduce a sparse Gaussian process parameterization that defines a non-linear structure connecting latent variables, unlike common formulations of Gaussian process latent variable models. The sparse parameterization is given a full Bayesian treatment without compromising Markov chain Monte Carlo efficiency. We compare the stability of the sampling procedure and the predictive ability of the model against current practice.
    Comment: 12 pages, 6 figures
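    As a rough illustration of the model family (not the paper's implementation; the variable names, factor loadings, and noise scales below are invented for the example), the following Python snippet generates data from one nonlinear structural equation: a child latent variable is a GP-distributed function of its parent, and each latent emits noisy observed indicators.

        import numpy as np

        rng = np.random.default_rng(0)

        def rbf(x, y, lengthscale=1.0, variance=1.0):
            # Squared-exponential kernel between two 1-D arrays.
            return variance * np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / lengthscale ** 2)

        n = 200
        z1 = rng.normal(size=n)                 # exogenous latent variable

        # Draw the structural function f ~ GP(0, k) at the parent's values,
        # making the child latent a nonlinear function of its parent.
        K = rbf(z1, z1) + 1e-6 * np.eye(n)      # jitter for numerical stability
        f = rng.multivariate_normal(np.zeros(n), K)
        z2 = f + 0.1 * rng.normal(size=n)       # structural noise

        # Measurement model: each latent emits two noisy indicators.
        x1 = np.outer(z1, [1.0, 0.8]) + 0.3 * rng.normal(size=(n, 2))
        x2 = np.outer(z2, [1.0, 1.2]) + 0.3 * rng.normal(size=(n, 2))

    The sketch covers only the generative story; the paper's contribution is efficient Bayesian inference for such models via a sparse GP parameterization.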

    Sparse inverse covariance estimation in Gaussian graphical models

    One of the fundamental tasks in science is to find explainable relationships between observed phenomena. Recent work has addressed this problem by attempting to learn the structure of graphical models, especially Gaussian models, through the imposition of sparsity constraints. The graphical lasso is a popular method for learning the structure of a Gaussian model; it uses l1 regularisation to impose sparsity.

    In real-world problems, there may be latent variables that confound the relationships between the observed variables. Ignoring these latents, and imposing sparsity in the space of the visibles alone, may prune important structural relationships. We address this problem by introducing an expectation-maximisation (EM) method for learning a Gaussian model that is sparse in the joint space of visible and latent variables. By extending this to a conditional mixture, we introduce multiple structures and allow side information to predict which structure is most appropriate for each data point. Finally, we handle non-Gaussian data by extending each sparse latent Gaussian to a Gaussian copula. We train these models on a financial data set; we find the structures to be interpretable, and the new models perform better than their existing competitors.

    A potential problem with the mixture model is that it does not require the structure to persist in time, whereas this may be expected in practice. We therefore construct an input-output HMM with sparse Gaussian emissions. The main result, however, is that, provided the side information is rich enough, the temporal component of the model provides little benefit and reduces efficiency considerably.

    The G-Wishart distribution may be used as the basis for a Bayesian approach to learning a sparse Gaussian. However, sampling from this distribution often limits the efficiency of inference in these models. We make a small change to the state-of-the-art block Gibbs sampler to improve its efficiency, and then introduce a Hamiltonian Monte Carlo sampler that is much more efficient than block Gibbs, especially in high dimensions. We use these samplers to compare a Bayesian approach to learning a sparse Gaussian with the (non-Bayesian) graphical lasso, and find that, even when limited to the same time budget, the Bayesian method can perform better.

    In summary, this thesis introduces practically useful advances in structure learning for Gaussian graphical models and their extensions. The contributions include the addition of latent variables, a non-Gaussian extension, (temporal) conditional mixtures, and methods for efficient inference in a Bayesian formulation.
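    For orientation, here is a minimal sketch of the graphical lasso baseline discussed above, using scikit-learn's GraphicalLasso. It illustrates only the l1-penalised precision estimate, not the thesis's latent-variable, copula, or Bayesian extensions; the toy precision matrix and the alpha value are assumptions chosen for the example.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(0)

        # Sparse ground-truth precision matrix with two off-diagonal edges.
        p = 10
        prec = np.eye(p)
        prec[0, 1] = prec[1, 0] = 0.4
        prec[2, 3] = prec[3, 2] = -0.3
        X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=500)

        # The l1 penalty (alpha) drives entries of the estimated precision
        # matrix to exactly zero, i.e. it prunes edges of the graph.
        model = GraphicalLasso(alpha=0.05).fit(X)
        edges = np.abs(model.precision_) > 1e-4
        np.fill_diagonal(edges, False)
        print("recovered edges:", np.argwhere(np.triu(edges)))

    Larger alpha values yield sparser graphs; in practice alpha is typically chosen by cross-validation.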