6 research outputs found

    Structured recognition for generative models with explaining away

    A key goal of unsupervised learning is to go beyond density estimation and sample generation to reveal the structure inherent within observed data. Such structure can be expressed in the pattern of interactions between explanatory latent variables captured through a probabilistic graphical model. Although the learning of structured graphical models has a long history, much recent work in unsupervised modelling has instead emphasised flexible deep-network-based generation, either transforming independent latent generators to model complex data or assuming that distinct observed variables are derived from different latent nodes. Here, we extend amortised variational inference to incorporate structured factors over multiple variables, able to capture the observation-induced posterior dependence between latents that results from “explaining away” and thus allow complex observations to depend on multiple nodes of a structured graph. We show that appropriately parametrised factors can be combined efficiently with variational message passing in rich graphical structures. We instantiate the framework in nonlinear Gaussian Process Factor Analysis (GPFA), evaluating the structured recognition framework using synthetic data from known generative processes. We fit the GPFA model to high-dimensional neural spike data from the hippocampus of freely moving rodents, where the model successfully identifies latent signals that correlate with behavioural covariates.
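The "explaining away" effect that motivates structured recognition factors can be illustrated with a minimal linear-Gaussian example (a hypothetical toy model, not the paper's GPFA setup): two a-priori independent latents acquire a negative posterior correlation once a shared observation is conditioned on, which is exactly the dependence a fully factorised recognition model cannot represent.

```python
import numpy as np

# Toy linear-Gaussian model: y = z1 + z2 + noise.
# The latents are independent under the prior, but observing y
# induces posterior dependence between them ("explaining away").
A = np.array([[1.0, 1.0]])   # observation loads on both latents
prior_cov = np.eye(2)        # independent latents a priori
noise_var = 0.1

# Exact Gaussian posterior precision: prior precision + A^T A / noise_var
post_prec = np.linalg.inv(prior_cov) + A.T @ A / noise_var
post_cov = np.linalg.inv(post_prec)

# Off-diagonal of post_cov is negative: if z1 is inferred to be large,
# z2 is inferred to be small, since either alone can explain y.
print(post_cov)
```

The prior covariance is diagonal, yet the posterior covariance has a substantially negative off-diagonal term; a recognition network restricted to factorised posteriors would miss this structure entirely.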

    Unsupervised representation learning with recognition-parametrised probabilistic models

    We introduce a new approach to probabilistic unsupervised learning based on the recognition-parametrised model (RPM): a normalised semi-parametric hypothesis class for joint distributions over observed and latent variables. Under the key assumption that observations are conditionally independent given latents, the RPM combines parametric prior and observation-conditioned latent distributions with non-parametric observation marginals. This approach leads to a flexible learnt recognition model capturing latent dependence between observations, without the need for an explicit, parametric generative model. The RPM admits exact maximum-likelihood learning for discrete latents, even for powerful neural-network-based recognition. We develop effective approximations applicable in the continuous-latent case. Experiments demonstrate the effectiveness of the RPM on high-dimensional data, learning image classification from weak indirect supervision; direct image-level latent Dirichlet allocation; and recognition-parametrised Gaussian process factor analysis (RP-GPFA) applied to multi-factorial spatiotemporal datasets. The RPM provides a powerful framework to discover meaningful latent structure underlying observational data, a function critical to both animal and artificial intelligence.
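A schematic sketch of the discrete-latent RPM construction described in the abstract, under simplifying assumptions: recognition factors are stood in for by arbitrary normalised positive arrays rather than trained networks, and the factor-specific normalisers are taken as empirical averages over the data, so that the per-datapoint joint over the latent is exactly normalisable.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, J = 8, 3, 2  # data points, discrete latent states, observation modalities

# Stand-ins for recognition outputs f_j(z | x_jn): positive, normalised over z.
# (Hypothetical random values; in the RPM these come from recognition networks.)
f = rng.random((J, K, N))
f /= f.sum(axis=1, keepdims=True)

prior = np.full(K, 1.0 / K)  # parametric prior p_theta(z), uniform here

# Factor normalisers F_j(z) = (1/N) * sum_n f_j(z | x_jn)
F = f.mean(axis=2)

# Per-datapoint joint over z: p_theta(z) * prod_j f_j(z | x_jn) / F_j(z)
joint = prior[:, None] * np.prod(f / F[:, :, None], axis=0)  # shape (K, N)

evidence = joint.sum(axis=0)      # per-datapoint marginal (up to x-marginal terms)
posterior = joint / evidence      # exact discrete posterior p(z | x_n)
log_lik = np.log(evidence).sum()  # tractable log-likelihood contribution
```

Because the latent is discrete, both the posterior and the likelihood term are computed exactly by summation over the K states, with no variational approximation; the continuous-latent case is where the paper's approximations come in.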

    State space methods for phase amplitude coupling analysis

    Phase amplitude coupling (PAC) is thought to play a fundamental role in the dynamic coordination of brain circuits and systems. There are, however, growing concerns that existing methods for PAC analysis are prone to error and misinterpretation. Improper frequency band selection can render true PAC undetectable, while non-linearities or abrupt changes in the signal can produce spurious PAC. Current methods require large amounts of data and lack formal statistical inference tools. We describe here a novel approach for PAC analysis that substantially addresses these problems. We use a state space model to estimate the component oscillations, avoiding problems with frequency band selection, nonlinearities, and sharp signal transitions. We represent cross-frequency coupling in parametric and time-varying forms to further improve statistical efficiency and estimate the posterior distribution of the coupling parameters to derive their credible intervals. We demonstrate the method using simulated data, rat local field potential (LFP) data, and human EEG data.

    Funding: P01GM118269, R01AG056015, R01AG054081, R21DA048323 (NIH HHS). Published version.
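The parametric coupling representation mentioned in the abstract can be sketched on simulated data (a simplified illustration, not the paper's state-space estimator): given a slow oscillation's phase and a fast oscillation's amplitude envelope, assumed here to be already extracted, the modulation depth and preferred phase are recovered by regressing the envelope on the cosine and sine of the slow phase.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.001)          # 10 s at 1 kHz
phase = 2 * np.pi * 6.0 * t          # phase of a 6 Hz slow oscillation

# Simulate a fast-oscillation amplitude envelope modulated by the slow phase:
# amp(t) = 1 + k * cos(phase - phi), plus observation noise.
k_true, phi_true = 0.4, np.pi / 4
amp = 1.0 + k_true * np.cos(phase - phi_true) + 0.05 * rng.standard_normal(t.size)

# Parametric PAC: least-squares fit of amp on [1, cos(phase), sin(phase)].
X = np.column_stack([np.ones_like(t), np.cos(phase), np.sin(phase)])
beta, *_ = np.linalg.lstsq(X, amp, rcond=None)

k_hat = np.hypot(beta[1], beta[2])    # estimated modulation depth
phi_hat = np.arctan2(beta[2], beta[1])  # estimated preferred coupling phase
```

Since cos(phase - phi) = cos(phi)cos(phase) + sin(phi)sin(phase), the two regression weights jointly encode the coupling strength and phase; the paper additionally estimates the oscillatory components themselves with a state space model and places posteriors (with credible intervals) on these coupling parameters.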