
    Predictive automatic relevance determination by expectation propagation


    Projection predictive model selection for Gaussian processes

    We propose a new method for simplifying Gaussian process (GP) models by projecting the information contained in the full encompassing model and selecting a reduced number of variables based on their predictive relevance. Our results on synthetic and real-world datasets show that the proposed method improves the assessment of variable relevance compared with automatic relevance determination (ARD) via the length-scale parameters. We expect the method to be useful for improving the explainability of the models, reducing future measurement costs, and reducing the computation time for making new predictions.
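    For orientation, the sketch below illustrates only the ARD baseline the abstract compares against (not the authors' projection predictive method): fit a GP with one length-scale per input and rank variables by inverse length-scale. It assumes scikit-learn and a toy data set; names such as `relevance` are illustrative.

```python
# Minimal sketch of ARD-style variable relevance via GP length-scales
# (the baseline discussed above), using scikit-learn. The projection
# predictive selection method itself is not implemented here.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
# Only the first two inputs matter in this toy data set.
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)

# Anisotropic RBF kernel: one length-scale per input dimension (ARD).
kernel = RBF(length_scale=np.ones(d)) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# A short optimised length-scale means the function varies quickly in
# that input, which ARD reads as high relevance.
length_scales = gp.kernel_.k1.length_scale
relevance = 1.0 / length_scales
print("ARD relevance ranking (most to least relevant):",
      np.argsort(relevance)[::-1])
```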

    Bayesian classification of fMRI activation related to natural audiovisual stimuli using sparsity-promoting Laplace priors

    Bayesian linear binary classification models with sparsity-promoting Laplace priors were applied to discriminate fMRI patterns related to natural auditory and audiovisual speech and music stimuli. The region of interest comprised the auditory cortex and some surrounding regions related to auditory processing. Truly sparse posterior mean solutions for the classifier weights were obtained by implementing an automatic relevance determination method using expectation propagation (ARDEP). In ARDEP, the Laplace prior was decomposed into a Gaussian scale mixture, and the scales were optimised by maximising their marginal posterior density. ARDEP was also compared with two other methods, which integrated approximately over the original Laplace prior: LAEP likewise approximated the posterior by expectation propagation, whereas MCMC used a Markov chain Monte Carlo simulation method implemented by Gibbs sampling. The resulting brain maps were consistent with previous studies using simpler stimuli and suggested that the proposed model can also reveal additional information about activation patterns related to natural audiovisual stimuli. The predictive performance of the model was significantly above chance level for all approximate inference methods. Despite intensive pruning of features, ARDEP was able to describe all of the most discriminative brain regions obtained by LAEP and MCMC. However, ARDEP lost the more specific shape of the regions by representing them as one or more smaller spots, also removing some relevant features.
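    The ARDEP construction rests on writing the Laplace prior as a Gaussian scale mixture. The sketch below only checks that identity numerically with NumPy and SciPy; it is not the ARDEP, LAEP, or MCMC implementation, and the sample size and grid are arbitrary choices.

```python
# Sketch verifying the Gaussian scale mixture form of the Laplace prior
# used above: w | v ~ N(0, v) with v ~ Exponential(rate), where
# rate = 1 / (2 * b**2) gives w ~ Laplace(0, b).
import numpy as np
from scipy import stats

b = 1.0                      # Laplace scale parameter
rate = 1.0 / (2.0 * b**2)    # exponential rate on the variance scales
rng = np.random.default_rng(1)

v = rng.exponential(scale=1.0 / rate, size=1_000_000)  # mixing variances
w = rng.normal(loc=0.0, scale=np.sqrt(v))              # conditional Gaussians

# Compare the empirical CDF of the mixture to the exact Laplace(0, b) CDF.
grid = np.linspace(-5, 5, 11)
empirical = np.array([(w <= g).mean() for g in grid])
exact = stats.laplace.cdf(grid, scale=b)
print("max CDF discrepancy:", np.max(np.abs(empirical - exact)))  # ~0
```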

    Approximate Inference for Nonstationary Heteroscedastic Gaussian process Regression

    This paper presents a novel approach for approximate integration over the uncertainty of noise and signal variances in Gaussian process (GP) regression. Our efficient and straightforward approach can also be applied to integration over input-dependent noise variance (heteroscedasticity) and input-dependent signal variance (nonstationarity) by setting independent GP priors for the noise and signal variances. We use expectation propagation (EP) for inference and compare the results to Markov chain Monte Carlo in two simulated data sets and three empirical examples. The results show that EP produces comparable results with less computational burden.
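    As a rough illustration of the model structure only (the EP and MCMC inference are not shown), the NumPy sketch below draws data from a GP whose log noise variance and log signal variance are themselves given independent GP priors. The kernels, length-scales, and constants are assumptions made for the example, and the pointwise rescaling of the latent function is one simple way to realise a nonstationary signal variance.

```python
# Generative sketch: a latent signal GP plus independent GP priors on the
# log noise variance (heteroscedasticity) and log signal variance
# (nonstationarity). Inference over these latents is omitted.
import numpy as np

def rbf_cov(x, lengthscale, variance):
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 200)
jitter = 1e-8 * np.eye(x.size)

# Latent GPs for the log signal variance and the log noise variance.
log_sig2 = rng.multivariate_normal(np.zeros(x.size),
                                   rbf_cov(x, 3.0, 0.5) + jitter)
log_noise2 = rng.multivariate_normal(np.full(x.size, -3.0),
                                     rbf_cov(x, 3.0, 0.5) + jitter)

# Nonstationary signal: pointwise signal standard deviation scales a
# stationary latent GP draw.
f = rng.multivariate_normal(np.zeros(x.size), rbf_cov(x, 1.0, 1.0) + jitter)
signal = np.exp(0.5 * log_sig2) * f

# Heteroscedastic observations: input-dependent noise standard deviation.
y = signal + np.exp(0.5 * log_noise2) * rng.normal(size=x.size)
print(y[:5])
```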

    Variational Bayesian multinomial probit regression with Gaussian process priors

    It is well known in the statistics literature that augmenting binary and polychotomous response models with Gaussian latent variables enables exact Bayesian analysis via Gibbs sampling from the parameter posterior. By adopting such a data augmentation strategy, dispensing with priors over regression coefficients in favour of Gaussian process (GP) priors over functions, and employing variational approximations to the full posterior, we obtain efficient computational methods for Gaussian process classification in the multi-class setting. The model augmentation with additional latent variables ensures full a posteriori class coupling whilst retaining the simple a priori independent GP covariance structure, from which sparse approximations such as the multi-class Informative Vector Machine (IVM) emerge in a natural and straightforward manner. This is the first time that a fully variational Bayesian treatment of multi-class GP classification has been developed without resorting to additional explicit approximations to the non-Gaussian likelihood term. Empirical comparisons with exact analysis via MCMC and with Laplace approximations illustrate the utility of the variational approximation as a computationally economic alternative to full MCMC; it is also shown to be more accurate than the Laplace approximation.
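    The multinomial probit data augmentation the abstract builds on can be sketched generatively: each class gets an a priori independent GP draw, a unit-variance Gaussian auxiliary variable is added per class, and the observed label is the argmax over classes. The variational inference and the IVM sparsification are omitted, and the kernel and data below are illustrative assumptions.

```python
# Generative sketch of multinomial probit data augmentation with
# independent per-class GP priors; the label is the argmax of the
# Gaussian auxiliary latents. Posterior inference is not shown.
import numpy as np

def rbf_cov(x, lengthscale=1.0, variance=1.0):
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(3)
n, n_classes = 100, 3
x = np.sort(rng.uniform(0.0, 10.0, size=n))
K = rbf_cov(x) + 1e-8 * np.eye(n)

# A priori independent GP draws, one latent function per class.
F = np.stack([rng.multivariate_normal(np.zeros(n), K)
              for _ in range(n_classes)])

# Gaussian augmentation: auxiliary latents with unit noise; class = argmax.
Y_aux = F + rng.normal(size=F.shape)
labels = Y_aux.argmax(axis=0)
print("class counts:", np.bincount(labels, minlength=n_classes))
```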