3,248 research outputs found

    fMRI activation detection with EEG priors

    The purpose of brain mapping techniques is to advance the understanding of the relationship between structure and function in the human brain in so-called activation studies. In this work, an advanced statistical model for combining functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) recordings is developed to fuse complementary information about the location of neuronal activity. More precisely, a new Bayesian method is proposed for enhancing fMRI activation detection through EEG-based spatial prior information in stimulus-based experimental paradigms. Specifically, we model and analyse stimulus influence by a spatial Bayesian variable selection scheme, and extend existing high-dimensional regression methods by incorporating prior information on binary selection indicators via a latent probit regression with either a spatially varying or constant EEG effect. Spatially varying effects are regularized by intrinsic Markov random field priors. Inference is based on a full Bayesian Markov chain Monte Carlo (MCMC) approach. Whether the proposed algorithm is able to increase the sensitivity of fMRI-only models is examined in both a real-world application and a simulation study. We observe that carefully selected EEG prior information additionally increases sensitivity in activation regions that have been distorted by a low signal-to-noise ratio.
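
    The following is a minimal, hypothetical sketch (not the authors' implementation) of the core mechanism: the prior inclusion probability of a voxel's binary activation indicator comes from a probit link on an EEG-derived covariate, and is combined with the fMRI evidence to give a posterior inclusion probability. All names (y_v, x, eeg_v, sigma2, tau2, alpha, delta) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def posterior_inclusion(y_v, x, eeg_v, sigma2=1.0, tau2=1.0, alpha=-1.0, delta=2.0):
    """Posterior probability that voxel v is active (gamma_v = 1).

    y_v   : (T,) fMRI time series of voxel v
    x     : (T,) stimulus regressor (e.g. an HRF-convolved design column)
    eeg_v : scalar EEG-derived prior evidence at voxel v
    """
    # EEG-informed prior inclusion probability via a probit link
    p1 = norm.cdf(alpha + delta * eeg_v)

    # Marginal likelihoods under gamma_v = 0 (no activation) and gamma_v = 1,
    # with the activation effect integrated out under a N(0, tau2) prior.
    # The common Gaussian normalising constant cancels in the ratio below.
    xtx, xty = x @ x, x @ y_v
    log_m0 = -0.5 * (y_v @ y_v) / sigma2
    var1 = 1.0 / (xtx / sigma2 + 1.0 / tau2)
    log_m1 = log_m0 + 0.5 * np.log(var1 / tau2) + 0.5 * var1 * (xty / sigma2) ** 2

    # Posterior inclusion probability as a sigmoid of the log odds
    a, b = np.log(p1) + log_m1, np.log1p(-p1) + log_m0
    return 1.0 / (1.0 + np.exp(b - a))
```

    In the full model described in the abstract, these indicators would be updated jointly within an MCMC sampler, with spatially varying EEG effects smoothed by intrinsic Markov random field priors; the sketch only shows the per-voxel EEG-to-prior mechanism.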

    Imperfect predictability and mutual fund dynamics. How managers use predictors in changing systematic risk.

    Suppose a fund manager uses predictors in changing portfolio allocations over time. How does predictability translate into portfolio decisions? To answer this question we derive a new model within the Bayesian framework, where managers are assumed to modulate systematic risk in part by observing how the benchmark returns are related to some set of imperfect predictors, and in part on the basis of their own information set. In this portfolio allocation process, managers concern themselves with the potential benefits arising from the market timing generated by benchmark predictors and by private information. In doing this, we impose a structure on fund returns, betas, and benchmark returns that helps to analyse how managers really use predictors in changing investments over time. The main findings of our empirical work are that beta dynamics are significantly affected by economic variables, even though managers do not care about benchmark sensitivities towards the predictors in choosing their instrument exposure, and that persistence and leverage effects play a key role as well. Conditional market timing is virtually absent, if not negative, over the period 1990-2005. However, such anomalous negative timing ability is offset by the leverage effect, which in turn leads to an increase in mutual fund excess performance. JEL Classification: C11, C13, G12, G13. Keywords: Bayesian analysis, conditional asset pricing models, equity mutual funds, time-varying beta.
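
    As a simple illustration of the kind of conditional-beta structure the abstract describes, the sketch below estimates, by plain OLS rather than the paper's Bayesian approach, a fund beta that moves with lagged predictors: r_t = alpha + (b0 + b1' z_{t-1}) * rb_t + e_t. The variable names (fund_ret, bench_ret, predictors) are assumptions for illustration only.

```python
import numpy as np

def conditional_beta_ols(fund_ret, bench_ret, predictors):
    """OLS sketch of a time-varying beta model.

    fund_ret   : (T,)   fund excess returns
    bench_ret  : (T,)   benchmark excess returns
    predictors : (T, K) lagged instruments z_{t-1}, aligned with period t
    """
    T = len(fund_ret)
    X = np.column_stack([
        np.ones(T),                       # alpha
        bench_ret,                        # constant part of beta (b0)
        predictors * bench_ret[:, None],  # predictor-driven part of beta (b1)
    ])
    coef, *_ = np.linalg.lstsq(X, fund_ret, rcond=None)
    alpha, b0, b1 = coef[0], coef[1], coef[2:]
    beta_t = b0 + predictors @ b1         # implied time-varying beta path
    return alpha, b0, b1, beta_t
```

    The paper's model additionally lets beta depend on the manager's private information and treats the predictors as imperfect, which this OLS sketch deliberately omits.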

    Prior distributions for objective Bayesian analysis

    We provide a review of prior distributions for objective Bayesian analysis. We start by examining some foundational issues and then organize our exposition into priors for: i) estimation or prediction; ii) model selection; iii) high-dimensional models. With regard to i), we present some basic notions, and then move to more recent contributions on discrete parameter space, hierarchical models, nonparametric models, and penalizing complexity priors. Point ii) is the focus of this paper: it discusses principles for objective Bayesian model comparison, and singles out some major concepts for building priors, which are subsequently illustrated in some detail for the classic problem of variable selection in normal linear models. We also present some recent contributions in the area of objective priors on model space. With regard to point iii), we only provide a short summary of some default priors for high-dimensional models, a rapidly growing area of research.
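
    As one concrete example of a default prior used for variable selection in normal linear models, the sketch below computes the Bayes factor of a candidate model against the intercept-only null under Zellner's g-prior with the unit-information choice g = n, using the closed form BF = (1 + g)^{(n - p - 1)/2} / (1 + g(1 - R^2))^{(n - 1)/2}. This is a generic textbook illustration, not code from the paper.

```python
import numpy as np

def g_prior_log_bayes_factor(y, X_gamma, g=None):
    """log Bayes factor of the model with design X_gamma (no intercept column) vs. the null."""
    n, p = X_gamma.shape
    g = n if g is None else g                      # unit-information default
    yc = y - y.mean()
    Xc = X_gamma - X_gamma.mean(axis=0)            # flat prior on the common intercept
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    rss = np.sum((yc - Xc @ beta) ** 2)
    r2 = 1.0 - rss / np.sum(yc ** 2)               # usual R^2 of model gamma
    return 0.5 * (n - p - 1) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))
```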

    Deep learning cardiac motion analysis for human survival prediction

    Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), a hybrid network whose autoencoder learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients, the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p < .0001) for our model (C = 0.73, 95% CI: 0.68-0.78) than for the human benchmark (C = 0.59, 95% CI: 0.53-0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.
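
    The abstract names a Cox partial likelihood loss for right-censored outcomes; below is a minimal generic sketch of that loss in PyTorch, purely for illustration and not the 4Dsurvival code itself.

```python
import torch

def cox_partial_likelihood_loss(risk, time, event):
    """Negative log Cox partial likelihood.

    risk  : (N,) predicted risk scores (e.g. a head on the latent code)
    time  : (N,) follow-up times
    event : (N,) 1 if the event was observed, 0 if right-censored
    """
    order = torch.argsort(time, descending=True)   # sort so the risk set of
    risk, event = risk[order], event[order]        # subject i is subjects 0..i
    log_cumsum = torch.logcumsumexp(risk, dim=0)   # log sum_{j in risk set} exp(risk_j)
    log_lik = (risk - log_cumsum) * event          # only observed events contribute
    return -log_lik.sum() / event.sum().clamp(min=1)
```

    In a hybrid network of the kind described, a loss like this would be combined with the autoencoder's reconstruction objective so that the latent code is optimised for survival prediction.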

    Structured penalties for functional linear models---partially empirical eigenvectors for regression

    One of the challenges with functional data is incorporating spatial structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the ill-posed problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates spatial structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The GSVD clarifies the penalized estimation process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance, and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts.
    Comment: 29 pages, 3 figures, 5 tables; typo/notational errors edited and intro revised per journal review process.
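
    The estimator being analysed is the penalized functional linear model: minimise ||y - Xb||^2 + lam * ||Lb||^2 for a linear penalty operator L. The paper studies this problem through the GSVD of the pair (X, L); the hypothetical sketch below simply solves the same penalized problem directly via an augmented least-squares system, with a second-difference penalty standing in for the spatial-structure operator.

```python
import numpy as np

def penalized_coefficient_function(X, y, lam=1.0):
    """X: (n, p) discretised predictor curves; y: (n,) scalar responses."""
    p = X.shape[1]
    # Second-order difference operator L (shape (p - 2, p)): penalises
    # roughness of the coefficient function b over its domain.
    L = np.diff(np.eye(p), n=2, axis=0)
    # Augmented system: stacking sqrt(lam) * L under X yields the ridge-type
    # solution b = (X'X + lam L'L)^{-1} X'y via ordinary least squares.
    X_aug = np.vstack([X, np.sqrt(lam) * L])
    y_aug = np.concatenate([y, np.zeros(L.shape[0])])
    b_hat, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
    return b_hat
```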