6,212 research outputs found
A Bayesian Variable Selection Approach Yields Improved Detection of Brain Activation From Complex-Valued fMRI
Voxel functional magnetic resonance imaging (fMRI) time courses are complex-valued signals giving rise to magnitude and phase data. However, most studies use only the magnitude signals and thus discard half of the data, which could potentially contain important information. Methods that make use of complex-valued fMRI (CV-fMRI) data have been shown to yield superior power in detecting active voxels compared to magnitude-only methods, particularly at small signal-to-noise ratios (SNRs). We present a new Bayesian variable selection approach for detecting brain activation at the voxel level from CV-fMRI data. We develop models with complex-valued spike-and-slab priors on the activation parameters that are able to combine the magnitude and phase information. We present a complex-valued EM variable selection algorithm that leads to fast detection at the voxel level in CV-fMRI slices, and we also consider full posterior inference via Markov chain Monte Carlo (MCMC). Model performance is illustrated through extensive simulation studies, including the analysis of physically based simulated CV-fMRI slices. Finally, we use the complex-valued Bayesian approach to detect active voxels in human CV-fMRI from a healthy individual who performed unilateral finger tapping in a designed experiment. The proposed approach leads to improved detection of activation in the expected motor-related brain regions and produces fewer false positives than other methods for CV-fMRI. Supplementary materials for this article are available online.
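The spike-and-slab idea behind this abstract can be sketched on a single complex-valued coefficient. The toy model below is not the paper's algorithm: it puts a point mass at zero (spike) and a complex normal slab on one activation coefficient, and uses conjugacy to get the posterior inclusion probability in closed form. The function name, the prior settings (`s2`, `t2`, `p`), and the simulated data are all illustrative assumptions.

```python
import numpy as np

def inclusion_prob(y, x, s2=1.0, t2=4.0, p=0.5):
    """Posterior probability that beta is nonzero in y = beta*x + e,
    e ~ CN(0, s2*I), under a complex spike-and-slab prior:
    beta = 0 with prob 1 - p, beta ~ CN(0, t2) with prob p."""
    n = len(y)
    xx = np.vdot(x, x).real                    # ||x||^2 (vdot conjugates x)
    yy = np.vdot(y, y).real                    # ||y||^2
    xy = np.vdot(x, y)                         # x^H y
    # log marginal likelihoods, dropping the shared -n*log(pi) constant
    log_m0 = -n * np.log(s2) - yy / s2         # spike: y ~ CN(0, s2*I)
    # slab: y ~ CN(0, s2*I + t2*x x^H); rank-one update handled by the
    # matrix determinant lemma and the Sherman-Morrison formula
    log_det = n * np.log(s2) + np.log1p(t2 * xx / s2)
    quad = yy / s2 - t2 * abs(xy) ** 2 / (s2 * (s2 + t2 * xx))
    log_m1 = -log_det - quad
    log_odds = np.log(p / (1 - p)) + log_m1 - log_m0
    return 1.0 / (1.0 + np.exp(-log_odds))

rng = np.random.default_rng(0)
n = 64
x = np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # complex "design" time course
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
print(inclusion_prob(2.0 * x + noise, x))      # active voxel
print(inclusion_prob(noise, x))                # null voxel
```

Thresholding this probability (e.g. at 0.5) gives a per-voxel activation call; the EM and MCMC machinery in the abstract extends this single-coefficient calculation to full regression models over slices.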
A Hierarchical Bayesian Framework for Constructing Sparsity-inducing Priors
Variable selection techniques have become increasingly popular amongst
statisticians due to an increased number of regression and classification
applications involving high-dimensional data where we expect some predictors to
be unimportant. In this context, Bayesian variable selection techniques
involving Markov chain Monte Carlo exploration of the posterior distribution
over models can be prohibitively computationally expensive and so there has
been attention paid to quasi-Bayesian approaches such as maximum a posteriori
(MAP) estimation using priors that induce sparsity in such estimates. We focus
on this latter approach, expanding on the hierarchies proposed to date to
provide a Bayesian interpretation and generalization of state-of-the-art
penalized optimization approaches, while simultaneously providing a natural
way to include prior information about parameters within this framework. We give
examples of how to use this hierarchy to compute MAP estimates for linear and
logistic regression as well as sparse precision-matrix estimates in Gaussian
graphical models. In addition, an adaptive group lasso method is derived using
the framework.
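The simplest member of the family this abstract generalizes is MAP estimation under i.i.d. Laplace priors, which is exactly the lasso. The sketch below is a generic illustration of that connection, not the paper's hierarchy: it computes the sparse MAP estimate by proximal gradient descent (ISTA), where the Laplace prior shows up as a soft-thresholding step. The data and the `lam` value are made up.

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t*||.||_1 -- the coordinatewise update a Laplace prior induces."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_map(X, y, lam, n_iter=500):
    """MAP estimate for linear regression with i.i.d. Laplace priors on the
    coefficients (the lasso), computed by proximal gradient descent (ISTA)."""
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz const. of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)            # gradient of the least-squares loss
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [3.0, -2.0, 1.5]               # only 3 of 20 predictors matter
y = X @ beta_true + 0.1 * rng.normal(size=100)
beta_hat = lasso_map(X, y, lam=50.0)
print(np.nonzero(beta_hat)[0])                 # support of the sparse MAP estimate
```

Swapping the Laplace prior for a hierarchical (e.g. scale-mixture) prior changes only the prox step, which is how a single framework can recover adaptive and group variants of the penalty.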