
    Statistical Analysis of fMRI Time-Series: A Critical Review of the GLM Approach.

    Functional magnetic resonance imaging (fMRI) is one of the most widely used tools to study the neural underpinnings of human cognition. Standard analysis of fMRI data relies on a general linear model (GLM) approach to separate stimulus-induced signals from noise. Crucially, this approach rests on a number of assumptions about the data which, for inferences to be valid, must be met. The current paper reviews the GLM approach to the analysis of fMRI time-series, focusing in particular on the degree to which such data abide by the assumptions of the GLM framework, and on the methods that have been developed to correct for violations of those assumptions. Rather than biasing estimates of effect size, the major consequence of non-conformity to the assumptions is to introduce bias into estimates of the variance, thus affecting test statistics, power, and false positive rates. Furthermore, this bias can have pervasive effects on both individual-subject and group-level statistics, potentially yielding qualitatively different results across replications, especially after the thresholding procedures commonly used for inference-making.
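
    To make the framework concrete, the following minimal Python sketch (not taken from the paper; the toy block design, noise level, and contrast are all assumed for illustration) fits a single-voxel GLM by ordinary least squares. The variance term in the t-statistic is exactly where violated noise assumptions, such as serial correlation, introduce the bias the review describes.

        import numpy as np

        # Hypothetical single-voxel time series y (T volumes) and design
        # matrix X with a toy block-design task regressor plus an intercept.
        rng = np.random.default_rng(0)
        T = 200
        task = (np.arange(T) % 40 < 20).astype(float)   # toy block design
        X = np.column_stack([task, np.ones(T)])
        y = X @ np.array([1.5, 100.0]) + rng.normal(0, 2, T)

        # OLS estimate of beta; residuals feed the noise-variance estimate.
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta

        # The variance of the contrast c'beta assumes i.i.d. errors; serially
        # correlated fMRI noise biases this estimate, and hence t, power, and
        # false positive rates, which is the core issue the review addresses.
        c = np.array([1.0, 0.0])
        dof = T - X.shape[1]
        sigma2 = resid @ resid / dof
        var_c = sigma2 * c @ np.linalg.inv(X.T @ X) @ c
        t_stat = (c @ beta) / np.sqrt(var_c)
        print(f"beta_task = {beta[0]:.3f}, t = {t_stat:.2f}")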

    Examining the performance of trend surface models for inference on Functional Magnetic Resonance Imaging (fMRI) data

    The current predominant approach to neuroimaging data analysis is to use voxels as units of computation in a mass univariate approach, which does not appropriately account for the existing spatial correlation and is plagued by problems of multiple comparisons. There is therefore a need to explore alternative approaches to inference on neuroimaging data that accurately model spatial autocorrelation, potentially providing better Type I error control and more sensitive inference. In this project we examine the performance of a trend surface modeling (TSM) approach that is based on a biologically relevant parcellation of the brain. We present our results from applying the TSM to both task fMRI and resting-state fMRI and compare the latter to the results from the parametric software, FSL. We demonstrate that the TSM provides better Type I error control, as well as sensitive inference on task data.
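
    As a rough illustration of the trend-surface idea (the second-order polynomial, the parcel data, and all parameters below are assumptions for the sketch, not the authors' exact model), a statistic map within one parcel can be regressed on low-order polynomials of voxel coordinates, so that smooth spatial structure is modeled rather than ignored:

        import numpy as np

        # Hypothetical z-statistic values and (x, y, z) coordinates for the
        # voxels of a single parcel; a trend surface models the statistic as
        # a low-order polynomial in space, capturing spatial autocorrelation.
        rng = np.random.default_rng(1)
        coords = rng.uniform(-1, 1, size=(500, 3))
        stat = 0.8 * coords[:, 0] ** 2 + rng.normal(0, 0.3, 500)

        def design(c):
            x, y, z = c.T
            # Second-order polynomial trend surface: intercept, linear,
            # interaction, and quadratic terms in the spatial coordinates.
            return np.column_stack([np.ones(len(c)), x, y, z,
                                    x * y, x * z, y * z,
                                    x**2, y**2, z**2])

        X = design(coords)
        beta, _, _, _ = np.linalg.lstsq(X, stat, rcond=None)
        fitted = X @ beta
        print("residual SD after removing spatial trend:",
              np.std(stat - fitted))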

    The empirical replicability of task-based fMRI as a function of sample size

    Replicating results (i.e., obtaining consistent results using a new, independent dataset) is an essential part of good science. As replicability has consequences for theories derived from empirical studies, it is of utmost importance to better understand the mechanisms underlying it. A popular tool for non-invasive neuroimaging studies is functional magnetic resonance imaging (fMRI). While the effect of underpowered studies is well documented, the empirical assessment of the interplay between sample size and replicability of results for task-based fMRI studies remains limited. In this work, we extend existing work on this assessment in two ways. First, we use a large database of 1,400 subjects performing four types of tasks from the IMAGEN project to subsample a series of independent samples of increasing size. Second, replicability is evaluated using a multi-dimensional framework consisting of three measures: (un)conditional test-retest reliability, coherence, and stability. We demonstrate not only a positive effect of sample size, but also a trade-off between spatial resolution and replicability. When replicability is assessed voxelwise, or when observing small areas of activation, a larger sample size than typically used in fMRI is required to replicate results. On the other hand, when focusing on clusters of voxels, we observe higher replicability. In addition, we observe variability in the size of clusters of activation between experimental paradigms or contrasts of parameter estimates within these paradigms.
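
    One common way to quantify agreement between thresholded maps from two independent subsamples is the Dice overlap coefficient; the sketch below is illustrative only (the paper's multi-dimensional framework of test-retest reliability, coherence, and stability is richer, and the toy maps and threshold here are assumed):

        import numpy as np

        def dice_overlap(map_a, map_b, threshold):
            """Dice coefficient between two thresholded activation maps,
            a simple proxy for voxelwise replicability across subsamples."""
            a = map_a > threshold
            b = map_b > threshold
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

        # Toy example: two noisy z-maps sharing the same underlying signal,
        # standing in for maps estimated from two independent samples.
        rng = np.random.default_rng(2)
        signal = np.zeros(10000)
        signal[:500] = 3.0
        map1 = signal + rng.normal(size=10000)
        map2 = signal + rng.normal(size=10000)
        print(f"Dice at z>2.3: {dice_overlap(map1, map2, 2.3):.2f}")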

    An Iterative Jackknife Approach for Assessing Reliability and Power of fMRI Group Analyses

    For functional magnetic resonance imaging (fMRI) group activation maps, so-called second-level random effects approaches are commonly used, which are intended to be generalizable to the population as a whole. However, the reliability of a given activation focus as a function of group composition or group size cannot directly be deduced from such maps. This question is of particular relevance when examining smaller groups (<20–27 subjects). The approach presented here addresses this issue by iteratively excluding each subject from a group study and presenting the overlap of the resulting (reduced) second-level maps in a group percent overlap map. This makes it possible to judge where activation is reliable even upon excluding one, two, or three (or more) subjects, thereby also demonstrating the inherent variability that is still present in second-level analyses. Moreover, when group size is progressively decreased, foci of activation become smaller and/or disappear; hence, the group size at which a given activation disappears can be considered to reflect the power necessary to detect this particular activation. Systematically exploiting this effect makes it possible to rank clusters according to their observable effect size. The approach is tested using different scenarios from a recent fMRI study (children performing a “dual-use” fMRI task, n = 39), and the implications of this approach are discussed.
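
    The leave-one-subject-out step can be sketched as follows (a minimal illustration, assuming simulated first-level contrast estimates and an uncorrected voxelwise threshold; the published approach operates on full second-level maps and extends to excluding multiple subjects):

        import numpy as np
        from scipy import stats

        def jackknife_overlap(betas, alpha=0.001):
            """Leave-one-subject-out group maps and their percent overlap.

            betas: (n_subjects, n_voxels) first-level contrast estimates.
            Returns, per voxel, the fraction of reduced one-sample t-maps
            in which it survives the (illustrative) threshold.
            """
            n = betas.shape[0]
            hits = np.zeros(betas.shape[1])
            for i in range(n):
                reduced = np.delete(betas, i, axis=0)
                t, p = stats.ttest_1samp(reduced, 0.0, axis=0)
                hits += (p < alpha) & (t > 0)
            return hits / n

        rng = np.random.default_rng(3)
        betas = rng.normal(0.0, 1.0, size=(20, 5000))
        betas[:, :200] += 0.8          # a true effect in 200 voxels
        overlap = jackknife_overlap(betas)
        print("voxels reliable in every reduced map:", (overlap == 1).sum())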

    Low-level carbon monoxide exposure affects BOLD fMRI response.

    Blood oxygen level dependent (BOLD) fMRI is a common technique for measuring brain activation that could be affected by low-level carbon monoxide (CO) exposure from, e.g., smoking. This study aimed to probe the vulnerability of BOLD fMRI to CO and determine whether it may constitute a significant neuroimaging confound. The effects of low-level CO (6 ppm exhaled) on the BOLD response were assessed in 12 healthy never-smokers on two separate experimental days (CO and air control). fMRI tasks were breath-holds (hypercapnia), visual stimulation, and finger-tapping. The BOLD fMRI response was lower during breath-holds, visual stimulation, and finger-tapping in the CO protocol compared to the air control protocol. Behavioural and physiological measures remained unchanged. We conclude that BOLD fMRI may be vulnerable to changes in baseline CO, and suggest exercising caution when imaging populations exposed to elevated CO levels. Further work is required to fully elucidate the impact of CO on fMRI and its underlying mechanisms.

    Introducing alternative-based thresholding for defining functional regions of interest in fMRI

    In fMRI research, one often aims to examine activation in specific functional regions of interest (fROIs). Current statistical methods tend to localize fROIs inconsistently, focusing on avoiding the detection of false activation. However, not missing true activation is equally important in this context. In this study, we explored the potential of an alternative-based thresholding (ABT) procedure, in which evidence against the null hypothesis of no effect and evidence against a prespecified alternative hypothesis are both measured, to control false positives and false negatives directly. The procedure was validated in the context of localizer tasks on simulated brain images and using a real dataset of 100 runs per subject. Voxels categorized as active with ABT can be confidently included in the definition of the fROI, while inactive voxels can be confidently excluded. Additionally, the ABT method complements classic null hypothesis significance testing with valuable information by distinguishing between voxels that show evidence against both the null and the alternative and voxels for which the alternative hypothesis cannot be rejected despite a lack of evidence against the null.
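
    The dual-testing logic can be sketched as follows; note that the decision rules, the one-sided t-test setting, and the noncentrality parameter `delta` for the alternative are all assumptions made for this illustration and may differ from the paper's exact procedure:

        import numpy as np
        from scipy import stats

        def abt_classify(t, dof, delta, alpha=0.05):
            """Sketch of alternative-based thresholding for one-sided t-tests.

            p_null measures evidence against H0 (no effect); p_alt measures
            evidence against a prespecified alternative with noncentrality
            `delta`. Voxels are labeled by which hypothesis survives.
            """
            p_null = stats.t.sf(t, dof)               # P(T >= t | H0)
            p_alt = stats.nct.cdf(t, dof, delta)      # P(T <= t | H1)
            active = (p_null < alpha) & (p_alt >= alpha)
            inactive = (p_alt < alpha) & (p_null >= alpha)
            return np.where(active, "active",
                            np.where(inactive, "inactive", "uncertain"))

        rng = np.random.default_rng(4)
        t_vals = rng.standard_t(df=30, size=5) + np.array([0, 0, 3, 5, 6])
        print(abt_classify(t_vals, dof=30, delta=3.0))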

    Some contributions to Ricean and complex-valued modeling of functional MRI time series

    Although it is well known that data from functional magnetic resonance imaging (fMRI) experiments are complex-valued as a result of Fourier reconstruction, the vast majority of statistical analyses focus only on the magnitudes of these complex-valued measurements and discard the phase information. Moreover, most magnitude-only analyses rely on a Gaussian approximation to the Ricean-distributed magnitudes, which is not (even approximately) valid at low signal-to-noise ratios (SNRs). As a result, we advocate the use of the entire complex-valued data in statistical modeling and extend the complex-valued-data model of Rowe and Logan (2004) by applying AR(p) dependence to the real and imaginary errors. Based on this complex-valued model, we develop a likelihood-ratio test (LRT) for detecting activated brain voxels (or volume elements) which outperforms an LRT based on a Gaussian-assumed AR(p) magnitude-only model for both simulated and experimental data. For existing fMRI datasets with unrecoverable phase information, we advocate Ricean modeling of the magnitude data; to this end, we compare the performance of activation tests based on Ricean and Gaussian magnitude-only models. In addition, we develop tests based on an AR(p) Ricean model that augments the observed magnitude data with missing phase data in an EM algorithm framework. Somewhat surprisingly, the Ricean-based activation tests perform similarly to their Gaussian-based counterparts, even at low SNRs, which further supports the use of complex-valued data.
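
    The key distributional fact is easy to demonstrate: the magnitude of a complex Gaussian measurement is Rice-distributed, and the Gaussian approximation degrades at low SNR. The sketch below (white noise only, ignoring the AR(p) structure; signal and noise levels assumed) compares the two log-likelihoods on simulated data:

        import numpy as np
        from scipy import stats

        # Complex-valued measurement: signal nu plus independent Gaussian
        # noise on the real and imaginary channels; the magnitude is then
        # Rice(nu, sigma), which a Gaussian only approximates at high SNR.
        rng = np.random.default_rng(5)
        nu, sigma, n = 1.0, 1.0, 10000            # low SNR: nu/sigma = 1
        re = nu + rng.normal(0, sigma, n)
        im = rng.normal(0, sigma, n)
        mag = np.hypot(re, im)

        # Average log-likelihood under the exact Rician model vs. the
        # Gaussian approximation N(nu, sigma^2) used by many
        # magnitude-only analyses.
        ll_rice = stats.rice.logpdf(mag, nu / sigma, scale=sigma).mean()
        ll_gauss = stats.norm.logpdf(mag, loc=nu, scale=sigma).mean()
        print(f"Rician: {ll_rice:.3f}  Gaussian approx: {ll_gauss:.3f}")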

    The physiological bases of hidden noise-induced hearing loss: protocol for a functional neuroimaging study

    Background: Rodent studies indicate that noise exposure can cause permanent damage to synapses between inner hair cells and high-threshold auditory nerve fibers without permanently altering threshold sensitivity. These demonstrations of what is commonly known as “hidden hearing loss” have been confirmed in several rodent species, but the implications for human hearing are unclear. Objective: Our Medical Research Council (MRC) funded programme aims to address this unanswered question by investigating the functional consequences of the damage to the human peripheral and central auditory nervous system that results from cumulative lifetime noise exposure. Behavioral and neuroimaging techniques are being used in a series of parallel studies aimed at detecting hidden hearing loss in humans. The planned neuroimaging study aims to (1) identify central auditory biomarkers associated with hidden hearing loss, (2) investigate whether there are any additive contributions from tinnitus or diminished sound tolerance, which are often comorbid with hearing problems, and (3) explore the relation between subcortical functional magnetic resonance imaging (fMRI) measures and the auditory brainstem response (ABR). Methods: Individuals aged 25 to 40 years with pure-tone hearing thresholds ≤ 20 dB HL over the range 500 Hz to 8 kHz and no contraindications for MRI or signs of ear disease will be recruited into the study. Lifetime noise exposure will be estimated using an in-depth structured interview. Auditory responses throughout the central auditory system will be recorded using ABR and fMRI. Analyses will focus predominantly on correlations between lifetime noise exposure and auditory response characteristics. Results: This article reports the study protocol. The programme grant was awarded in July 2013. Enrollment for the study described in this protocol commenced in February 2017 and was completed in December 2017. Results are expected in 2018. Conclusions: This challenging and comprehensive study has the potential to impact diagnostic procedures for hidden hearing loss, enabling early identification of noise-induced auditory damage via the detection of changes in central auditory processing. Consequently, this will create the opportunity to give personalized advice regarding the provision of ear defense and the monitoring of further damage, thus reducing the incidence of noise-induced hearing loss.