4 research outputs found

    Age differences in fMRI adaptation for sound identity and location

    We explored age differences in auditory perception by measuring fMRI adaptation of brain activity to repetitions of sound identity (what) and location (where), using meaningful environmental sounds. In one condition, both sound identity and location were repeated, allowing us to assess non-specific adaptation. In other conditions, only one feature was repeated (identity or location) to assess domain-specific adaptation. Both young and older adults showed comparable non-specific adaptation (identity and location) in bilateral temporal lobes, medial parietal cortex, and subcortical regions. However, older adults showed reduced domain-specific adaptation to location repetitions in a distributed set of regions, including frontal and parietal areas, and to identity repetitions in anterior temporal cortex. We also re-analyzed data from a previously published 1-back fMRI study, in which participants responded to infrequent repetitions of the identity or location of meaningful sounds. This analysis revealed age differences in domain-specific adaptation in a set of brain regions that overlapped substantially with those identified in the adaptation experiment. This converging evidence of reduced auditory fMRI adaptation in older adults suggests that the processing of specific auditory “what” and “where” information is altered with age, which may influence cognitive functions that depend on this processing.
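    As a rough illustration of the adaptation measure described above, the sketch below computes a simple per-region adaptation index as the drop in response to repeated versus novel stimuli. The region names, beta values, and the adaptation_index helper are hypothetical and are not taken from the study.

```python
# Minimal sketch of an fMRI adaptation index: adaptation is quantified as the
# reduction in BOLD response when a sound feature repeats, relative to when it
# is novel. All names, shapes, and values below are illustrative assumptions.
import numpy as np

def adaptation_index(beta_novel, beta_repeat):
    """Per-ROI adaptation: positive values mean the response to repeated
    stimuli is reduced relative to novel stimuli."""
    return np.asarray(beta_novel, dtype=float) - np.asarray(beta_repeat, dtype=float)

# Hypothetical GLM betas (one value per region of interest) per condition.
rois    = ["temporal_L", "temporal_R", "parietal_med", "frontal_R"]
novel   = np.array([1.20, 1.15, 0.90, 0.75])   # both features novel
rep_id  = np.array([0.95, 0.92, 0.85, 0.74])   # sound identity repeated
rep_loc = np.array([1.05, 1.00, 0.80, 0.73])   # sound location repeated

for name, a_id, a_loc in zip(rois,
                             adaptation_index(novel, rep_id),
                             adaptation_index(novel, rep_loc)):
    print(f"{name}: identity adaptation={a_id:+.2f}, location adaptation={a_loc:+.2f}")
```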

    Optimizing Preprocessing and Analysis Pipelines for Single-Subject fMRI: 2. Interactions with ICA, PCA, Task Contrast and Inter-Subject Heterogeneity

    A variety of preprocessing techniques are available to correct subject-dependent artifacts in fMRI caused by head motion and physiological noise. Although it has been established that the chosen preprocessing steps (or “pipeline”) may significantly affect fMRI results, it is not well understood how preprocessing choices interact with other parts of the fMRI experimental design. In this study, we examine how two experimental factors interact with preprocessing: between-subject heterogeneity and strength of task contrast. Two levels of cognitive contrast were examined in an fMRI adaptation of the Trail-Making Test, with data from young, healthy adults. The importance of standard preprocessing with motion correction, physiological noise correction, motion parameter regression, and temporal detrending was examined for the two task contrasts. We also tested subspace estimation using Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Results were obtained with Penalized Discriminant Analysis, and model performance was quantified with reproducibility (R) and prediction (P) metrics. Simulation methods were also used to test for potential biases from individual-subject optimization. Our results demonstrate that (1) individual pipeline optimization is not significantly more biased than fixed preprocessing; (2) when applying a fixed pipeline across all subjects, the task contrast significantly affects pipeline performance; in particular, the effects of PCA and ICA models vary with contrast and are not by themselves optimal preprocessing steps; and (3) selecting the optimal pipeline for each subject improves within-subject (P, R) performance and between-subject overlap, with the weaker cognitive contrast being more sensitive to pipeline optimization. These results demonstrate that the sensitivity of fMRI results is influenced not only by preprocessing choices, but also by their interactions with other experimental design factors. This paper outlines a quantitative procedure for denoising data that would otherwise be discarded due to artifact, which is particularly relevant for weak signal contrasts in single-subject, small-sample, and clinical datasets.
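    To make the (P, R) performance metrics mentioned above concrete, the following sketch scores a preprocessed data set with a split-half scheme: prediction (P) as classification accuracy on a held-out half, and reproducibility (R) as the spatial correlation of activation maps from the two halves. The toy data, the class-difference map, and the score_pipeline helper are illustrative assumptions, not the authors' exact implementation.

```python
# Schematic pipeline scoring with reproducibility (R) and prediction (P)
# metrics via split-half resampling. Data generation, split scheme, and the
# simple nearest-class-mean predictor are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def score_pipeline(X, y):
    """X: (n_scans, n_voxels) preprocessed data; y: binary task labels.
    Returns (P, R): prediction accuracy on the held-out half, and the spatial
    correlation of the two half-sample activation maps."""
    n = len(y)
    idx = rng.permutation(n)
    half1, half2 = idx[: n // 2], idx[n // 2 :]

    def activation_map(rows):
        # A simple class-difference map stands in for the analysis model.
        return X[rows][y[rows] == 1].mean(0) - X[rows][y[rows] == 0].mean(0)

    map1, map2 = activation_map(half1), activation_map(half2)
    R = np.corrcoef(map1, map2)[0, 1]          # reproducibility of the maps

    # Predict half-2 labels by projecting onto the half-1 map.
    scores = X[half2] @ map1
    P = np.mean((scores > scores.mean()) == (y[half2] == 1))
    return P, R

# Toy dataset: 40 scans, 500 voxels, weak task signal in the first 50 voxels.
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 500))
X[y == 1, :50] += 0.5
print("P=%.2f, R=%.2f" % score_pipeline(X, y))
```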

    An evaluation of methods for detecting brain activations from functional neuroimages

    Brain activation studies based on PET or fMRI seek to explore neuroscience questions by using statistical techniques to analyze the acquired images and produce statistical parametric images (SPIs). An increasingly wide range of univariate and multivariate analysis techniques is used to generate SPIs in which local mean-signal activations and/or long-range spatial interactions may be detected. However, little is known about the comparative detection performance of these techniques in finite data sets, even for relatively simple imaging situations. The principal aim of this study is to empirically compare the detection performance of a range of simple statistical techniques using simulations and receiver operating characteristic (ROC) analysis. Using a generic phantom model based on parameters measured from a real PET study, we examined the effect on detection performance of the number of images employed (10-100), the type of data-matrix centering used, and the mean, variance, and covariation of the amplitude of activation signals among spatially separated sites. We compared the performance of pixel-by-pixel image comparisons using single-voxel or pooled variance estimates with methods based on pixel covariances, including covariance and correlation-coefficient thresholding, singular value decomposition (SVD), and SVD followed by a Fisher discriminant method, which is equivalent to Hotelling’s T² for our two-state simulations. In addition, th
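    The ROC methodology described in this abstract can be illustrated with a small simulation: generate images with known activation sites, form a pixel-by-pixel t-statistic SPI, and measure how well a detection threshold separates truly active from inactive pixels. The phantom size, signal amplitude, and the simple two-sample t statistic below are assumptions for illustration, not the study's actual simulation model.

```python
# Toy ROC analysis: simulate two-condition images with known active pixels,
# build a pixel-wise t-statistic SPI, then sweep a detection threshold.
# Phantom size, noise, and signal amplitude are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_images, n_pixels, n_active = 40, 1000, 50
truth = np.zeros(n_pixels, dtype=bool)
truth[:n_active] = True                      # ground-truth activation sites

labels = np.repeat([0, 1], n_images // 2)    # baseline vs. activation scans
data = rng.normal(size=(n_images, n_pixels))
data[np.ix_(labels == 1, truth)] += 0.6      # add signal at active pixels

# Pixel-by-pixel two-sample t statistic (single-voxel variance estimates).
a, b = data[labels == 1], data[labels == 0]
t = (a.mean(0) - b.mean(0)) / np.sqrt(
    a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))

# Sweep thresholds to trace the ROC curve (true vs. false positive rates).
thresholds = np.linspace(t.max(), t.min(), 50)
tpr = np.array([(t[truth] >= th).mean() for th in thresholds])
fpr = np.array([(t[~truth] >= th).mean() for th in thresholds])

# Rank-based area under the ROC curve: P(t_active > t_inactive).
auc = (t[truth][:, None] > t[~truth][None, :]).mean()
print(f"AUC = {auc:.3f}; TPR at FPR <= 0.05: {tpr[fpr <= 0.05].max():.2f}")
```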

    Empiricism and the Epistemic Status of Imaging Technologies

    The starting point for this project was the question of how to understand the epistemic status of mathematized imaging technologies such as positron emission tomography (PET) and confocal microscopy. These sorts of instruments play an increasingly important role in virtually all areas of biology and medicine. Some of these technologies have been widely celebrated as having revolutionized various fields of study, while others have been the target of substantial criticism. Thus, it is essential that we be able to assess these sorts of technologies as methods of producing evidence. They differ from one another in many respects, but one feature they all have in common is the use of multiple layers of statistical and mathematical processing that are essential to data production. This feature alone means that they do not fit neatly into any standard empiricist account of evidence. Yet this failure to be accommodated by philosophical accounts of good evidence does not indicate a general inadequacy on their part since, by many measures, they very often produce very high quality evidence. In order to understand how they can do so, we must look more closely at old philosophical questions concerning the role of experience and observation in acquiring knowledge about the external world. Doing so leads us to a new, grounded version of empiricism.

    After distinguishing between a weaker and a stronger, anthropomorphic version of empiricism, I argue that most contemporary accounts of observation are what I call benchmark strategies that, implicitly or explicitly, rely on the stronger version, according to which human sense experience holds a place of unique privilege. They attempt to extend the bounds of observation - and the epistemic privilege accorded to it - by establishing some type of relevant similarity to the benchmark of human perception. These accounts fail because they are unable to establish an epistemically motivated account of what relevant similarity consists of. The last best chance for any benchmark approach, and, indeed, for anthropomorphic empiricism, is to supplement a benchmark strategy with a grounding strategy. Toward this end, I examine the Grounded Benchmark Criterion, which defines relevant similarity to human perception in terms of the reliability-making features of human perception. This account, too, must fail due to our inability to specify these features given the current state of understanding of the human visual system. However, this failure reveals that it is reliability alone that is epistemically relevant, not any other sort of similarity to human perception.

    Current accounts of reliability suffer from a number of difficulties, so I develop a novel account of reliability that is based on the concept of granularity. My account of reliability in terms of a granularity match both provides the means to refine the weaker version of empiricism and allows us to establish when and why imaging technologies are reliable. Finally, I use this account of granularity in examining the importance of the fact that the output of imaging technologies usually is images.