Inferring Functional Brain States Using Temporal Evolution of Regularized Classifiers
We present a framework for inferring functional brain state from electrophysiological (MEG or EEG) brain signals. Our approach is adapted to the needs of functional brain imaging rather than EEG-based brain-computer interface (BCI). This choice leads to a different set of requirements, in particular to the demand for more robust inference methods and more sophisticated model validation techniques. We approach the problem from a machine learning perspective, by constructing a classifier from a set of labeled signal examples. We propose a framework that focuses on the temporal evolution of regularized classifiers, with cross-validation used to select the optimal regularization parameter at each time frame. We demonstrate the inference obtained by this method on MEG data recorded from 10 subjects in a simple visual classification experiment, and provide a comparison to the classical non-regularized approach
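As a rough illustration of the per-time-frame analysis described above, the sketch below fits a regularized classifier at each time frame of epoched MEG/EEG data and picks the regularization strength by cross-validation. The data shapes, the choice of L2-regularized logistic regression, and the scikit-learn implementation are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch (not the authors' exact pipeline): decode a binary label at each
# time frame, choosing the regularization strength by cross-validation per frame.
# Assumptions: X has shape (n_trials, n_channels, n_times), y has shape (n_trials,).
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def timecourse_decoding(X, y, Cs=np.logspace(-3, 3, 7), n_splits=5, seed=0):
    """Return cross-validated decoding accuracy for each time frame."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = np.zeros(X.shape[2])
    for t in range(X.shape[2]):
        clf = make_pipeline(
            StandardScaler(),
            # Inner CV selects the regularization parameter C for this frame only.
            LogisticRegressionCV(Cs=Cs, cv=n_splits, penalty="l2", max_iter=1000),
        )
        # Outer CV estimates generalization accuracy at this frame.
        scores[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    return scores
```

Plotting the returned accuracy curve against time then shows when the recorded signals begin to carry class information.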
Persistency of Priors-Induced Bias in Decision Behavior and the fMRI Signal
It is well known that people take advantage of prior knowledge to bias decisions. To investigate this phenomenon behaviorally and in the brain, we acquired fMRI data while human subjects viewed ambiguous abstract shapes and decided whether a shape was of Category A (smoother) or B (bumpier). The decision was made in the context of one of two prior knowledge cues, 80/20 and 50/50. The 80/20 cue indicated that upcoming shapes had an 80% probability of being of one category, e.g., B, and a 20% probability of being of the other. The 50/50 cue indicated that upcoming shapes had an equal probability of being of either category. The ideal observer would bias decisions in favor of the indicated alternative at 80/20 and show zero bias at 50/50. We found that subjects did bias their decisions in the predicted direction at 80/20 but did not show zero bias at 50/50. Instead, at 50/50 the subjects retained biases of the same sign as their 80/20 biases, though of diminished magnitude. The signature of a persistent though diminished bias at 50/50 was also evident in fMRI data from frontal and parietal regions previously implicated in decision-making. As a control, we acquired fMRI data from naïve subjects who experienced only the 50/50 stimulus distributions during both the pre-scan training and the fMRI experiment. The behavioral and fMRI data from the naïve subjects reflected decision biases closer to those of the ideal observer than those of the prior knowledge subjects at 50/50. The results indicate that practice making decisions in the context of non-equal prior probabilities biases decisions made later when prior probabilities are equal. This finding may be related to the “anchoring and adjustment” strategy described in the psychology, economics, and marketing literatures, in which subjects adjust a first approximation response – the “anchor” – based on additional information, typically applying insufficient adjustment relative to the ideal observer
Mechanisms of visual attention in the human cortex.
A typical scene contains many different objects that, because of the limited processing capacity of the visual system, compete for neural representation. The competition among multiple objects in visual cortex can be biased by both bottom-up sensory-driven mechanisms and top-down influences, such as selective attention. Functional brain imaging studies reveal that, both in the absence and in the presence of visual stimulation, biasing signals due to selective attention can modulate neural activity in visual cortex in several ways. Although the competition among stimuli for representation is ultimately resolved within visual cortex, the source of top-down biasing signals derives from a network of areas in frontal and parietal cortex
Neuronal Correlates of the Set-Size Effect in Monkey Lateral Intraparietal Area
It has long been known that the brain is limited in the amount of sensory information that it can process at any given time. A well-known form of capacity limitation in vision is the set-size effect, whereby the time needed to find a target increases in the presence of distractors. The set-size effect implies that inputs from multiple objects interfere with each other, but the loci and mechanisms of this interference are unknown. Here we show that the set-size effect has a neural correlate in competitive visuo-visual interactions in the lateral intraparietal area, an area related to spatial attention and eye movements. Monkeys performed a covert visual search task in which they discriminated the orientation of a visual target surrounded by distractors. Neurons encoded target location, but responses associated with both target and distractors declined as a function of distractor number (set size). Firing rates associated with the target in the receptive field correlated with reaction time both within and across set sizes. The findings suggest that competitive visuo-visual interactions in areas related to spatial attention contribute to capacity limitations in visual searches
A functional dissociation of face-, body- and scene-selective brain areas based on their response to moving and static stimuli
The human brain contains areas that respond selectively to faces, bodies and scenes. Neuroimaging studies have shown that a subset of these areas responds more strongly to moving than to static stimuli, but the reasons for this functional dissociation remain unclear. In the present study, we simultaneously mapped the responses to motion in face-, body- and scene-selective areas in the right hemisphere using moving and static stimuli. Participants (N = 22) were scanned using functional magnetic resonance imaging (fMRI) while viewing videos containing bodies, faces, objects, scenes or scrambled objects, and static pictures from the beginning, middle and end of each video. Results demonstrated that lateral areas, including face-selective areas in the posterior and anterior superior temporal sulcus (STS), the extrastriate body area (EBA) and the occipital place area (OPA), responded more to moving than static stimuli. By contrast, there was no difference between the response to moving and static stimuli in ventral and medial category-selective areas, including the fusiform face area (FFA), occipital face area (OFA), amygdala, fusiform body area (FBA), retrosplenial complex (RSC) and parahippocampal place area (PPA). This functional dissociation between lateral and ventral/medial brain areas that respond selectively to different visual categories suggests that face-, body- and scene-selective networks may be functionally organized along a common dimension
The Human Posterior Superior Temporal Sulcus Samples Visual Space Differently From Other Face-Selective Regions
Neuroimaging studies show that ventral face-selective regions, including the fusiform face area (FFA) and occipital face area (OFA), preferentially respond to faces presented in the contralateral visual field (VF). In the current study we measured the VF response of the face-selective posterior superior temporal sulcus (pSTS). Across 3 functional magnetic resonance imaging experiments, participants viewed face videos presented in different parts of the VF. Consistent with prior results, we observed a contralateral VF bias in bilateral FFA, right OFA (rOFA), and bilateral human motion-selective area MT+. Intriguingly, this contralateral VF bias was absent in the bilateral pSTS. We then delivered transcranial magnetic stimulation (TMS) over right pSTS (rpSTS) and rOFA, while participants matched facial expressions in both hemifields. TMS delivered over the rpSTS disrupted performance in both hemifields, but TMS delivered over the rOFA disrupted performance in the contralateral hemifield only. These converging results demonstrate that the contralateral bias for faces observed in ventral face-selective areas is absent in the pSTS. This difference in VF response is consistent with face processing models proposing 2 functionally distinct pathways. It further suggests that these models should account for differences in interhemispheric connections between the face-selective areas across these 2 pathways
Complementary Roles of Systems Representing Sensory Evidence and Systems Detecting Task Difficulty During Perceptual Decision Making
Perceptual decision making is a multi-stage process where incoming sensory information is used to select one option from several alternatives. Researchers typically have adopted one of two conceptual frameworks to define the criteria for determining whether a brain region is involved in decision computations. One framework, building on single-unit recordings in monkeys, posits that activity in a region involved in decision making reflects the accumulation of evidence toward a decision threshold, thus showing the lowest level of BOLD signal during the hardest decisions. The other framework instead posits that activity in a decision-making region reflects the difficulty of a decision, thus showing the highest level of BOLD signal during the hardest decisions. We had subjects perform a face detection task on degraded face images while we simultaneously recorded BOLD activity. We searched for brain regions where changes in BOLD activity during this task supported either of these frameworks by calculating the correlation of BOLD activity with reaction time – a measure of task difficulty. We found that the right supplementary eye field, right frontal eye field, and right inferior frontal gyrus had increased activity relative to baseline that positively correlated with reaction time, while the left superior frontal sulcus and left middle temporal gyrus had decreased activity relative to baseline that negatively correlated with reaction time. We propose that a simple mechanism that scales a region's activity based on task demands can explain our results
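As a rough sketch of the correlation analysis described above, the snippet below correlates trial-wise BOLD amplitudes with reaction times for each voxel; the data format (per-trial response estimates) and the use of a simple Pearson correlation are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch (assumed data, not the authors' pipeline): correlate trial-wise BOLD
# amplitude with reaction time to separate regions whose activity rises with task
# difficulty from regions whose activity falls with it.
# Assumptions: betas has shape (n_trials, n_voxels) of per-trial response estimates,
# rt has shape (n_trials,) of reaction times in seconds.
import numpy as np
from scipy import stats

def rt_correlation_map(betas, rt):
    """Pearson r (and p-value) between each voxel's trial-wise amplitude and RT."""
    n_voxels = betas.shape[1]
    r = np.empty(n_voxels)
    p = np.empty(n_voxels)
    for v in range(n_voxels):
        r[v], p[v] = stats.pearsonr(betas[:, v], rt)
    return r, p

# Voxels with r > 0 behave like the difficulty-tracking regions described above,
# whereas voxels with r < 0 behave like the evidence-tracking regions.
```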
The Superior Temporal Sulcus Is Causally Connected to the Amygdala: A Combined TBS-fMRI Study.
Nonhuman primate neuroanatomical studies have identified a cortical pathway from the superior temporal sulcus (STS) projecting into dorsal subregions of the amygdala, but whether this same pathway exists in humans is unknown. Here, we addressed this question by combining theta burst transcranial magnetic stimulation (TBS) with fMRI to test the prediction that the STS and amygdala are functionally connected during face perception. Human participants (N = 17) were scanned, over two sessions, while viewing 3 s video clips of moving faces, bodies, and objects. During these sessions, TBS was delivered over the face-selective right posterior STS (rpSTS) or over the vertex control site. A region-of-interest analysis revealed results consistent with our hypothesis. Namely, TBS delivered over the rpSTS reduced the neural response to faces (but not to bodies or objects) in the rpSTS, right anterior STS (raSTS), and right amygdala, compared with TBS delivered over the vertex. By contrast, TBS delivered over the rpSTS did not significantly reduce the neural response to faces in the right fusiform face area or right occipital face area. This pattern of results is consistent with the existence of a cortico-amygdala pathway in humans for processing face information projecting from the rpSTS, via the raSTS, into the amygdala. This conclusion is consistent with nonhuman primate neuroanatomy and with existing face perception models
Measuring the response to visually presented faces in the human lateral prefrontal cortex
Neuroimaging studies identify multiple face-selective areas in the human brain. In the current study we compared the functional response of the face area in the lateral prefrontal cortex to that of other face-selective areas. In Experiment 1, participants (N=32) were scanned while viewing videos containing faces, bodies, scenes, objects, and scrambled objects. We identified a face-selective area in the right inferior frontal gyrus (rIFG). In Experiment 2, participants (N=24) viewed the same videos or static images. Results showed that the rIFG, right posterior superior temporal sulcus (rpSTS) and right occipital face area (rOFA) exhibited a greater response to moving than to static faces. In Experiment 3, participants (N=18) viewed face videos in the contralateral and ipsilateral visual fields. Results showed that the rIFG and rpSTS showed no visual field bias, while the rOFA and right fusiform face area (rFFA) showed a contralateral bias. These experiments suggest two conclusions. First, across all three experiments the face area in the IFG was not as reliably identified as the face areas in the occipitotemporal cortex. Second, the similarity of the response profiles in the IFG and pSTS suggests that these areas may perform similar cognitive functions, a conclusion consistent with prior neuroanatomical and functional connectivity evidence