
    Reduced haemodynamic response in the ageing visual cortex measured by absolute fNIRS

    The effect of healthy ageing on visual cortical activation has yet to be fully explored. This study aimed to elucidate whether the haemodynamic response (HDR) of the visual cortex is altered as a result of ageing. Visually normal (healthy) participants were presented with a simple visual stimulus (reversing checkerboard). Full optometric screening was implemented to identify two age groups: younger adults (n = 12, mean age 21) and older adults (n = 13, mean age 71). Frequency-domain multi-distance (FD-MD) functional near-infrared spectroscopy (fNIRS) was used to measure absolute changes in oxygenated [HbO] and deoxygenated [HbR] haemoglobin concentrations in the occipital cortices. Utilising a slow event-related design, subjects viewed a full-field reversing checkerboard with contrast and check-size manipulations (15 and 30 minutes of arc, 50% and 100% contrast). Both groups showed the characteristic response of increased [HbO] and decreased [HbR] during stimulus presentation. However, older adults produced a more varied HDR and often had comparable levels of [HbO] and [HbR] during both stimulus presentation and baseline resting state. Younger adults had significantly greater concentrations of both [HbO] and [HbR] in every investigation, regardless of the type of stimulus displayed (p < 0.05). The average variance associated with this age-related effect was 88% for [HbO] and 91% for [HbR]. Passive viewing of a visual stimulus, without any cognitive input, showed a marked age-related decline in the cortical HDR. Moreover, regardless of stimulus parameters such as check size, the HDR was characterised by age. In concurrence with the present neuroimaging literature, we conclude that the visual HDR decreases as healthy ageing proceeds.
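The abstract does not spell out how [HbO] and [HbR] are recovered from the optical signal. As a simplified illustration, the modified Beer-Lambert law relates optical-density changes at two near-infrared wavelengths to the two chromophore concentration changes; inverting that 2x2 system gives the concentrations. The extinction coefficients and pathlength factors below are invented placeholder values, not the instrument's actual calibration:

```python
import numpy as np

# Hypothetical extinction coefficients [1/(mM*cm)]; illustrative only.
# Rows: two NIR wavelengths (e.g., ~760 nm, ~850 nm); columns: (HbO, HbR).
EPSILON = np.array([[0.59, 1.67],
                    [1.10, 0.78]])

def mbll_concentrations(delta_od, distance_cm, dpf):
    """Solve the modified Beer-Lambert law for (d[HbO], d[HbR]).

    delta_od    : optical-density changes at the two wavelengths
    distance_cm : source-detector separation
    dpf         : differential pathlength factor per wavelength
    """
    # Effective optical pathlength per wavelength
    L = distance_cm * np.asarray(dpf, dtype=float)
    # delta_od = (EPSILON scaled by pathlength) @ [dHbO, dHbR]
    A = EPSILON * L[:, None]
    return np.linalg.solve(A, np.asarray(delta_od, dtype=float))
```

For example, forward-simulating optical-density changes from known concentration changes and feeding them back through `mbll_concentrations` recovers the original values.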

    When the Choice Is Ours: Context and Agency Modulate the Neural Bases of Decision-Making

    The option to choose between several courses of action is often associated with the feeling of being in control. Yet, in certain situations, one may prefer to decline such agency and instead leave the choice to others. In the present functional magnetic resonance imaging (fMRI) study, we provide evidence that the neural processes involved in decision-making are modulated not only by who controls our choice options (agency), but also by whether we have a say in who is in control (context). The fMRI results are noteworthy in that they reveal specific contributions of the anterior frontomedian cortex (viz. BA 10) and the rostral cingulate zone (RCZ) in decision-making processes. The RCZ is engaged when conditions clearly present us with the most choice options. BA 10 is engaged in particular when the choice is completely ours, as well as when it is completely up to others to choose for us, which in turn gives rise to an attribution of control to oneself or someone else, respectively. After all, it matters not only whether we have any options to choose from, but also who decides on that.

    Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli

    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared with one another and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies, an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
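The abstract does not detail the extraction procedure. Under a simple additive model (the AV waveform is assumed to be the sum of its auditory and visual contributions), the visual contribution can be estimated by subtracting the unisensory auditory ERP, time-shifted by the SOA. This is a minimal sketch of that idea, not the authors' actual signal-processing pipeline:

```python
import numpy as np

def extract_visual_erp(av_erp, a_erp, soa_samples):
    """Estimate the visual contribution to a multisensory ERP by
    subtracting the SOA-shifted unisensory auditory ERP.

    soa_samples > 0 means the auditory onset lags the visual onset.
    Assumes a purely additive A + V model (a simplification).
    """
    shifted = np.zeros_like(a_erp)
    if soa_samples >= 0:
        # Delay the auditory waveform by soa_samples
        shifted[soa_samples:] = a_erp[:len(a_erp) - soa_samples]
    else:
        # Advance the auditory waveform by |soa_samples|
        shifted[:soa_samples] = a_erp[-soa_samples:]
    return av_erp - shifted
```

In practice the study analysed five 50-ms SOA subranges; with a known sampling rate, each SOA in milliseconds maps to an integer `soa_samples` shift.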

    Distribution of Attention Modulates Salience Signals in Early Visual Cortex

    Previous research has shown that the extent to which people spread attention across the visual field plays a crucial role in visual selection and the occurrence of bottom-up driven attentional capture. Consistent with previous findings, we show that when attention was diffusely distributed across the visual field while searching for a shape singleton, an irrelevant salient color singleton captured attention. However, while using the very same displays and task, no capture was observed when observers initially focused their attention at the center of the display. Using event-related fMRI, we examined the modulation of retinotopic activity related to attentional capture in early visual areas. Because the sensory display characteristics were identical in both conditions, we were able to isolate the brain activity associated with exogenous attentional capture. The results show that spreading of attention leads to increased bottom-up exogenous capture and increased activity in visual area V3, but not in V2 and V1.

    Children with Reading Disability Show Brain Differences in Effective Connectivity for Visual, but Not Auditory Word Comprehension

    Background: Previous literature suggests that those with reading disability (RD) have more pronounced deficits during semantic processing in reading as compared to listening comprehension. This discrepancy has been supported by recent neuroimaging studies showing abnormal activity in RD during semantic processing in the visual but not in the auditory modality. Whether effective connectivity between brain regions in RD could also show this pattern of discrepancy has not been investigated. Methodology/Principal Findings: Children (8- to 14-year-olds) were given a semantic task in the visual and auditory modality that required an association judgment as to whether two sequentially presented words were associated. Effective connectivity was investigated using Dynamic Causal Modeling (DCM) on functional magnetic resonance imaging (fMRI) data. Bayesian Model Selection (BMS) was used separately for each modality to find a winning family of DCM models, separately for typically developing (TD) and RD children. BMS yielded the same winning family, with modulatory effects on bottom-up connections from the input regions to the middle temporal gyrus (MTG) and inferior frontal gyrus (IFG), and inconclusive evidence regarding top-down modulations. Bayesian Model Averaging (BMA) was thus conducted across models in this winning family and compared across groups. The bottom-up effect from the fusiform gyrus (FG) to MTG, rather than the top-down effect from IFG to MTG, was stronger in TD compared to RD for the visual modality.
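At the neural level, DCM is built around a bilinear state equation, dx/dt = (A + u·B)x + C·u, where A holds fixed connections, B holds how an experimental input u modulates them, and C holds driving inputs. The toy Euler-integration sketch below illustrates how a modulatory bottom-up FG-to-MTG connection would be expressed; the region layout and parameter values are invented for illustration, not the fitted models from the study:

```python
import numpy as np

def simulate_dcm(A, B, C, u, dt=0.1, steps=200):
    """Euler integration of the bilinear DCM neural state equation
    dx/dt = (A + u*B) x + C*u, for a single experimental input u(t).
    Illustrative only; real DCM also includes a haemodynamic model.
    """
    x = np.zeros(A.shape[0])
    trace = []
    for t in range(steps):
        ut = u[t]
        x = x + dt * ((A + ut * B) @ x + C * ut)
        trace.append(x.copy())
    return np.array(trace)

# Invented 3-region chain: FG (index 0) -> MTG (1) -> IFG (2),
# each with self-decay on the diagonal for stability.
A = np.array([[-1.0, 0.0, 0.0],
              [ 0.4, -1.0, 0.0],
              [ 0.0,  0.3, -1.0]])
B = np.zeros((3, 3))
B[1, 0] = 0.5          # task input strengthens the FG -> MTG connection
C = np.array([1.0, 0.0, 0.0])  # stimulus drives FG
```

Turning the modulatory entry `B[1, 0]` on or off changes how strongly FG activity propagates to MTG, which is the kind of group difference (TD vs. RD) the BMA comparison quantifies.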

    A Functional and Structural Investigation of the Human Fronto-Basal Volitional Saccade Network

    Almost all cortical areas are connected to the subcortical basal ganglia (BG) through parallel recurrent inhibitory and excitatory loops, exerting volitional control over automatic behavior. As this model is largely based on non-human primate research, we used high resolution functional MRI and diffusion tensor imaging (DTI) to investigate the functional and structural organization of the human (pre)frontal cortico-basal network controlling eye movements. Participants performed saccades in darkness, pro- and antisaccades and observed stimuli during fixation. We observed several bilateral functional subdivisions along the precentral sulcus around the human frontal eye fields (FEF): a medial and lateral zone activating for saccades in darkness, a more fronto-medial zone preferentially active for ipsilateral antisaccades, and a large anterior strip along the precentral sulcus activating for visual stimulus presentation during fixation. The supplementary eye fields (SEF) were identified along the medial wall containing all aforementioned functions. In the striatum, the BG area receiving almost all cortical input, all saccade related activation was observed in the putamen, previously considered a skeletomotor striatal subdivision. Activation elicited by the cue instructing pro or antisaccade trials was clearest in the medial FEF and right putamen. DTI fiber tracking revealed that the subdivisions of the human FEF complex are mainly connected to the putamen, in agreement with the fMRI findings. The present findings demonstrate that the human FEF has functional subdivisions somewhat comparable to non-human primates. However, the connections to and activation in the human striatum preferentially involve the putamen, not the caudate nucleus as is reported for monkeys. This could imply that fronto-striatal projections for the oculomotor system are fundamentally different between humans and monkeys. 
Alternatively, there could be a bias in published reports of monkey studies favoring the caudate nucleus over the putamen in the search for oculomotor functions.

    Interaction of cortical networks mediating object motion detection by moving observers

    The task of parceling perceived visual motion into self- and object-motion components is critical to safe and accurate visually guided navigation. In this paper, we used functional magnetic resonance imaging to determine the cortical areas functionally active in this task and the pattern of connectivity among them, in order to investigate the cortical regions of interest and networks that allow subjects to detect object motion separately from induced self-motion. Subjects were presented with nine textured objects during simulated forward self-motion and were asked to identify the target object, which had an additional, independent motion component toward or away from the observer. Cortical activation was distributed among occipital, intra-parietal and fronto-parietal areas. We performed a network analysis of connectivity data derived from partial correlation and multivariate Granger causality analyses among functionally active areas. This revealed four coarsely separated network clusters: bilateral V1 and V2; visually responsive occipito-temporal areas, including bilateral LO, V3A, KO (V3B) and hMT; bilateral VIP, DIPSM and right precuneus; and a cluster of higher, primarily left-hemispheric regions, including the central sulcus, post-, pre- and sub-central sulci, pre-central gyrus, and FEF. We suggest that the visually responsive networks are involved in forming the representation of the visual stimulus, while the higher, left-hemisphere cluster is involved in mediating the interpretation of the stimulus for action. Our main focus was on the relationships of activations during our task among the visually responsive areas. To determine the properties of the mechanism corresponding to the visual processing networks, we compared subjects' psychophysical performance to a model of object motion detection based solely on relative motion among objects and found that it was inconsistent with observer performance. Our results support the use of scene context (e.g., eccentricity, depth) in the detection of object motion. We suggest that the cortical activation and visually responsive networks provide a potential substrate for this computation.
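Of the two connectivity measures mentioned, partial correlation is the simpler to sketch: unlike plain correlation, it measures the association between two regions after conditioning on all others, and it can be read directly off the inverse covariance (precision) matrix. A minimal illustration follows; the study's actual pipeline, including the multivariate Granger causality analysis, is considerably more involved:

```python
import numpy as np

def partial_correlation(ts):
    """Partial correlation matrix for multi-region time series.

    ts : array of shape (timepoints, regions).
    Uses the standard identity: the partial correlation between
    regions i and j given all others is -P_ij / sqrt(P_ii * P_jj),
    where P is the precision (inverse covariance) matrix.
    """
    prec = np.linalg.inv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr
```

Thresholding the resulting matrix yields an undirected graph over the functionally active areas, on which clustering of the kind reported above (e.g., separating occipito-temporal from fronto-parietal groups) can be performed.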