    Selective Attention and Sensory Modality in Aging: Curses and Blessings

    The notion that selective attention is compromised in older adults as a result of impaired inhibitory control is well established. Yet it is primarily based on empirical findings covering the visual modality. Auditory and, especially, cross-modal selective attention are remarkably underexposed in the literature on aging. In the past five years, we have attempted to fill these voids by investigating the performance of younger and older adults on equivalent tasks covering all four combinations of visual or auditory target and visual or auditory distractor information. In doing so, we have demonstrated that older adults are especially impaired in auditory selective attention with visual distraction. This pattern of results was not mirrored by the results from our psychophysiological studies, however, in which both enhancement of target processing and suppression of distractor processing appeared to be age equivalent. We currently conclude that (1) age-related differences in selective attention are modality dependent, (2) age-related differences in selective attention are limited, and (3) it remains an open question whether modality-specific age differences in selective attention are due to impaired distractor inhibition, impaired target enhancement, or both. These conclusions put the longstanding inhibitory deficit hypothesis of aging in a new perspective.

    Structural Integrity of Attention Networks in Cross-Modal Selective Attention Performance in Healthy Aging

    The influence of structural brain changes in healthy aging on cross-modal selective attention performance was investigated with structural MRI (T1- and diffusion-weighted scans). Eighteen younger (M = 26.1 years, SD = 5.7) and 18 older (M = 62.4 years, SD = 4.9) healthy adults with normal hearing performed a reaction time (RT) cross-modal selective attention A/B/X task. Participants discriminated syllables presented in either the visual or the auditory modality, with either randomized or fixed distraction presented simultaneously in the opposite modality. Within the older group only, RT was significantly slower during random (M = 573.24, SE = 33.66) compared to fixed (M = 554.04, SE = 33.53) distraction, F(1,34) = 5.41, p = .026. Average gray matter thickness and white matter integrity were lower for older adults, all p < .05. Across the age range, lower average gray matter thickness in regions of the ventral (VAN), but not dorsal (DAN), attention network correlated with larger increases in RT related to distraction, all p < .05. Multiple regression revealed that white matter integrity did not predict the RT distraction index (random − fixed), all p > .05. However, post-hoc adaptive lasso regressions demonstrated that fractional anisotropy (FA) of the bilateral superior longitudinal fasciculus (SLF) predicted the RT distraction index, Wald χ2 = 3.88, p = .016. The present results indicate that structural integrity underlying both DAN and VAN may aid in cross-modal selective attention performance, suggesting that communication between the networks, likely via top-down modulation of bottom-up processes, may be crucial for optimal attention regulation.
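    As a rough illustration of the two analysis steps described above, the sketch below (Python, with synthetic data; the variable names, tract count, and learning setup are assumptions, not the authors' pipeline) computes the RT distraction index and fits an adaptive lasso using the standard two-stage recipe of weighting predictors by initial OLS estimates.

        import numpy as np
        from sklearn.linear_model import LinearRegression, LassoCV

        rng = np.random.default_rng(0)
        n = 36  # 18 younger + 18 older participants

        # Synthetic stand-ins: mean RTs (ms) under random vs. fixed distraction,
        # and FA values for four hypothetical white matter tracts.
        rt_random = 560 + 30 * rng.standard_normal(n)
        rt_fixed = 550 + 30 * rng.standard_normal(n)
        fa = rng.uniform(0.3, 0.6, size=(n, 4))

        # Distraction index: the RT cost of random relative to fixed distraction.
        distraction_index = rt_random - rt_fixed

        # Adaptive lasso: penalise each tract in inverse proportion to an initial
        # OLS estimate, so weak predictors are shrunk more aggressively.
        ols = LinearRegression().fit(fa, distraction_index)
        weights = 1.0 / np.maximum(np.abs(ols.coef_), 1e-8)
        lasso = LassoCV(cv=5).fit(fa / weights, distraction_index)
        coef = lasso.coef_ / weights  # back on the original FA scale
        print("selected tract coefficients:", np.round(coef, 3))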

    Top-down and bottom-up modulation in processing bimodal face/voice stimuli

    Background: Processing of multimodal information is a critical capacity of the human brain, with classic studies showing bimodal stimulation either facilitating or interfering in perceptual processing. Comparing activity to congruent and incongruent bimodal stimuli can reveal sensory dominance in particular cognitive tasks. Results: We investigated audiovisual interactions driven by stimulus properties (bottom-up influences) or by task (top-down influences) on congruent and incongruent simultaneously presented faces and voices while ERPs were recorded. Subjects performed gender categorisation, directing attention either to faces or to voices, and also judged whether the face/voice stimuli were congruent in terms of gender. Behaviourally, the unattended modality affected processing in the attended modality: the disruption was greater for attended voices. ERPs revealed top-down modulations of early brain processing (30-100 ms) over unisensory cortices. No effects were found on the N170 or VPP, but from 180-230 ms larger right frontal activity was seen for incongruent than congruent stimuli. Conclusions: Our data demonstrate that in a gender categorisation task the processing of faces dominates over the processing of voices. Brain activity showed different modulation by top-down and bottom-up information. Top-down influences modulated early brain activity, whereas bottom-up interactions occurred relatively late.
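    To make the reported ERP contrast concrete, here is a minimal sketch (synthetic data; the electrode, trial counts, and effect size are assumptions) of extracting mean amplitude in the 180-230 ms window where incongruent stimuli elicited larger right frontal activity.

        import numpy as np

        fs = 500                              # sampling rate (Hz), illustrative
        t = np.arange(-0.1, 0.5, 1 / fs)      # epoch time axis (s)
        rng = np.random.default_rng(1)

        # Synthetic single-electrode epochs: trials x samples, with an
        # added congruency effect in the 180-230 ms window.
        congruent = rng.standard_normal((120, t.size))
        incongruent = (rng.standard_normal((120, t.size))
                       + 0.5 * ((t > 0.18) & (t < 0.23)))

        # Mean amplitude in the reported congruency-effect window.
        win = (t >= 0.18) & (t <= 0.23)
        print(f"congruent:   {congruent[:, win].mean():.3f}")
        print(f"incongruent: {incongruent[:, win].mean():.3f}")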

    A Visionary Approach to Listening: Determining The Role Of Vision In Auditory Scene Analysis

    To recognize and understand the auditory environment, the listener must first separate sounds that arise from different sources and capture each event. This process is known as auditory scene analysis. The aim of this thesis is to investigate whether and how visual information can influence auditory scene analysis. The thesis consists of four chapters. Firstly, I reviewed the literature to give a clear framework about the impact of visual information on the analysis of complex acoustic environments. In Chapter II, I examined psychophysically whether temporal coherence between auditory and visual stimuli was sufficient to promote auditory stream segregation in a mixture. I found that listeners were better able to report brief deviants in an amplitude-modulated target stream when a visual stimulus changed in size in a temporally coherent manner than when the visual stream was coherent with the non-target auditory stream. This work demonstrates that temporal coherence between auditory and visual features can influence the way people analyse an auditory scene. In Chapter III, the integration of auditory and visual features in auditory cortex was examined by recording neuronal responses in awake and anaesthetised ferret auditory cortex in response to the modified stimuli used in Chapter II. I demonstrated that temporal coherence between auditory and visual stimuli enhances the neural representation of a sound and influences which sound a neuron represents in a sound mixture. Visual stimuli elicited reliable changes in the phase of the local field potential, which provides mechanistic insight into this finding. Together these findings provide evidence that early cross-modal integration underlies the behavioural effects in Chapter II. Finally, in Chapter IV, I investigated whether training can influence the ability of listeners to utilize visual cues for auditory stream analysis, and showed that this ability improved when listeners were trained to detect auditory-visual temporal coherence.
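    The core stimulus manipulation, temporal coherence between an auditory amplitude envelope and visual size, can be sketched as follows (Python; the modulation cutoff, duration, carrier frequency, and frame rate are illustrative assumptions, not the thesis's exact parameters).

        import numpy as np

        fs = 44100
        t = np.arange(int(fs * 2.0)) / fs     # 2 s of samples
        rng = np.random.default_rng(2)

        def random_envelope(rng, cutoff=7.0):
            """Slow stochastic amplitude envelope via low-pass-filtered noise."""
            spectrum = np.fft.rfft(rng.standard_normal(t.size))
            freqs = np.fft.rfftfreq(t.size, 1 / fs)
            spectrum[freqs > cutoff] = 0.0    # keep only slow modulations
            env = np.fft.irfft(spectrum, t.size)
            return (env - env.min()) / (env.max() - env.min())

        target_env = random_envelope(rng)
        masker_env = random_envelope(rng)     # independent, hence incoherent

        target = target_env * np.sin(2 * np.pi * 440 * t)   # AM tone stream

        # Temporally coherent visual stream: disc radius tracks the target
        # envelope, subsampled to a 60 Hz frame rate.
        radius = 1.0 + target_env[:: fs // 60]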

    Auditory Selective Attention to Speech Modulates Activity in the Visual Word Form Area

    Selective attention to speech versus nonspeech signals in complex auditory input could produce top-down modulation of cortical regions previously linked to perception of spoken, and even visual, words. To isolate such top-down attentional effects, we contrasted 2 equally challenging active listening tasks, performed on the same complex auditory stimuli (words overlaid with a series of 3 tones). Instructions required selectively attending to either the speech signals (in service of rhyme judgment) or the melodic signals (tone-triplet matching). Selective attention to speech, relative to attention to melody, was associated with blood oxygenation level-dependent (BOLD) increases during functional magnetic resonance imaging (fMRI) in left inferior frontal gyrus, temporal regions, and the visual word form area (VWFA). Further investigation of the activity in visual regions revealed overall deactivation relative to baseline rest for both attention conditions. Topographic analysis demonstrated that while attending to melody drove deactivation equivalently across all fusiform regions of interest examined, attending to speech produced a regionally specific modulation: deactivation of all fusiform regions, except the VWFA. Results indicate that selective attention to speech can topographically tune extrastriate cortex, leading to increased activity in VWFA relative to surrounding regions, in line with the well-established connectivity between areas related to spoken and visual word perception in skilled readers.
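    The topographic analysis amounts to comparing condition-wise BOLD estimates across fusiform ROIs; a toy sketch is below (entirely synthetic numbers chosen to mimic the reported pattern, with hypothetical ROI names).

        import numpy as np

        rng = np.random.default_rng(3)
        rois = ["VWFA", "fusiform_ant", "fusiform_mid", "fusiform_post"]

        # Percent signal change vs. rest per subject and ROI; negative values
        # are deactivations. The VWFA is spared only when attending to speech.
        attend_speech = rng.normal(-0.2, 0.05, size=(20, len(rois)))
        attend_speech[:, 0] += 0.25
        attend_melody = rng.normal(-0.2, 0.05, size=(20, len(rois)))

        for i, roi in enumerate(rois):
            print(f"{roi:14s} speech {attend_speech[:, i].mean():+.2f}  "
                  f"melody {attend_melody[:, i].mean():+.2f}")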

    Neurosystems: brain rhythms and cognitive processing

    Neuronal rhythms are ubiquitous features of brain dynamics and are highly correlated with cognitive processing. However, the relationship between the physiological mechanisms producing these rhythms and the functions associated with them remains mysterious. This article investigates the contributions of rhythms to basic cognitive computations (such as filtering signals by coherence and/or frequency) and to major cognitive functions (such as attention and multi-modal coordination). We offer support to the premise that the physiology underlying brain rhythms plays an essential role in how these rhythms facilitate some cognitive operations.
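    One of the basic computations mentioned, filtering signals by coherence, can be illustrated with SciPy's magnitude-squared coherence estimate (toy signals; the 40 Hz gamma-band component and the noise level are assumptions for illustration only).

        import numpy as np
        from scipy.signal import coherence

        fs = 1000
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(4)

        # Two noisy signals sharing a 40 Hz component: a stand-in for two
        # neuronal populations coordinated by a gamma-band rhythm.
        shared = np.sin(2 * np.pi * 40 * t)
        x = shared + rng.standard_normal(t.size)
        y = shared + rng.standard_normal(t.size)

        f, cxy = coherence(x, y, fs=fs, nperseg=1024)
        print(f"coherence near 40 Hz: {cxy[np.argmin(np.abs(f - 40))]:.2f}")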

    A dual role for prediction error in associative learning

    Confronted with a rich sensory environment, the brain must learn statistical regularities across sensory domains to construct causal models of the world. Here, we used functional magnetic resonance imaging and dynamic causal modeling (DCM) to furnish neurophysiological evidence that statistical associations are learnt even when task-irrelevant. Subjects performed an audio-visual target-detection task while being exposed to distractor stimuli. Unknown to them, auditory distractors predicted the presence or absence of subsequent visual distractors. We modeled incidental learning of these associations using a Rescorla-Wagner (RW) model. Activity in primary visual cortex and putamen reflected learning-dependent surprise: these areas responded progressively more to unpredicted, and progressively less to predicted, visual stimuli. Critically, this prediction-error response was observed even when the absence of a visual stimulus was surprising. We investigated the underlying mechanism by embedding the RW model into a DCM to show that auditory-to-visual connectivity changed significantly over time as a function of prediction error. Thus, consistent with predictive coding models of perception, associative learning is mediated by prediction-error-dependent changes in connectivity. These results posit a dual role for prediction error in encoding surprise and driving associative plasticity.
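    The RW model itself reduces to a one-line update driven by prediction error; a minimal sketch (the learning rate and trial sequence are arbitrary illustrations) shows how the same signal encodes surprise both when a predicted visual distractor appears (outcome 1) and when it is unexpectedly absent (outcome 0).

        # Rescorla-Wagner: the associative strength V moves toward the outcome
        # in proportion to the prediction error (outcome - V).
        def rw_update(v, outcome, alpha=0.1):
            prediction_error = outcome - v
            return v + alpha * prediction_error, prediction_error

        v = 0.0
        for trial, outcome in enumerate([1, 1, 1, 1, 0, 1, 1, 0], start=1):
            v, pe = rw_update(v, outcome)
            print(f"trial {trial}: outcome={outcome}  PE={pe:+.3f}  V={v:.3f}")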

    Cross-modal interference-control is reduced in childhood but maintained in aging: a cohort study of stimulus- and response-interference in cross-modal and unimodal Stroop tasks

    Interference-control is the ability to exclude distractions and focus on a specific task or stimulus. However, it is currently unclear whether the same interference-control mechanisms underlie the ability to ignore unimodal and cross-modal distractions. In two experiments we assessed whether unimodal and cross-modal interference follow similar trajectories in development and aging and occur at similar processing levels. In Experiment 1, 42 children (6-11 years), 31 younger adults (18-25 years) and 32 older adults (60-84 years) identified colour rectangles with either written (unimodal) or spoken (cross-modal) distractor-words. Stimuli could be congruent, incongruent but mapped to the same response (stimulus-incongruent), or incongruent and mapped to different responses (response-incongruent), thus separating interference occurring at early (sensory) and late (response) processing levels. Unimodal interference was worst in childhood and old age; however, older adults maintained the ability to ignore cross-modal distraction. Unimodal but not cross-modal response interference also reduced accuracy. In Experiment 2 we compared the effect of audition on vision, and vice versa, in 52 children (6-11 years), 30 young adults (22-33 years) and 30 older adults (60-84 years). As in Experiment 1, older adults maintained the ability to ignore cross-modal distraction arising from either modality, and neither type of cross-modal distraction limited accuracy in adults. However, cross-modal distraction still reduced accuracy in children, and children were more slowed by stimulus-interference than adults. We conclude that unimodal and cross-modal interference follow different lifespan trajectories, and that differences in stimulus- and response-interference may increase cross-modal distractibility in childhood.
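    The distinction between stimulus- and response-incongruent trials hinges on mapping two colours to each response key; a minimal sketch (the specific colours and keys are hypothetical, not taken from the study) makes the three trial types explicit.

        # Two colours share each response key, so a distractor word can mismatch
        # the target colour while still pointing at the same response.
        response_map = {"red": "left", "orange": "left",
                        "blue": "right", "green": "right"}

        def trial_type(target_colour, distractor_word):
            if distractor_word == target_colour:
                return "congruent"
            if response_map[distractor_word] == response_map[target_colour]:
                return "stimulus-incongruent"  # conflict at the sensory level only
            return "response-incongruent"      # conflict at the response level too

        print(trial_type("red", "red"))     # congruent
        print(trial_type("red", "orange"))  # stimulus-incongruent
        print(trial_type("red", "blue"))    # response-incongruent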

    Training enhances the ability of listeners to exploit visual information for auditory scene analysis

    The ability to use temporal relationships between cross-modal cues facilitates perception and behavior. Previously, we observed that temporally correlated changes in the size of a visual stimulus and the intensity of an auditory stimulus influenced the ability of listeners to perform an auditory selective attention task (Maddox, Atilgan, Bizley, & Lee, 2015). Participants detected timbral changes in a target sound while ignoring those in a simultaneously presented masker. When the visual stimulus was temporally coherent with the target sound, performance was significantly better than when the visual stimulus was temporally coherent with the masker, despite the visual stimulus conveying no task-relevant information. Here, we trained observers to detect audiovisual temporal coherence and asked whether this changed the way in which they were able to exploit visual information in the auditory selective attention task. We observed that after training, participants were able to benefit from temporal coherence between the visual stimulus and both the target and masker streams, relative to the condition in which the visual stimulus was coherent with neither sound. However, we did not observe such changes in a second group that was trained to discriminate modulation rate differences between temporally coherent audiovisual streams, although this group did show an improvement in overall performance. A control group did not change their performance between pre-test and post-test and did not change how they exploited visual information. These results provide insights into how cross-modal experience may optimize multisensory integration.