
    The sources of dual-task costs in multisensory WM tasks

    We investigated the sources of dual-task costs arising in multisensory working memory (WM) tasks, where stimuli from different modalities have to be simultaneously maintained. Performance decrements relative to unimodal single-task baselines have been attributed to a modality-unspecific central WM store, but such costs could also reflect increased demands on central executive processes involved in dual-task coordination. To compare these hypotheses, we asked participants to maintain 2, 3 or 4 visual items. Unimodal trials, where only this visual task was performed, and bimodal trials, where a concurrent tactile WM task required the additional maintenance of 2 tactile items, were randomly intermixed. We measured the visual and tactile contralateral delay activity (CDA/tCDA components) as markers of WM maintenance in visual and somatosensory areas. There were reliable dual-task costs, as visual CDA components were reduced in size and visual WM accuracy was impaired on bimodal relative to unimodal trials. However, these costs did not depend on visual load, which caused identical CDA modulations in unimodal and bimodal trials, suggesting that memorizing tactile items did not reduce the number of visual items that could be maintained. Visual load also did not affect tCDA amplitudes. These findings indicate that bimodal dual-task costs do not result from a competition between multisensory items for shared storage capacity. Instead, these costs reflect generic limitations of executive control mechanisms that coordinate multiple cognitive processes in dual-task contexts. Our results support hierarchical models of WM, where distributed maintenance processes with modality-specific capacity limitations are controlled by a central executive mechanism.
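    The CDA and tCDA referenced here are difference waves: ERP amplitude over the hemisphere contralateral to the memorized items minus the ipsilateral amplitude, averaged across a retention-interval window. A minimal illustrative sketch of that computation follows; the array layout, channel indices, and time window are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

def cda_amplitude(erp, contra_ch, ipsi_ch, fs, t0, window=(0.3, 1.0)):
    """Mean contralateral-minus-ipsilateral amplitude in a retention window.

    erp      -- array, trials x channels x timepoints (microvolts)
    contra_ch, ipsi_ch -- channel indices over each hemisphere (assumed)
    fs       -- sampling rate in Hz; t0 -- memory-sample onset in seconds
    """
    start = int((t0 + window[0]) * fs)
    stop = int((t0 + window[1]) * fs)
    contra = erp[:, contra_ch, start:stop].mean(axis=1)  # trials x time
    ipsi = erp[:, ipsi_ch, start:stop].mean(axis=1)
    return (contra - ipsi).mean()  # grand-average difference-wave amplitude

# e.g. hypothetical data: 200 trials, 64 channels, 1.5-s epochs at 500 Hz
erp = np.random.randn(200, 64, 750)
print(cda_amplitude(erp, contra_ch=[25, 26], ipsi_ch=[62, 63], fs=500, t0=0.2))
```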

    Independent attention mechanisms control the activation of tactile and visual working memory representations

    Working memory (WM) is limited in capacity, but it is controversial whether these capacity limitations are domain-general or are generated independently within separate modality-specific memory systems. These alternative accounts were tested in bimodal visual/tactile WM tasks. In Experiment 1, participants memorized the locations of simultaneously presented task-relevant visual and tactile stimuli. Visual and tactile WM load was manipulated independently (1, 2 or 3 items per modality), and one modality was unpredictably tested after each trial. To track the activation of visual and tactile WM representations during the retention interval, the visual and tactile contralateral delay activity (CDA and tCDA) were measured over visual and somatosensory cortex, respectively. CDA and tCDA amplitudes were selectively affected by WM load in the corresponding (tactile or visual) modality. The CDA parametrically increased when visual load increased from 1 to 2 and to 3 items. The tCDA was enhanced when tactile load increased from 1 to 2 items, and showed no further enhancement for 3 tactile items. Critically, these load effects were strictly modality-specific, as substantiated by Bayesian statistics. Increasing tactile load did not affect the visual CDA, and increasing visual load did not modulate the tCDA. Task performance at memory test was also unaffected by WM load in the other (untested) modality. This was confirmed in a second behavioral experiment where tactile and visual loads were either two or four items, unimodal baseline conditions were included, and participants performed a color change detection task in the visual modality. These results show that WM capacity is not limited by a domain-general mechanism that operates across sensory modalities. They suggest instead that WM storage is mediated by distributed modality-specific control mechanisms that are activated independently and in parallel during multisensory WM.
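    Performance in change-detection tasks of the kind used in the second experiment is often summarized as a capacity estimate such as Cowan's K; the abstract does not state which estimator the authors used, so the following is only an illustrative sketch.

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for single-probe change detection."""
    return set_size * (hit_rate - false_alarm_rate)

# e.g. 4 memorized items, 80% hits, 20% false alarms -> K of about 2.4 items
print(cowan_k(4, 0.80, 0.20))
```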

    Multisensory perception and decision-making with a new sensory skill

    It is clear that people can learn a new sensory skill – a new way of mapping sensory inputs onto world states. It remains unclear how flexibly a new sensory skill can become embedded in multisensory perception and decision-making. To address this, we trained typically-sighted participants (N=14) to use a new echo-like auditory cue to distance in a virtual world, together with a noisy visual cue. Using model-based analyses, we tested for key markers of efficient multisensory perception and decision-making with the new skill. We found that twelve of the fourteen participants learned to judge distance using the novel auditory cue. Their use of this new sensory skill showed three key features: (1) it enhanced the speed of timed decisions; (2) it largely resisted interference from a simultaneous digit span task; and (3) it integrated with vision in a Bayes-like manner to improve precision. We also show some limits following this relatively short training: precision benefits were lower than the Bayes-optimal prediction, and there was no forced fusion of signals. We conclude that people can already embed new sensory skills in flexible multisensory perception and decision-making after a short training period. A key application of these insights is to the development of sensory augmentation systems that can enhance human perceptual abilities in novel ways. The limitations we reveal (sub-optimality, lack of fusion) provide a foundation for further investigations of the limits of these abilities and their brain basis.
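    The "Bayes-like" integration and "Bayes-optimal prediction" referenced above correspond to the standard reliability-weighted combination rule for two independent Gaussian cues, under which each cue is weighted by its inverse variance. This is the textbook formulation, with A and V standing for the auditory and visual distance estimates; the notation is not taken from the paper itself:

```latex
\hat{s}_{AV} = w_A\,\hat{s}_A + w_V\,\hat{s}_V, \qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \qquad
w_V = 1 - w_A, \qquad
\sigma_{AV}^2 = \frac{\sigma_A^2\,\sigma_V^2}{\sigma_A^2 + \sigma_V^2}
```

    Because the fused variance \(\sigma_{AV}^2\) is below either single-cue variance, integration should always improve precision; "precision benefits lower than the Bayes-optimal prediction" means the observed variance reduction fell between this bound and that of the best single cue.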

    Shifts of spatial attention in visual and tactile working memory are controlled by independent modality-specific mechanisms

    The question of whether the attentional control of working memory (WM) is shared across sensory modalities remains controversial. Here, we investigated whether attention shifts in visual and tactile WM are regulated independently. Participants memorized visual and tactile targets in a first memory sample set (S1) before encoding targets in a second sample set (S2). Importantly, visual or tactile S2 targets could appear on the same side as the corresponding S1 targets, or on opposite sides, thus requiring shifts of spatial attention in visual or tactile WM. The activation of WM representations in modality-specific visual and somatosensory areas was tracked by recording visual and tactile contralateral delay activity (CDA/tCDA). CDA/tCDA components emerged contralateral to the side of visual or tactile S1 targets, and reversed polarity when S2 targets in the same modality appeared on the opposite side. Critically, the visual CDA was unaffected by the presence versus absence of concurrent attention shifts in tactile WM, and the tactile CDA remained insensitive to visual attention shifts. Visual and tactile WM performance was also not modulated by attention shifts in the other modality. These results show that the dynamic control of visual and tactile WM activation processes operates in an independent, modality-specific fashion.

    Age-Related Differences in Multimodal Information Processing and Their Implications for Adaptive Display Design.

    In many data-rich, safety-critical environments, such as driving and aviation, multimodal displays (i.e., displays that present information in visual, auditory, and tactile form) are employed to support operators in dividing their attention across numerous tasks and sources of information. However, limitations of this approach are not well understood. Specifically, most research on the effectiveness of multimodal interfaces has examined the processing of only two concurrent signals in different modalities, primarily in vision and hearing. Also, nearly all studies to date have involved young participants only. The goals of this dissertation were therefore to (1) determine the extent to which people can notice and process three unrelated concurrent signals in vision, hearing and touch, (2) examine how aging modulates this ability, and (3) develop countermeasures to overcome observed performance limitations. Adults aged 65+ years were of particular interest because they represent the fastest growing segment of the U.S. population, are known to suffer from various declines in sensory abilities, and experience difficulties with divided attention. Response times and incorrect response rates to singles, pairs, and triplets of visual, auditory, and tactile stimuli were significantly higher for older adults compared to younger participants. In particular, elderly participants often failed to notice the tactile signal when all three cues were combined. They also frequently falsely reported the presence of a visual cue when presented with a combination of auditory and tactile cues. These performance breakdowns were observed both in the absence and presence of a concurrent visual/manual (driving) task. Also, performance on the driving task suffered the most for older adult participants and with the combined visual-auditory-tactile stimulation. Introducing a half-second delay between two stimuli significantly increased response accuracy for older adults. This work adds to the knowledge base in multimodal information processing, the perceptual and attentional abilities and limitations of the elderly, and adaptive display design. From an applied perspective, these results can inform the design of multimodal displays and enable aging drivers to cope with increasingly data-rich in-vehicle technologies. The findings are expected to generalize and thus contribute to improved overall public safety in a wide range of complex environments.
    PhD thesis, Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/133203/1/bjpitts_1.pd

    Cross-modal interference-control is reduced in childhood but maintained in aging: a cohort study of stimulus-and response-interference in cross-modal and unimodal Stroop tasks

    Interference-control is the ability to exclude distractions and focus on a specific task or stimulus. However, it is currently unclear whether the same interference-control mechanisms underlie the ability to ignore unimodal and cross-modal distractions. In two experiments we assessed whether unimodal and cross-modal interference follow similar trajectories in development and aging and occur at similar processing levels. In Experiment 1, 42 children (6-11 years), 31 younger adults (18-25 years) and 32 older adults (60-84 years) identified colour rectangles with either written (unimodal) or spoken (cross-modal) distractor-words. Stimuli could be congruent, incongruent but mapped to the same response (stimulus-incongruent), or incongruent and mapped to different responses (response-incongruent), thus separating interference occurring at early (sensory) and late (response) processing levels. Unimodal interference was worst in childhood and old age; however, older adults maintained the ability to ignore cross-modal distraction. Unimodal but not cross-modal response interference also reduced accuracy. In Experiment 2 we compared the effect of audition on vision and vice versa in 52 children (6-11 years), 30 young adults (22-33 years) and 30 older adults (60-84 years). As in Experiment 1, older adults maintained the ability to ignore cross-modal distraction arising from either modality, and neither type of cross-modal distraction limited accuracy in adults. However, cross-modal distraction still reduced accuracy in children, who were also more slowed by stimulus-interference than adults were. We conclude that unimodal and cross-modal interference follow different lifespan trajectories, and that differences in stimulus- and response-interference may increase cross-modal distractibility in childhood.
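    The separation of early (stimulus-level) from late (response-level) interference rests on simple reaction-time subtractions across the three trial types. A hypothetical sketch with made-up values, not the study's data:

```python
# Mean RTs (ms) per trial type -- illustrative values only
rt_congruent = 610.0    # distractor matches target colour and response
rt_stim_incong = 650.0  # distractor mismatches colour but maps to same response
rt_resp_incong = 700.0  # distractor maps to a different response

stimulus_interference = rt_stim_incong - rt_congruent    # early, sensory level: 40 ms
response_interference = rt_resp_incong - rt_stim_incong  # late, response level: 50 ms
print(stimulus_interference, response_interference)
```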

    Semantic Bimodal Presentation Differentially Slows Working Memory Retrieval

    Although evidence has shown that working memory (WM) can be differentially affected by the multisensory congruency of different visual and auditory stimuli, it remains unclear whether multisensory congruency affects subsequent WM retrieval differently for concrete and abstract words. By manipulating the attention focus toward different matching conditions of visual and auditory word characteristics in a 2-back paradigm, the present study revealed that in the characteristically incongruent condition under auditory retrieval, responses to abstract words were faster than those to concrete words, indicating that auditory abstract words are not affected by visual representation, while auditory concrete words are. Conversely, for concrete words under visual retrieval, WM retrieval was faster in the characteristically incongruent condition than in the characteristically congruent condition, indicating that the visual representation evoked by auditory concrete words may interfere with WM retrieval of visual concrete words. These findings demonstrate that concrete words in multisensory conditions may be encoded together with additional visual representations, which can inadvertently slow WM retrieval. Abstract words, in contrast, appear to suppress such interference more effectively, yielding better WM performance than concrete words in the multisensory condition.
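    In the 2-back paradigm used here, each presented word must be compared with the word shown two positions earlier. A minimal sketch of how targets are defined; the example stream is hypothetical:

```python
def two_back_targets(stream):
    """Indices at which the item matches the one presented two steps back."""
    return [i for i in range(2, len(stream)) if stream[i] == stream[i - 2]]

print(two_back_targets(["dog", "sky", "dog", "cup", "dog", "cup"]))  # [2, 4, 5]
```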

    Effects of Multimodal Load on Spatial Monitoring as Revealed by ERPs

    While the role of selective attention in filtering out irrelevant information has been extensively studied, its characteristics and neural underpinnings when multiple environmental stimuli have to be processed in parallel are much less well known. Building upon a dual-task paradigm that induced spatial awareness deficits for contralesional hemispace in right hemisphere-damaged patients, we investigated the electrophysiological correlates of multimodal load during spatial monitoring in healthy participants. The position of briefly presented, lateralized targets had to be reported either in isolation (single task) or together with a concurrent task, visual or auditory, which recruited additional attentional resources (dual task). This top-down manipulation of attentional load, without any change in the sensory stimulation, modulated the amplitude of the first positive ERP response (P1) and shifted its neural generators, with a suppression of the signal in early visual areas during both visual and auditory dual tasks. Furthermore, later N2 contralateral components elicited by left targets were particularly influenced by the concurrent visual task and were related to increased activation of the supramarginal gyrus. These results suggest that the right hemisphere is particularly affected by load manipulations, and confirm its crucial role in supporting automatic orienting of spatial attention and in monitoring both hemispaces.

    The influence of auditory and contextual representations on visual working memory


    Neural Underpinnings of Walking Under Cognitive and Sensory Load: A Mobile Brain/Body Imaging Approach

    Dual-task walking studies, in which individuals engage in an attentionally-demanding task while walking, have provided indirect evidence, via behavioral and biomechanical measures, of the recruitment of higher-level cortical resources during gait. Additionally, recent EEG and imaging (PET, fNIRS) studies have revealed direct neurophysiological evidence of cortical contributions to steady-state walking. However, there remains a lack of knowledge regarding the underlying neural mechanisms involved in the allocation of cortical resources while walking under increased load. This dissertation presents three experiments designed to provide a greater understanding of the cortical dynamics implicated in processing load (top-down or bottom-up) during locomotion. Furthermore, we sought to investigate age-related differences in these neural pathways. These studies were conducted using an innovative EEG-based Mobile Brain/Body Imaging (MoBI) approach, combining high-density EEG, foot force sensors and 3D body motion capture as participants walked on a treadmill. The first study employed a Go/No-Go response inhibition task to evaluate the long-term test-retest reliability of two cognitively-evoked event-related potentials (ERPs), the earlier N2 and the later P3. Acceptable levels of reliability were found according to the intraclass correlation coefficient (ICC), and these were similar across sitting and walking conditions. Results indicate that electrocortical signals obtained during walking are stable indices of neurophysiological function. The aim of the second study was to characterize age-related changes in gait and in the allocation of cognitive control under single- vs. dual-task load. For young adults, we observed significant modulations as a result of increased task load for both gait (longer stride time) and ERPs (decreased N2 amplitude and P3 latency). In contrast, older adults exhibited costs in the cognitive domain (reduced accuracy), engaged in a more stereotyped pattern of walking, and showed a general lack of ERP modulation while walking under increased load, all of which may indicate reduced flexibility in resource allocation across tasks. Finally, the third study assessed the effects of sensory (optic flow and visual perturbations) and cognitive (Go/No-Go task) load manipulations on gait and cortical neuro-oscillatory activity in young adults. While walking under increased load, participants adopted a more conservative pattern of gait by taking shorter and wider strides, with cognitive load in particular associated with reduced motor variability. Using an Independent Component Analysis (ICA) and dipole-fitting approach, neuro-oscillatory activity was then calculated from eight source-localized clusters of Independent Components (ICs). Significant modulations in average spectral power in the theta (3-7 Hz), alpha (8-12 Hz), beta (13-30 Hz), and gamma (31-45 Hz) frequency bands were observed over occipital, parietal and frontal clusters of ICs, as a function of optic flow and task load. Overall, our findings demonstrate the reliability and feasibility of the MoBI approach for assessing electrocortical activity in dual-task walking situations, and may be especially relevant to older adults, who are less able to flexibly adjust to ongoing cognitive and sensory demands while walking.
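    The band-power measures reported in the third study (average spectral power in theta, alpha, beta, and gamma) can be sketched with a standard Welch periodogram. This is a generic illustration with assumed parameters; it omits the ICA and dipole-fitting steps of the actual MoBI pipeline and operates on a simulated component time course:

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (3, 7), "alpha": (8, 12), "beta": (13, 30), "gamma": (31, 45)}

def band_power(signal, fs):
    """Mean spectral power per frequency band for one channel or component."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)  # 2-second windows
    return {name: psd[(freqs >= lo) & (freqs <= hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# e.g. one source-localized independent component sampled at 500 Hz
ic = np.random.randn(60 * 500)  # placeholder: 60 s of simulated data
print(band_power(ic, fs=500))
```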