    Age differences in fMRI adaptation for sound identity and location

    We explored age differences in auditory perception by measuring fMRI adaptation of brain activity to repetitions of sound identity (what) and location (where), using meaningful environmental sounds. In one condition, both sound identity and location were repeated, allowing us to assess non-specific adaptation. In other conditions, only one feature was repeated (identity or location) to assess domain-specific adaptation. Both young and older adults showed comparable non-specific adaptation (identity and location) in bilateral temporal lobes, medial parietal cortex, and subcortical regions. However, older adults showed reduced domain-specific adaptation to location repetitions in a distributed set of regions, including frontal and parietal areas, and to identity repetitions in anterior temporal cortex. We also re-analyzed data from a previously published 1-back fMRI study, in which participants responded to infrequent repetitions of the identity or location of meaningful sounds. This analysis revealed age differences in domain-specific adaptation in a set of brain regions that overlapped substantially with those identified in the adaptation experiment. This converging evidence of reduced auditory fMRI adaptation in older adults suggests that the processing of specific auditory “what” and “where” information is altered with age, which may influence cognitive functions that depend on this processing.
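
    fMRI adaptation is typically quantified as the reduction in response to a repeated stimulus feature relative to a novel one. As a rough illustration of that logic (not the authors' actual pipeline; the array names and values below are assumptions), a per-region adaptation index could be computed from condition-wise GLM estimates like this:

        import numpy as np

        # Hypothetical GLM beta estimates, shape (n_subjects, n_regions),
        # for novel presentations and for repetitions of one feature
        # (e.g., sound identity or location).
        rng = np.random.default_rng(0)
        beta_novel = rng.normal(1.0, 0.3, size=(20, 6))
        beta_repeat = beta_novel - rng.normal(0.2, 0.1, size=(20, 6))

        # Adaptation (repetition suppression): novel minus repeated response.
        # Positive values mean the region responded less to repetitions.
        adaptation = beta_novel - beta_repeat

        # Compare groups (say, the first 10 subjects are young, the rest older).
        young, older = adaptation[:10], adaptation[10:]
        for region in range(adaptation.shape[1]):
            diff = young[:, region].mean() - older[:, region].mean()
            print(f"region {region}: young - older adaptation = {diff:+.3f}")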

    Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration?

    Interactions between multisensory integration and attention were studied using a combined audiovisual streaming design and a rapid serial visual presentation paradigm. Event-related potentials (ERPs) following audiovisual objects (AV) were compared with the sum of the ERPs following auditory (A) and visual objects (V). Integration processes were expressed as the difference between these AV and (A + V) responses and were studied while attention was directed to one or both modalities or directed elsewhere. Results show that multisensory integration effects depend on the multisensory objects being fully attended, that is, on both the visual and auditory senses being attended. In this condition, a superadditive audiovisual integration effect was observed on the P50 component. When unattended, this effect was reversed; the P50 components of multisensory ERPs were smaller than the unisensory sum. Additionally, we found an enhanced late frontal negativity when subjects attended the visual component of a multisensory object. This effect, bearing a strong resemblance to the auditory processing negativity, appeared to reflect late attention-related processing that had spread to encompass the auditory component of the multisensory object. In conclusion, our results shed new light on how the brain processes multisensory auditory and visual information, including how attention modulates multisensory integration processes.
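
    The additive model used here treats any deviation of the AV response from the sum of the unisensory responses as a signature of integration. A minimal sketch of that comparison on trial-averaged ERP arrays (the shapes, sampling rate, and toy effect size are assumptions, not the authors' analysis code):

        import numpy as np

        # Hypothetical trial-averaged ERPs (channels x time samples, 1 kHz)
        # for auditory-only (A), visual-only (V), and audiovisual (AV) objects.
        n_channels, n_times = 32, 500
        rng = np.random.default_rng(1)
        erp_a = rng.normal(0, 1, (n_channels, n_times))
        erp_v = rng.normal(0, 1, (n_channels, n_times))
        erp_av = erp_a + erp_v + 0.5  # toy superadditive effect

        # Additive-model contrast: integration = AV - (A + V).
        # Zero everywhere would mean purely linear summation.
        integration = erp_av - (erp_a + erp_v)

        # Inspect the contrast in a P50-like window (40-60 ms post-stimulus).
        p50 = slice(40, 60)
        print("mean AV - (A + V) in P50 window:", integration[:, p50].mean())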

    Top-down effects on early visual processing in humans: a predictive coding framework

    An increasing number of human electroencephalography (EEG) studies examining the earliest component of the visual evoked potential, the so-called C1, have cast doubt on the previously prevalent notion that this component is impermeable to top-down effects. This article reviews the original studies that (i) described the C1, (ii) linked it to primary visual cortex (V1) activity, and (iii) suggested that its electrophysiological characteristics are exclusively determined by low-level stimulus attributes, particularly the spatial position of the stimulus within the visual field. We then describe conflicting evidence from animal studies and human neuroimaging experiments and provide an overview of recent EEG and magnetoencephalography (MEG) work showing that initial V1 activity in humans may be strongly modulated by higher-level cognitive factors. Finally, we formulate a theoretical framework for understanding top-down effects on early visual processing in terms of predictive coding.
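
    In predictive coding, higher areas send predictions down to lower areas and only the residual prediction error is passed forward, so top-down modulation of the earliest V1 response is expected rather than anomalous. A toy single-layer sketch in the spirit of Rao and Ballard's formulation (the dimensions and learning rate are arbitrary assumptions):

        import numpy as np

        rng = np.random.default_rng(2)

        # A latent representation r predicts the input x through generative
        # weights U; inference descends the squared prediction error.
        n_input, n_latent = 16, 4
        U = rng.normal(0, 0.5, (n_input, n_latent))  # top-down generative weights
        x = rng.normal(0, 1, n_input)                # sensory input (e.g., V1 drive)
        r = np.zeros(n_latent)                       # higher-level activity

        lr = 0.1
        for _ in range(200):
            prediction = U @ r        # top-down prediction of the input
            error = x - prediction    # prediction error carried feedforward
            r += lr * (U.T @ error)   # update latents to explain away the error

        print("residual prediction error norm:", np.linalg.norm(x - U @ r))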

    Laminar fMRI: applications for cognitive neuroscience

    The cortex is a massively recurrent network, characterized by feedforward and feedback connections between brain areas as well as lateral connections within an area. Feedforward, horizontal and feedback responses largely activate separate layers of a cortical unit, meaning they can be dissociated by lamina-resolved neurophysiological techniques. Such techniques are invasive and are therefore rarely used in humans. However, recent developments in high spatial resolution fMRI allow for non-invasive, in vivo measurements of brain responses specific to separate cortical layers. This provides an important opportunity to dissociate between feedforward and feedback brain responses, and to investigate communication between brain areas at a more fine-grained level than previously possible in humans. In this review, we highlight recent studies that successfully used laminar fMRI to isolate layer-specific feedback responses in human sensory cortex. In addition, we review several areas of cognitive neuroscience that stand to benefit from this new technological development, highlighting contemporary hypotheses that yield testable predictions for laminar fMRI. We hope to encourage researchers to embrace this development in fMRI research, as we expect that many future advancements in our current understanding of human brain function will be gained from measuring lamina-specific brain responses.

    Some investigations into non-passive listening

    Our knowledge of the function of the auditory nervous system is based upon a wealth of data obtained, for the most part, in anaesthetised animals. More recently, it has been generally acknowledged that factors such as attention profoundly modulate the activity of sensory systems, and that this modulation can take place at many levels of processing. Imaging studies, in particular, have revealed greater activation of auditory areas, and of areas outside sensory cortex, when a stimulus is attended. We present here a brief review of the consequences of such non-passive listening and go on to describe some of the experiments we are conducting to investigate them. In fMRI studies, we can demonstrate the activation of attention networks that are not specific to the sensory modality, as well as greater and different activation of the areas of the supra-temporal plane that include primary and secondary auditory areas. The profuse descending connections of the auditory system seem likely to be part of the mechanisms subserving attention to sound. These are generally thought to be largely inactivated by anaesthesia. However, we have been able to demonstrate that, even in an anaesthetised preparation, removing the descending control from the cortex leads to quite profound changes in the temporal patterns of activation by sounds in the thalamus and inferior colliculus. Some of these effects seem to be specific to the ear of stimulation and affect interaural processing. To bridge these observations, we are developing an awake, behaving preparation involving freely moving animals, in which it will be possible to investigate the effects of consciousness (by contrasting awake and anaesthetised states) and of passive versus active listening.

    Multisensory Congruency as a Mechanism for Attentional Control over Perceptual Selection

    The neural mechanisms underlying attentional selection of competing neural signals for awareness remain an unresolved issue. We studied attentional selection using perceptually ambiguous stimuli in a novel multisensory paradigm that combined competing auditory and competing visual stimuli. We demonstrate that the ability to select, and attentively hold, one of the competing alternatives in either sensory modality is greatly enhanced when there is a matching cross-modal stimulus. Intriguingly, this multimodal enhancement of attentional selection seems to require a conscious act of attention, as passively experiencing the multisensory stimuli did not enhance control over the stimulus. We also demonstrate that congruent auditory or tactile information, and combined auditory–tactile information, aids attentional control over competing visual stimuli, and vice versa. Our data suggest a functional role for recently found neurons that combine voluntarily initiated attentional functions across sensory modalities. We argue that these units provide a mechanism for structuring multisensory inputs that are then used to selectively modulate early (unimodal) cortical processing, boosting the gain of task-relevant features for willful control over perceptual awareness.

    Asymmetric spatial processing under cognitive load

    Spatial attention allows us to selectively process information within a certain location in space. Despite the vast literature on spatial attention, the effect of cognitive load on spatial processing is still not fully understood. In this study we added cognitive load to a spatial processing task to see whether it would differentially impact the processing of visual information in the left versus the right hemispace. The main paradigm consisted of a detection task that was performed during the maintenance interval of a verbal working memory (WM) task. We found that increasing WM load had a more negative impact on detecting targets presented on the left side than on those presented on the right side. At the individual level, the strength of the load effect correlated with the strength of this load-by-hemispace interaction. The implications of an asymmetric attentional bias, with a relative disadvantage for the left (versus the right) hemispace under high verbal WM load, are discussed.
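
    The statistic of interest here is a load-by-hemispace interaction: the left-right detection difference should grow as WM load increases. A minimal sketch of how such an interaction term can be computed from per-condition hit rates (the numbers are made up for illustration):

        import numpy as np

        # Hypothetical per-subject hit rates for a 2 (load) x 2 (hemispace)
        # design; columns: low/left, low/right, high/left, high/right.
        acc = np.array([[0.92, 0.93, 0.78, 0.88],
                        [0.90, 0.91, 0.75, 0.86],
                        [0.94, 0.93, 0.80, 0.89]])

        low_left, low_right, high_left, high_right = acc.T

        # Interaction: how much more the left side suffers under high load
        # than the right side does.
        interaction = (low_left - high_left) - (low_right - high_right)
        print("per-subject load x hemispace interaction:", interaction)
        print("mean interaction:", interaction.mean())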

    Speaker Normalization Using Cortical Strip Maps: A Neural Model for Steady-State Vowel Categorization

    Auditory signals of speech are speaker-dependent, but representations of language meaning are speaker-independent. The transformation from speaker-dependent to speaker-independent language representations enables speech to be learned and understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by Adaptive Resonance Theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [J. Acoust. Soc. Am. 24, 175-184 (1952)] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models.

    National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
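
    For intuition about what speaker normalization must accomplish, a much simpler classical approach rescales each speaker's formant frequencies into a common space. The sketch below uses log-mean normalization as a stand-in; it is not an implementation of the strip-map/ART model described above, and the formant values are invented:

        import numpy as np

        # Illustrative F1/F2 formants (Hz) for three vowel categories spoken
        # by two different speakers (values invented for the example).
        speaker_a = np.array([[270.0, 2290.0], [660.0, 1720.0], [730.0, 1090.0]])
        speaker_b = np.array([[370.0, 3200.0], [860.0, 2050.0], [850.0, 1220.0]])

        def normalize(formants):
            """Log-mean normalization: subtract the speaker's mean log-formant,
            yielding a roughly speaker-independent vowel representation."""
            logf = np.log(formants)
            return logf - logf.mean()

        # Corresponding vowels should sit closer together after normalization
        # than the raw log-formants do.
        print("normalized gap:", np.abs(normalize(speaker_a) - normalize(speaker_b)).mean())
        print("raw log gap:", np.abs(np.log(speaker_a) - np.log(speaker_b)).mean())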

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is lent to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
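
    The control manipulation is purely geometric: each rectangle is displaced along its own spoke from fixation, preserving the array's radial arrangement while breaking exact positional overlap. A brief sketch of that displacement (the positions are illustrative; this is not the authors' stimulus code):

        import numpy as np

        rng = np.random.default_rng(3)

        # Eight rectangle centres in degrees of visual angle, fixation at (0, 0).
        positions = rng.uniform(-5.0, 5.0, size=(8, 2))

        def radial_shift(xy, delta):
            """Move each point delta degrees along its own spoke from fixation
            (assumes every point lies farther than |delta| from fixation)."""
            r = np.linalg.norm(xy, axis=1, keepdims=True)
            return xy * (r + delta) / r

        shifted_out = radial_shift(positions, +1.0)  # 1 degree outward
        shifted_in = radial_shift(positions, -1.0)   # 1 degree inward
        print(np.round(shifted_out, 2))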

    3D audio as an information-environment: manipulating perceptual significance for differentiation and pre-selection

    Contemporary use of sound as an artificial information display is rudimentary, with little 'depth of significance' to facilitate users' selective attention. We believe that this is due to conceptual neglect of 'context', or perceptual background information. This paper describes a systematic approach to developing 3D audio information environments that utilise known cognitive characteristics in order to promote rapidity and ease of use. The key concepts are perceptual space, perceptual significance, ambience labelling information and cartoonification.