
    Who is that? Brain networks and mechanisms for identifying individuals

    Social animals can identify conspecifics by many forms of sensory input. However, whether the neuronal computations that support this ability to identify individuals rely on modality-independent convergence or involve ongoing synergistic interactions along the multiple sensory streams remains controversial. Direct neuronal measurements at relevant brain sites could address such questions, but this requires better bridging of the work in humans and animal models. Here, we review recent studies in nonhuman primates on voice and face identity-sensitive pathways and evaluate the correspondences to relevant findings in humans. This synthesis provides insights into converging sensory streams in the primate anterior temporal lobe (ATL) for identity processing. Furthermore, we advance a model and suggest how alternative neuronal mechanisms could be tested.

    Implicit Multisensory Associations Influence Voice Recognition

    Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, during, and after participants learned to associate either sensory redundant stimuli, i.e., voices and faces, or arbitrary multimodal combinations, i.e., voices paired with written names, and ring tones paired with cell phones or with the brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and the association of voices with faces resulted in increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations thereafter become available for unimodal perception and facilitate object recognition. These data suggest that for natural objects, effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.
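    The "functional coupling" result above lends itself to a simple illustration. The Python sketch below is a hypothetical example, not the study's actual analysis pipeline: it treats coupling between a voice-selective and a face-selective region as the Pearson correlation of their BOLD time series, computed before and after learning. The ROI names, time-series length, and simulated data are all assumptions.

```python
# Minimal sketch: functional coupling between two ROIs as the Pearson
# correlation of their BOLD time series, before vs. after learning.
# ROI labels and the coupling measure are illustrative assumptions.
import numpy as np

def roi_coupling(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
    """Pearson correlation between two ROI time series (n_timepoints,)."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

rng = np.random.default_rng(0)
n_tr = 200  # number of fMRI volumes (hypothetical)

# Simulated voice-area and face-area time series; after learning we add a
# shared component to mimic increased voice-face coupling.
shared = rng.standard_normal(n_tr)
voice_pre, face_pre = rng.standard_normal(n_tr), rng.standard_normal(n_tr)
voice_post = rng.standard_normal(n_tr) + 0.8 * shared
face_post = rng.standard_normal(n_tr) + 0.8 * shared

print(f"coupling before learning: {roi_coupling(voice_pre, face_pre):+.2f}")
print(f"coupling after learning:  {roi_coupling(voice_post, face_post):+.2f}")
```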

    Post-training load-related changes of auditory working memory: An EEG study

    Working memory (WM) refers to the temporary retention and manipulation of information, and its capacity is highly susceptible to training. Yet, the neural mechanisms that allow for increased performance under demanding conditions are not fully understood. We expected that post-training efficiency in WM performance modulates neural processing during high-load tasks. We tested this hypothesis, using electroencephalography (EEG) (N = 39), by comparing source-space spectral power of healthy adults performing low- and high-load auditory WM tasks. Prior to the assessment, participants underwent either modality-specific auditory WM training or modality-irrelevant tactile WM training, or received no training (active control). After modality-specific training, participants showed higher behavioral performance compared to the control group. EEG data analysis revealed general effects of WM load, across all training groups, in the theta-, alpha-, and beta-frequency bands. With increased load, theta-band power increased over frontal areas and decreased over parietal areas. Centro-parietal alpha-band power and central beta-band power decreased with load. Interestingly, in the high-load condition, a tendency toward reduced beta-band power in the right medial temporal lobe was observed in the modality-specific WM training group compared to the modality-irrelevant and active control groups. Our finding that WM processing during the high-load condition changed after modality-specific WM training, showing reduced beta-band activity in voice-selective regions, possibly indicates a more efficient maintenance of task-relevant stimuli. The general load effects suggest that WM performance at high load demands involves complementary mechanisms, combining a strengthening of task-relevant and a suppression of task-irrelevant processing.
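    For readers unfamiliar with the band-power measures above, the following minimal Python sketch shows one common way to quantify them: estimating the power spectral density of an EEG trace with Welch's method and averaging it within the theta, alpha, and beta bands for each load condition. The sampling rate, band limits, and simulated signals are illustrative assumptions, not the study's parameters.

```python
# Minimal sketch of a band-power contrast: Welch PSD per condition,
# averaged within canonical frequency bands. All values are hypothetical.
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (hypothetical)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(eeg: np.ndarray, lo: float, hi: float) -> float:
    """Mean Welch PSD of a single-channel EEG trace within [lo, hi) Hz."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].mean())

rng = np.random.default_rng(1)
low_load = rng.standard_normal(FS * 60)   # 60 s of simulated EEG
high_load = rng.standard_normal(FS * 60)

for name, (lo, hi) in BANDS.items():
    p_low, p_high = band_power(low_load, lo, hi), band_power(high_load, lo, hi)
    print(f"{name:5s}: low-load {p_low:.4f}, high-load {p_high:.4f}")
```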

    Cross-modal processing of voices and faces in developmental prosopagnosia and developmental phonagnosia

    Conspecifics can be recognized from either the face or the voice alone. However, person identity information is rarely encountered in purely unimodal situations, and there is increasing evidence that the face and voice interact in neurotypical identity processing. Conversely, developmental deficits have been observed that seem to be selective for face and voice recognition: developmental prosopagnosia and developmental phonagnosia, respectively. To date, studies on developmental prosopagnosia and phonagnosia have largely centred on within-modality testing. Here, we review evidence from a small number of behavioural and neuroimaging studies which have examined the recognition of both faces and voices in these cohorts. A consensus from the findings is that, when tested in purely unimodal conditions, voice-identity processing appears normal in most cases of developmental prosopagnosia, as does face-identity processing in developmental phonagnosia. However, there is now initial evidence that the multisensory nature of person identity impacts on identity recognition abilities in these cohorts. For example, unlike in neurotypicals, auditory-only voice recognition is not enhanced in developmental prosopagnosia for voices which have been previously learned together with a face. This might also explain why the recognition of personally familiar voices is poorer in developmental prosopagnosics compared to controls. In contrast, there is evidence that multisensory interactions might also lead to compensatory mechanisms in these disorders. For example, in developmental phonagnosia, voice recognition may be enhanced if voices have been learned with a corresponding face. Taken together, the reviewed findings challenge traditional models of person recognition which have assumed independence between face-identity and voice-identity processing, and instead support an audio-visual model of human communication that assumes direct interactions between voice and face processing streams. In addition, the reviewed findings open up novel empirical research questions and have important implications for potential training regimes for developmental prosopagnosia and phonagnosia.

    Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration?

    Interactions between multisensory integration and attention were studied using a combined audiovisual streaming design and a rapid serial visual presentation paradigm. Event-related potentials (ERPs) following audiovisual objects (AV) were compared with the sum of the ERPs following auditory (A) and visual (V) objects. Integration processes were expressed as the difference between these AV and (A + V) responses and were studied while attention was directed to one or both modalities or directed elsewhere. Results show that multisensory integration effects depend on the multisensory objects being fully attended, that is, on both the visual and auditory senses being attended. In this condition, a superadditive audiovisual integration effect was observed on the P50 component. When unattended, this effect was reversed; the P50 components of multisensory ERPs were smaller than the unisensory sum. Additionally, we found an enhanced late frontal negativity when subjects attended the visual component of a multisensory object. This effect, bearing a strong resemblance to the auditory processing negativity, appeared to reflect late attention-related processing that had spread to encompass the auditory component of the multisensory object. In conclusion, our results shed new light on how the brain processes multisensory auditory and visual information, including how attention modulates multisensory integration processes.
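    The additive-model logic described above, comparing the AV response against the sum A + V, is straightforward to express in code. The Python sketch below is a hypothetical illustration rather than the study's analysis code: it computes the integration difference wave from simulated single-channel ERPs and averages it in an approximate P50 window. The sampling rate, epoch, and window bounds are assumptions.

```python
# Minimal sketch of the additive model: the multisensory integration
# effect is the difference wave AV - (A + V). All parameters hypothetical.
import numpy as np

FS = 500                                 # samples per second (hypothetical)
times = np.arange(-0.1, 0.4, 1 / FS)     # epoch from -100 ms to 400 ms

def integration_effect(erp_av, erp_a, erp_v):
    """Difference wave AV - (A + V); positive values = superadditivity."""
    return erp_av - (erp_a + erp_v)

rng = np.random.default_rng(2)
erp_a = rng.standard_normal(times.size)
erp_v = rng.standard_normal(times.size)
# Simulated AV response: the unisensory sum plus a superadditive bump
# centred near 50 ms post-stimulus.
erp_av = erp_a + erp_v + 0.5 * np.exp(-((times - 0.05) ** 2) / 0.0002)

diff = integration_effect(erp_av, erp_a, erp_v)
p50 = (times >= 0.04) & (times <= 0.06)  # approximate P50 window
print(f"mean effect in P50 window: {diff[p50].mean():+.3f} (a.u.)")
```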

    Long-Term Consequences of Early Eye Enucleation on Audiovisual Processing

    A growing body of research shows that complete deprivation of the visual system from the loss of both eyes early in life results in changes in the remaining senses. Is the adaptive plasticity observed in the remaining intact senses also found in response to partial sensory deprivation, specifically the loss of one eye early in life? My dissertation examines evidence of adaptive plasticity following the loss of one eye (unilateral enucleation) early in life. Unilateral eye enucleation is a unique model for examining the consequences of the loss of binocularity, since the brain is completely deprived of all visual input from that eye. My dissertation expands our understanding of the long-term effects of losing one eye early in life on the development of audiovisual processing, both behaviourally and in terms of the underlying neural representation. The over-arching goal is to better understand neural plasticity as a result of sensory deprivation. To achieve this, I conducted seven experiments, divided into five experimental chapters, that focus on the behavioural and structural correlates of audiovisual perception in a unique group of adults who lost one eye in the first few years of life. Behavioural data (Chapters II-V) in conjunction with neuroimaging data (Chapter VI) relate structure and function of the auditory, visual, and audiovisual systems in this rare patient group, allowing a more refined understanding of the cross-sensory effects of early sensory deprivation. This information contributes to a better understanding of how audiovisual information is experienced by people with one eye. This group can serve as a model for learning how to accommodate people with less extreme forms of visual deprivation and how to promote overall long-term visual health.

    Auditory Experiences in Game Transfer Phenomena

    This study investigated gamers’ auditory experiences as aftereffects of playing. This was done by classifying, quantifying, and analysing 192 experiences from 155 gamers collected from online videogame forums. The gamers’ experiences were classified as: (i) auditory imagery (e.g., constantly hearing the music from the game), (ii) inner speech (e.g., completing phrases in the mind), (iii) auditory misperceptions (e.g., confusing real-life sounds with videogame sounds), and (iv) multisensorial auditory experiences (e.g., hearing music while involuntarily moving the fingers). Gamers heard auditory cues from the game in their heads, in their ears, and also coming from external sources. Occasionally, the vividness of the sound evoked thoughts and emotions that resulted in behaviours and coping strategies. The psychosocial implications of the gamers’ auditory experiences are discussed. This study contributes to the understanding of the effects of auditory features in videogames, and to the phenomenology of non-volitional experiences (e.g., auditory imagery, auditory hallucinations).

    Single-trial multisensory memories affect later auditory and visual object discrimination.

    Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. The possibility of this generalization, and the equivalence of effects when memory discrimination was performed in the visual vs. auditory modality, were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. visual modality. Fourth, there was no evidence of a correlation between the effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short-term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.
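    The congruent vs. incongruent vs. unisensory contrast above is typically quantified as a difference in discrimination sensitivity. The short Python sketch below is a hypothetical illustration of such a comparison using the signal-detection measure d'; the hit and false-alarm rates are made-up numbers, not the study's data.

```python
# Minimal sketch: discrimination sensitivity (d') for sounds whose first
# encounter was congruent-multisensory, unisensory, or incongruent.
# The rates below are illustrative, not reported results.
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

conditions = {
    "congruent pairing":   (0.85, 0.20),  # hypothetical (hits, false alarms)
    "unisensory only":     (0.75, 0.20),
    "incongruent pairing": (0.65, 0.20),
}
for name, (hits, fas) in conditions.items():
    print(f"{name:20s} d' = {d_prime(hits, fas):.2f}")
```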

    Detection of Modality-Specific Properties in Unimodal and Bimodal Events during Prenatal Development

    Predictions of the Intersensory Redundancy Hypothesis (IRH) state that, early in development, information presented to a single sense modality (unimodal) selectively recruits attention to, and enhances perceptual learning of, modality-specific properties of stimulation at the expense of amodal properties, while information presented redundantly across two or more modalities (bimodal) results in enhanced perceptual learning of amodal properties. The present study explored these predictions during prenatal development by assessing bobwhite quail embryos’ detection of pitch, a modality-specific property, under conditions of unimodal and redundant bimodal stimulation. Chicks’ postnatal auditory preferences between the familiarized call and the same call with altered pitch were assessed following hatching. Unimodally exposed chicks significantly preferred the familiarized call over the pitch-modified call, whereas bimodally exposed chicks did not. These results confirm the IRH predictions, demonstrating that unimodal exposure facilitates learning of modality-specific properties, whereas redundant bimodal stimulation interferes with such learning.
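    The preference test above reduces to a simple proportion measure. As a hypothetical illustration (not the study's scoring procedure), the Python sketch below computes a chick's preference score as the fraction of responsive time spent near the speaker playing the familiarized call; the timing values are made-up numbers.

```python
# Minimal sketch: a two-choice preference score from approach durations.
# All durations are illustrative assumptions.
def preference_score(t_familiar: float, t_modified: float) -> float:
    """Fraction of responsive time spent with the familiarized call."""
    total = t_familiar + t_modified
    return t_familiar / total if total > 0 else 0.5

# Hypothetical approach durations (seconds) for chicks from each
# prenatal exposure group.
unimodal_chick = preference_score(t_familiar=210.0, t_modified=60.0)
bimodal_chick = preference_score(t_familiar=130.0, t_modified=125.0)
print(f"unimodal exposure: {unimodal_chick:.2f} (prefers familiar call)")
print(f"bimodal exposure:  {bimodal_chick:.2f} (no clear preference)")
```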