
    Single-trial multisensory memories affect later auditory and visual object discrimination.

    Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. This study focused on whether such effects generalize to audition and whether they are equivalent when memory discrimination is performed in the visual vs. the auditory modality. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. the visual modality. Fourth, there was no evidence of a correlation between the effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short-term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.

    The efficacy of single-trial multisensory memories.

    This review article summarizes evidence that multisensory experiences at one point in time have long-lasting effects on subsequent unisensory visual and auditory object recognition. The efficacy of single-trial exposure to task-irrelevant multisensory events lies in its ability to modulate memory performance and brain activity elicited by unisensory components of these events presented later in time. Object recognition (either visual or auditory) is enhanced if the initial multisensory experience had been semantically congruent, and can be impaired if this multisensory pairing was either semantically incongruent or entailed meaningless information in the task-irrelevant modality, when compared to objects encountered exclusively in a unisensory context. Processes active during encoding cannot straightforwardly explain these effects; performance on all initial presentations was indistinguishable despite leading to opposing effects with stimulus repetitions. Brain responses to unisensory stimulus repetitions differ during early processing stages (~100 ms post-stimulus onset) according to whether or not the stimuli had initially been paired in a multisensory context. Moreover, the network exhibiting differential responses varies according to whether memory performance is enhanced or impaired. The collective findings we review indicate that multisensory associations formed via single-trial learning exert influences on later unisensory processing to promote distinct object representations that manifest as differentiable brain networks whose activity is correlated with memory performance. These influences occur incidentally, despite many intervening stimuli, and are distinguishable from the encoding/learning processes active during the formation of the multisensory associations. The consequences of multisensory interactions thus persist over time to impact memory retrieval and object discrimination.

    The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices, due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
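    The d' measure used above is the standard signal-detection sensitivity index, z(hit rate) - z(false-alarm rate). A minimal sketch of this conventional computation (the example rates are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative values: 90% hits on repeated (old) sounds,
# 20% false alarms on initial (new) sounds
print(round(d_prime(0.9, 0.2), 2))  # → 2.12
```

    Higher d' indicates better old/new discrimination independently of response bias; rates of exactly 0 or 1 are typically corrected before applying the z-transform.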

    The multisensory function of the human primary visual cortex

    It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficiently strong evidence supporting this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcome. This evidence emerges from a full palette of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed by taking advantage of particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in primary visual cortex of humans are supported relatively solidly. First, haemodynamic methods (fMRI/PET) show that there is both convergence and integration occurring within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERPs/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of neocortex is multisensory in function has yet to be demonstrated, it can now be considered established in the case of the human primary visual cortex.

    The context-contingent nature of cross-modal activations of the visual cortex

    Real-world environments are nearly always multisensory in nature. Processing in such situations confers perceptual advantages, but its automaticity remains poorly understood. Automaticity has been invoked to explain the activation of visual cortices by laterally-presented sounds. This has been observed even when the sounds were task-irrelevant and spatially uninformative about subsequent targets. An auditory-evoked contralateral occipital positivity (ACOP) at ~250 ms post-sound onset has been postulated as the event-related potential (ERP) correlate of this cross-modal effect. However, the spatial dimension of the stimuli was nevertheless relevant in virtually all prior studies where the ACOP was observed. By manipulating the implicit predictability of the location of lateralised sounds in a passive auditory paradigm, we tested the automaticity of cross-modal activations of visual cortices. We analysed 128-channel ERP data from healthy participants within an electrical neuroimaging framework. The timing, topography, and localisation resembled previous characterisations of the ACOP. However, the cross-modal activations of visual cortices by sounds were critically dependent on whether the sound location was (un)predictable. Our results are the first direct evidence that this particular cross-modal process is not (fully) automatic; instead, it is context-contingent. More generally, the present findings provide novel insights into the importance of context-related factors in controlling information processing across the senses, and call for a revision of current models of automaticity in cognitive sciences.
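    The ACOP described above is a lateralised ERP component, conventionally quantified as a contralateral-minus-ipsilateral difference over occipital electrodes in a window around ~250 ms after sound onset. A schematic sketch of that quantification (the channel grouping, window, and function name are illustrative assumptions, not the study's exact analysis parameters):

```python
import numpy as np

def acop_amplitude(erp, contra_idx, ipsi_idx, times, window=(0.2, 0.3)):
    """Contra-minus-ipsi occipital ERP difference, averaged within a
    time window around ~250 ms post-sound.

    erp: channels x timepoints array (e.g. microvolts)
    contra_idx, ipsi_idx: channel indices contralateral/ipsilateral
                          to the sound's location
    times: timepoints in seconds
    """
    erp = np.asarray(erp, dtype=float)
    mask = (times >= window[0]) & (times <= window[1])
    contra = erp[contra_idx][:, mask].mean()
    ipsi = erp[ipsi_idx][:, mask].mean()
    return contra - ipsi

# Toy example: 2 occipital channels, 5 time points (0-0.4 s)
erp_demo = np.array([[1., 1., 2., 2., 1.],   # contralateral channel
                     [0., 0., 1., 1., 0.]])  # ipsilateral channel
t_demo = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
print(acop_amplitude(erp_demo, [0], [1], t_demo))  # → 1.0
```

    A positive value reflects the occipital positivity contralateral to the sound; in practice this is computed per participant and condition before statistics.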

    The COGs (context, object, and goals) in multisensory processing

    Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and “top-down” control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have traditionally been studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer’s goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.

    Enriched learning : Behavior, brain, and computation

    Open Access via the Elsevier Agreement. Funder: German Research Foundation (KR 3735/3-1, MA 9552/1-1). Acknowledgments: We thank Agnieszka Konopka, Antje Proske, Joost Rommers, and Anna Zamm for providing useful comments on an earlier version of the manuscript; Mingyuan Chu for feedback on Figure 1; and Stefan Kiebel for feedback on Box 3. This work was supported by the German Research Foundation (grants KR 3735/3-1, KR 3735/3-2, and MA 9552/1-1). Peer reviewed. Publisher PDF.

    Multisensory Processes: A Balancing Act across the Lifespan.

    Multisensory processes are fundamental in scaffolding perception, cognition, learning, and behavior. How and when stimuli from different sensory modalities are integrated rather than treated as separate entities is poorly understood. We review how the relative reliance on stimulus characteristics versus learned associations dynamically shapes multisensory processes. We illustrate this dynamism in multisensory function across two timescales: a long-term one that operates across the lifespan and a short-term one that operates during the learning of new multisensory relations. In addition, we highlight the importance of task contingencies. We conclude that these highly dynamic multisensory processes, based on the relative weighting of stimulus characteristics and learned associations, provide both stability and flexibility to brain functions over a wide range of temporal scales.

    Novel methods to evaluate blindsight and develop rehabilitation strategies for patients with cortical blindness

    20 to 57% of victims of a cerebrovascular accident (CVA, i.e. stroke) are diagnosed with visual deficits that considerably reduce their quality of life. Among the extreme cases of visual deficits is cortical blindness (CB), which manifests when the primary visual area (V1) is damaged. To date, no approach can induce restoration of visual function, and in most cases plasticity is insufficient to allow spontaneous recovery. Therefore, while sight loss is considered permanent, unconscious yet important residual functions, known as blindsight, could be of use for visual rehabilitation strategies, raising strong interest in cognitive neuroscience. Blindsight is a rare phenomenon that reflects a dissociation between performance and consciousness, mainly investigated in case reports. In the first chapter of this thesis, we addressed multiple open questions about our understanding of blindsight and conscious perception. As we argue, such understanding might significantly influence the clinical rehabilitation of patients suffering from CB. We therefore propose a unique strategy for visual rehabilitation that uses video-game principles to target and potentiate neural mechanisms within the global neuronal workspace framework, explained theoretically in study 1 and described methodologically in study 5. In other words, we propose that case reports, in conjunction with improved methodological criteria, can identify the neural substrates that support blindsight and unconscious processing.
    Thus, this thesis provides three empirical experiments (studies 2, 3, and 4) that apply new standards in electrophysiological analysis to describe the cases of patient SJ, presenting blindsight for affective natural complex scenes, and patient ML, presenting blindsight for motion stimuli. In studies 2 and 3, we probed the subcortical and cortical neural substrates supporting SJ's affective blindsight using MEG, and compared these unconscious correlates to his conscious perception. Study 4 characterizes the substrates of automatic change detection in the absence of visual awareness, as measured by the visual mismatch negativity (vMMN), in ML and in a neurotypical group.
    We conclude by proposing the vMMN as a neural biomarker of unconscious processing in normal and altered vision, independent of behavioral assessments. These procedures allowed us to address certain open debates in the blindsight literature and to probe the existence of secondary neural pathways supporting unconscious behavior. In conclusion, this thesis proposes to combine empirical and clinical perspectives, using methodological advances and novel methods to understand and target the neurophysiological substrates underlying blindsight. Importantly, the framework offered by this doctoral dissertation might help future studies build efficient targeted therapeutic tools and multimodal rehabilitation strategies.
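    The vMMN used in study 4 is conventionally obtained as a deviant-minus-standard ERP difference wave. A toy sketch of that textbook definition (not the thesis's analysis pipeline):

```python
import numpy as np

def vmmn_wave(deviant_erp, standard_erp):
    """Visual mismatch negativity as the deviant-minus-standard ERP
    difference wave (schematic textbook definition)."""
    return np.asarray(deviant_erp, dtype=float) - np.asarray(standard_erp, dtype=float)

# Toy ERPs: a negative deflection appears only for deviant stimuli,
# so the difference wave isolates the mismatch response
standard = [0.0, 0.5, 0.5, 0.0]
deviant = [0.0, 0.5, -1.0, 0.0]
print(vmmn_wave(deviant, standard))
```

    Negative values in the difference wave over posterior electrodes, typically 150-400 ms post-stimulus, index automatic change detection even without awareness of the stimuli.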

    Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    BACKGROUND: Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question of whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants partook in a "silent" sparse temporal event-related fMRI study. In the first (visual control) habituation phase they were presented with brief red flashing visual stimuli. In the second (auditory control) habituation phase they heard brief telephone ringing. In the third (conditioning) phase we coincidentally presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no task other than attending to the stimuli and indicating the end of each trial by pressing a button. RESULTS: During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. CONCLUSION: These results demonstrate involvement of multisensory and auditory association areas in the perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. More importantly, we are able to show that brain responses in multisensory cortices do not necessarily emerge from associative learning but can even occur spontaneously in response to simple visual stimulation.
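    The 5:10 partial reinforcement schedule described above can be pictured with a short sketch. Reading the ratio as 5 paired (reinforced) versus 10 visual-only (unreinforced) trials per block is an assumption for illustration, as are the function and label names; this is not the study's presentation code:

```python
import random

def make_block(n_paired=5, n_visual_only=10, seed=None):
    """One shuffled block of trials: n_paired audio-visual (CS+UCS)
    presentations and n_visual_only visual-alone presentations.
    The 5:10 split mirrors one plausible reading of the paper's
    partial reinforcement schedule (an assumption, not its code)."""
    trials = ["paired"] * n_paired + ["visual_only"] * n_visual_only
    random.Random(seed).shuffle(trials)
    return trials

block = make_block(seed=0)
print(block.count("paired"), block.count("visual_only"))  # → 5 10
```

    Interleaving occasional reinforced pairings among unpaired presentations is what lets conditioned responses be probed during extinction without the association fully decaying.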