
    Single-trial multisensory memories affect later auditory and visual object discrimination.

    Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. This study focused on whether the effect generalizes to audition and whether it is equivalent when memory discrimination is performed in the visual vs. the auditory modality. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories on unisensory object discrimination was greater when the task was performed in the auditory rather than the visual modality. Fourth, there was no evidence of a correlation between the effects of past multisensory experiences on visual and auditory processing, suggesting largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short-term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.

    Oscillatory neuronal dynamics during lexical-semantic retrieval and integration

    Current models of language processing advocate that word meaning is partially stored in distributed modality-specific cortical networks. However, while much has been done to investigate where information is represented in the brain, the neuronal dynamics underlying how these networks communicate internally and with each other are still poorly understood. For example, it is not clear how spatially distributed semantic content is integrated into a coherent conceptual representation. This thesis investigates how perceptual semantic features are selected and integrated, using oscillatory neuronal dynamics as a window onto these processes. Cortical oscillations reflect synchronized activity in large neuronal populations and are associated with specific classes of network interactions. The first part of the thesis addresses how perceptual semantic features are selected in long-term memory. Using electroencephalographic (EEG) recordings, it is demonstrated that retrieving perceptually more complex information is associated with a reduction in oscillatory power, in line with the information-via-desynchronization hypothesis, a recent neurophysiological model of long-term memory retrieval. The second and third parts address how distributed semantic content is integrated and coordinated in the brain. Behavioral evidence suggests that integrating two features of a target word (e.g., whistle) during a dual property verification task incurs an additional processing cost if the features come from different modalities (visual: tiny, auditory: loud) rather than the same modality (visual: tiny, silver). Furthermore, EEG recordings reveal that integrating cross-modal feature pairs is associated with a more sustained low-frequency theta power increase in the left anterior temporal lobe (ATL). The ATL is thought to act as a convergence zone for semantic content from different modalities. In line with this notion, the ATL is shown to communicate with a widely distributed cortical network at the theta frequency. The fourth part of the thesis uses magnetoencephalographic (MEG) recordings to show that, while low-frequency theta oscillations in the left ATL are more sensitive to integrating features from different modalities, integrating two features from the same modality induces an early increase in high-frequency gamma power in the left ATL and in modality-specific regions. These results are in line with a recent framework suggesting that local and long-range network dynamics are reflected in different oscillatory frequencies. The fifth part demonstrates that the connection weights between the left ATL and modality-specific regions at the theta frequency are modulated in accordance with the content of the word (e.g., visual features enhance connectivity between the left ATL and left inferior occipital cortex). The thesis concludes by embedding these results in the context of current neurocognitive models of semantic processing.

    Psychologie und Gehirn 2007

    The conference "Psychologie und Gehirn" is a long-standing meeting in the field of basic psychophysiological research. In 2007 this event, the 33rd annual meeting of the Deutsche Gesellschaft für Psychophysiologie und ihre Anwendungen (DGPA), was held in Dortmund under the auspices of the Institut für Arbeitsphysiologie (IfADo). Alongside basic research, translation into application is a declared goal of the DGPA; in keeping with this tradition, contributions from many areas of modern neuroscience (electrophysiology, neuroimaging, peripheral physiology, neuroendocrinology, behavioral genetics, and others) were presented and are collected here in abstract form.

    The Role of Gamma Oscillations During Integration of Metaphoric Gestures and Abstract Speech

    Metaphoric (MP) co-speech gestures are commonly used during daily communication. They convey abstract information through gestures that are clearly concrete (e.g., raising a hand for “the level of the football game is high”). Understanding MP co-speech gestures therefore requires multisensory integration of abstract speech and concrete gestures at the semantic level. While semantic gesture-speech integration has been extensively investigated using functional magnetic resonance imaging, evidence from electroencephalography (EEG) is rare. In the current study, we conducted an EEG experiment investigating the processing of MP vs. iconic (IC) co-speech gestures in different contexts, in order to reveal the oscillatory signature of MP gesture integration. German participants (n = 20) viewed video clips of an actor performing both types of gestures, accompanied by either comprehensible German or incomprehensible Russian (R) speech, or speaking German sentences without any gestures. Time-frequency analysis of the EEG data showed that, when gestures were accompanied by comprehensible German speech, MP gestures elicited decreased gamma band power (50–70 Hz) between 500 and 700 ms at parietal electrodes compared to IC gestures, and the source of this effect was localized to the right middle temporal gyrus. This difference is likely to reflect integration processes, as it was reduced in the Russian-language and no-gesture conditions. Our findings provide the first empirical evidence of a functional relationship between gamma band oscillations and higher-level semantic processes in a multisensory setting.
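    As an illustration of the kind of analysis described above, the following is a minimal sketch, assuming MNE-Python, of how a gamma-band (50–70 Hz) power contrast in a 500–700 ms window might be computed. It is not the authors' pipeline; the file name, event codes, epoch limits, and baseline window are assumptions made for the example.

```python
# Minimal sketch: gamma-band (50-70 Hz) power contrast between two
# gesture conditions in a 500-700 ms window, using MNE-Python.
# File name and trigger codes below are hypothetical.
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

raw = mne.io.read_raw_fif("sub01_gesture_task_raw.fif", preload=True)  # hypothetical file
events = mne.find_events(raw)
event_id = {"metaphoric": 1, "iconic": 2}  # hypothetical trigger codes

epochs = mne.Epochs(raw, events, event_id, tmin=-0.5, tmax=1.0,
                    baseline=(-0.3, 0.0), preload=True)

freqs = np.arange(50, 71, 2)   # gamma band of interest, 50-70 Hz
n_cycles = freqs / 2.0         # frequency-dependent wavelet length

power = {}
for cond in event_id:
    power[cond] = tfr_morlet(epochs[cond], freqs=freqs, n_cycles=n_cycles,
                             return_itc=False, average=True)
    power[cond].apply_baseline((-0.3, 0.0), mode="logratio")

def window_power(tfr, tmin=0.5, tmax=0.7):
    """Mean power per channel in the analysis window."""
    data = tfr.copy().crop(tmin=tmin, tmax=tmax).data  # channels x freqs x times
    return data.mean(axis=(1, 2))

# Negative values over parietal channels would mirror the reported
# gamma power decrease for metaphoric relative to iconic gestures.
contrast = window_power(power["metaphoric"]) - window_power(power["iconic"])
print(contrast)
```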

    An ERP N400 study: Semantic processing across modalities in the human brain

    Semantic processing of both language and action in the brain has been associated with the N400, an event-related potential (ERP) that is typically present when information violates one's semantic expectations. The brain receives and processes information from multiple modalities, which means that cross-modal semantic processing more closely reflects how semantic processing is likely to occur than unimodal semantic processing does. The N400 is a consistent effect across cross-modal language-based studies, but its topographical distribution varies between language-based and non-language-based paradigms. Therefore, to investigate cross-modal semantic processing within the action domain, we presented participants with photographs portraying the performance of common actions. These sequences concluded with a sound that was either congruent or incongruent with the preceding action photographs. In our ERP study of 25 participants, with a mean age of 23 years (SD = 10.78 years), we found an N400 effect for incongruent information processing. In addition, our findings showed a delayed N400 effect and a reduced P200 amplitude for incongruent information. These results suggest that cross-modal semantic processing of action sequences requires an increased cognitive workload, which becomes evident when semantic processing does not progress as expected. Taken as a whole, these results indicate that cross-modal semantic processing is similar to unimodal processing, although cross-modal information likely requires additional cognitive processes that are not engaged during unimodal paradigms.
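    To make the congruency contrast concrete, here is a minimal sketch, again assuming MNE-Python, of how an N400-style comparison could be set up: epoch around sound onset, average per condition, and inspect a difference wave in a typical N400 window. The file name, trigger codes, and the 300–500 ms window are assumptions for illustration, not the authors' actual parameters.

```python
# Minimal sketch: congruent vs. incongruent ERP contrast around sound
# onset, with a difference wave inspected in a typical N400 window.
import mne

raw = mne.io.read_raw_fif("sub01_action_sounds_raw.fif", preload=True)  # hypothetical file
events = mne.find_events(raw)
event_id = {"congruent": 10, "incongruent": 20}  # hypothetical trigger codes

epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0.0), preload=True)

evokeds = {cond: epochs[cond].average() for cond in event_id}

# Difference wave (incongruent minus congruent); a centro-parietal
# negativity around 300-500 ms would be consistent with an N400 effect.
n400 = mne.combine_evoked([evokeds["incongruent"], evokeds["congruent"]],
                          weights=[1, -1])
window = n400.copy().crop(tmin=0.3, tmax=0.5)
print(window.data.mean(axis=1))  # mean amplitude per channel in the window
```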

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Using sentence context and mouth cues to aid speech comprehension: an electroencephalographic study on Cochlear Implant users

    The research project presented in this thesis explores the electrophysiological correlates of linguistic prediction and audio-visual speech processing in deaf people with a Cochlear Implant (CI) and in people with normal hearing, in order to examine possible group differences. We implement an experimental paradigm in which participants observe audio-visual speech stimuli that vary in the predictability of the last word of the sentence (i.e., the target) and in the visibility of mouth articulatory movements. During the procedure, we record the electroencephalographic (EEG) signal in order to compare the different experimental conditions in terms of neural oscillations and the event-related potential (ERP) response to the target word. We also administer linguistic tests to participants in order to relate behavioural performance to the electrophysiological results. The thesis presents a theoretical overview of prediction in language comprehension, the neural correlates of prediction and audio-visual speech integration, and previous studies exploring these processes in CI users. It then presents the methods used in the experiment and preliminary data from a subgroup of participants with a CI.

    Perceptual elaboration paradigm (PEP): A new approach for investigating mental representations of language

    To examine hemispheric differences in accessing a mental representation that embodies perceptual elements and their spatial relationships (i.e., perceptual elaboration and integration), we developed a cross-modal perceptual elaboration paradigm (PEP) in which an imagined percept, rather than a propositional concept, determines congruency. Three target image conditions allow researchers to test which mental representation is primarily accessed when the target is laterally presented. For example, the “Integrated” condition is congruent with both propositional and perceptual mental representations; therefore, results from the two hemifield conditions (RVF/LH vs. LVF/RH) should be comparable. Similarly, the “Unrelated” condition is incongruent with both propositional and perceptual mental representations; therefore, results from the two hemifield conditions should again be comparable. However, the “Unintegrated” condition is congruent with the propositional mental representation but not with the perceptual mental representation. Should either hemisphere initially access one representation over the other, differences will be revealed in behavioural or electroencephalographic results. This paradigm:
    • is distinct from existing paired paradigms that emphasize semantic associations;
    • is important given increasing evidence that discourse comprehension involves accessing perceptual information;
    • allows researchers to examine the extent to which a mental representation of discourse can embody perceptual elaboration and integration.