
    Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesized that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals and varied the stimulus onset asynchrony (SOA) of the AV pair while participants identified the auditory syllables. We found that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we observed a wide window in which visual information influenced auditory perception, which appeared even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli but also convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuroimaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay in speech perception.
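
    The mechanism described above can be illustrated with a toy model: the weight given to the visual cue is gated by a window over SOA centered near a natural AV onset difference. This is a minimal sketch, not the authors' model; the Gaussian shape, the offset of 100 ms, and the width of 200 ms are all illustrative assumptions.

    ```python
    import numpy as np

    def visual_influence(soa_ms, natural_offset_ms=100.0, twi_width_ms=200.0):
        """Toy Gaussian gate: how strongly the visual cue contributes to
        identification as a function of stimulus onset asynchrony (SOA).
        natural_offset_ms and twi_width_ms are illustrative assumptions."""
        return np.exp(-0.5 * ((soa_ms - natural_offset_ms) / twi_width_ms) ** 2)

    def identify(p_auditory, p_visual, soa_ms):
        """Blend weak auditory evidence with visual evidence, weighted by
        the SOA-dependent gate, to get an identification probability."""
        w = visual_influence(soa_ms)
        return (1 - w) * p_auditory + w * p_visual

    # Visual influence peaks near the assumed natural onset difference
    # and falls off outside the temporal window of integration.
    for soa in (-300, 0, 100, 300):
        print(f"SOA {soa:+4d} ms -> p(correct) = {identify(0.55, 0.9, soa):.2f}")
    ```

    In a model like this, the SOA-dependent response pattern itself carries information about syllable identity, since different syllables would be associated with different natural onset differences.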

    Early activity in Broca's area during reading reflects fast access to articulatory codes from print

    Prior evidence for early activity in Broca's area during reading may reflect fast access to articulatory codes in the left inferior frontal gyrus pars opercularis (LIFGpo). We put this hypothesis to the test using a benchmark for articulatory involvement in reading known as the masked onset priming effect (MOPE). In masked onset priming, briefly presented pronounceable letter strings that share an initial phoneme with subsequently presented target words (e.g., gilp-GAME) facilitate word naming responses compared with unrelated primes (e.g., dilp-GAME). Crucially, these priming effects occur only when the task requires articulation (naming), not when it requires lexical decisions. A standard explanation of masked onset priming is that it reflects fast computation of articulatory output codes from letter representations. We therefore predicted 1) that activity in LIFGpo would be modulated by masked onset priming, 2) that priming-related modulation in LIFGpo would immediately follow activity in occipital cortex, and 3) that this modulation would be greater for naming than for lexical decision. These predictions were confirmed in a magnetoencephalography (MEG) priming study. MOPEs emerged in the left IFG at ∼100 ms post-target onset, and the priming effects were more sustained when the task involved articulation.
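
    Behaviorally, the MOPE is simply the difference in naming latency between the unrelated and onset-related prime conditions. A minimal sketch follows; the reaction times are placeholder values for illustration only, not data from the study.

    ```python
    from statistics import mean

    def masked_onset_priming_effect(rt_unrelated_ms, rt_related_ms):
        """Masked onset priming effect (MOPE): mean naming latency after
        unrelated primes (e.g., dilp-GAME) minus mean latency after primes
        sharing the target's initial phoneme (e.g., gilp-GAME).
        A positive value indicates facilitation by the shared onset."""
        return mean(rt_unrelated_ms) - mean(rt_related_ms)

    # Placeholder latencies (ms), purely for illustration.
    related = [512, 498, 530, 505]
    unrelated = [541, 528, 553, 536]
    print(f"MOPE: {masked_onset_priming_effect(unrelated, related):.1f} ms")
    ```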

    Virale Infektionen (Viral Infections)


    Revisiting old friends: Developments in understanding Histoplasma capsulatum pathogenesis
