65 research outputs found

    How does cognitive load influence speech perception? : An encoding hypothesis

    Two experiments investigated the conditions under which cognitive load exerts an effect on speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.

    Beneficial effects of word final stress in segmenting a new language: evidence from ERPs

    Background: How do listeners manage to recognize words in an unfamiliar language? The physical continuity of the signal, in which real silent pauses between words are lacking, makes this a difficult task. However, there are multiple cues that can be exploited to localize word boundaries and to segment the acoustic signal. In the present study, word stress was manipulated together with statistical information and placed in different syllables within trisyllabic nonsense words to explore the effect of combining these cues in an online word segmentation task. Results: The behavioral results showed that words were segmented better when stress was placed on the final syllable than when it was placed on the middle or first syllable. The electrophysiological results showed an increase in the amplitude of the P2 component, which appeared to be sensitive to word stress and its location within words. Conclusion: The results demonstrated that listeners can integrate specific prosodic and distributional cues when segmenting speech. An ERP component related to word-stress cues was identified: stressed syllables elicited larger amplitudes in the P2 component than unstressed ones.

    Calibrating rhythm: First language and second language studies


    Rhythmic typology and variation in first and second languages


    Segmentation cues in spontaneous speech: Robust semantics and fragile phonotactics.

    Multiple cues influence listeners’ segmentation of connected speech into words, but most previous studies have used stimuli elicited in careful readings rather than natural conversation. Discerning word boundaries in conversational speech may differ from the laboratory setting. In particular, a speaker’s articulatory effort – hyperarticulation vs. hypoarticulation (H&H) – may vary according to communicative demands, suggesting a compensatory relationship whereby acoustic-phonetic cues are attenuated when other information sources strongly guide segmentation. We examined how listeners’ interpretation of segmentation cues is affected by speech style (spontaneous conversation vs. read), using cross-modal identity priming. To elicit spontaneous stimuli, we used a map task in which speakers discussed routes around stylized landmarks. These landmarks were two-word phrases in which the strength of potential segmentation cues – semantic likelihood and cross-boundary diphone phonotactics – was systematically varied. Landmark-carrying utterances were transcribed and later re-recorded as read speech. Independent of speech style, we found an interaction between cue valence (favorable/unfavorable) and cue type (phonotactics/semantics). Thus, there was an effect of semantic plausibility, but no effect of cross-boundary phonotactics, indicating that the importance of phonotactic segmentation may have been overstated in studies where lexical information was artificially suppressed. These patterns were unaffected by whether the stimuli were elicited in a spontaneous or read context, even though the difference in speech styles was evident in a main effect. Durational analyses suggested speaker-driven cue trade-offs congruent with an H&H account, but these modulations did not impact listener behavior. We conclude that previous research exploiting read speech is reliable in indicating the primacy of lexically based cues in the segmentation of natural conversational speech.

    Rhythmic and prosodic contrast in Venetan and Sicilian Italian

    We compared the Italian of speakers from the Veneto, in the north of Italy, and from Sicily, in the far south, looking for evidence of rhythmic and prosodic differences. We found no reliable differences in scores for rhythm metrics (VarcoV, %V, VarcoC) between Venetan and Sicilian, with both varieties having scores similar to French and indicative of a greater durational marking of stress than Spanish. However, we found much stronger prosodic timing effects in Sicilian Italian, with stressed vowels in nuclear utterance-final position twice as long as in prenuclear utterance-medial position. We also found evidence of differential patterns of vowel reduction: Sicilian showed greater modulation of F1 and F2 values according to stress and prosodic position, indicating greater vowel centralisation in prosodically weak contexts than in Venetan Italian. Overall, the results indicated greater prosodic contrast in southern Italian, and suggest that multiple factors contribute to the perception of rhythmic differences.