26 research outputs found

    The source base of research on the problems of German colonialism


    Lexical Tones in Mandarin Chinese Infant-Directed Speech: Age-Related Changes in the Second Year of Life

    Tonal information is essential to early word learning in tone languages. Although numerous studies have investigated the intonational and segmental properties of infant-directed speech (IDS), only a few have explored the properties of lexical tones in IDS. These studies mostly focused on the first year of life; thus, little is known about how lexical tones in IDS change as children’s vocabulary acquisition accelerates in the second year (Goldfield and Reznick, 1990; Bloom, 2001). The present study examines whether Mandarin Chinese mothers hyperarticulate lexical tones in IDS addressing 18- and 24-month-old children, ages at which children learn words at a rapid pace, relative to adult-directed speech (ADS). Thirty-nine Mandarin Chinese-speaking mothers were tested in a semi-spontaneous picture-book-reading task, in which they told the same story to their child (IDS condition) and to an adult (ADS condition). Results for the F0 measurements of tone (minimum F0, maximum F0, and F0 range) revealed a continuum of differences among IDS addressing 18-month-olds, IDS addressing 24-month-olds, and ADS. Lexical tones in IDS addressing 18-month-old children had a higher minimum F0, a higher maximum F0, and a larger pitch range than lexical tones in ADS. Lexical tones in IDS addressing 24-month-old children were more similar to ADS tones in pitch height: there were no differences in minimum or maximum F0 between ADS and IDS, although the F0 range was still larger than in ADS. These results suggest that lexical tones are hyperarticulated in Mandarin Chinese IDS addressing 18- and 24-month-old children despite the change in pitch level over time. Mandarin Chinese mothers thus hyperarticulate lexical tones in IDS when talking to toddlers, which may facilitate tone acquisition and word learning.
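
    To make the F0 measures above concrete, the following is a minimal sketch, assuming Praat-style pitch tracking via the parselmouth Python library; the filename and the pitch floor/ceiling values are illustrative assumptions rather than details taken from the study.

```python
# Minimal sketch, not the study's actual analysis pipeline: computing the three
# F0 measures named in the abstract (minimum F0, maximum F0, F0 range) for one
# utterance, assuming Praat-style pitch tracking via the parselmouth library.
# The filename and the pitch floor/ceiling values are illustrative assumptions.
import numpy as np
import parselmouth

def f0_measures(wav_path, pitch_floor=75.0, pitch_ceiling=600.0):
    sound = parselmouth.Sound(wav_path)
    pitch = sound.to_pitch(pitch_floor=pitch_floor, pitch_ceiling=pitch_ceiling)
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]  # unvoiced frames are reported as 0 Hz; drop them
    return {
        "min_f0": float(f0.min()),
        "max_f0": float(f0.max()),
        "f0_range": float(f0.max() - f0.min()),
    }

print(f0_measures("ids_utterance.wav"))  # hypothetical audio file
```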

    The effect of syntactic complexity on fluency: Comparing actives and passives in L1 and L2 speech

    This study investigates how syntactic complexity affects speaking performance, in terms of speaking fluency, in a first language (L1) and a second language (L2). Participants (30 Dutch native speakers with an average to advanced level of English) performed two speaking experiments, one in Dutch (L1) and one in English (L2). Syntactic complexity was operationalized by eliciting active and passive sentences in an experimental setting. Comparing the effect of syntactic complexity on different measures of fluency is informative about the underlying cognitive processes in online speech production. We found that syntactic complexity indeed elicits hesitations, both in the L1 and in the L2. Because producing even a relatively simple utterance such as an active sentence may already lead to processing difficulty in the L2, the effect of syntactic complexity was larger for L1 speech. Finally, articulation rate was not affected by syntactic complexity in either the L1 or the L2.
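
    As an illustration of the fluency measures compared above (hesitations and articulation rate), here is a minimal sketch under assumed data structures; the Interval class, the labels, and the 250 ms silent-pause threshold are hypothetical and not the study's actual coding scheme.

```python
# Minimal sketch, under assumed data structures, of two fluency measures of the
# kind compared in the study: articulation rate and counts of hesitations.
# The Interval class, labels, and 250 ms pause threshold are illustrative.
from dataclasses import dataclass

@dataclass
class Interval:
    label: str    # a syllable, a filled pause ("uh"/"uhm"), or silence ("<sil>")
    start: float  # seconds
    end: float    # seconds

def fluency_measures(intervals, pause_threshold=0.25):
    # One common convention: articulation rate = syllables per second of
    # speaking time, excluding both silent and filled pauses.
    speech = [i for i in intervals if i.label not in ("<sil>", "uh", "uhm")]
    speaking_time = sum(i.end - i.start for i in speech)
    filled = sum(1 for i in intervals if i.label in ("uh", "uhm"))
    silent = sum(1 for i in intervals
                 if i.label == "<sil>" and i.end - i.start >= pause_threshold)
    return {
        "articulation_rate": len(speech) / speaking_time,  # syllables/second
        "filled_pauses": filled,
        "silent_pauses": silent,
    }

example = [Interval("the", 0.0, 0.5), Interval("uh", 0.5, 1.0),
           Interval("<sil>", 1.0, 1.5), Interval("dog", 1.5, 2.0)]
print(fluency_measures(example))
# {'articulation_rate': 2.0, 'filled_pauses': 1, 'silent_pauses': 1}
```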

    Morphological resonance in the mental lexicon


    Both native and non-native disfluencies trigger listeners' attention

    Disfluencies (such as uh and uhm) are a common phenomenon in spontaneous speech. Rather than filtering these hesitations from the incoming speech signal, listeners are sensitive to disfluency and have been shown to actually use disfluencies for speech comprehension. For instance, disfluencies have been found to have beneficial effects on listeners’ memory. Accumulating evidence indicates that attentional mechanisms underlie this disfluency effect: upon encountering disfluency, listeners raise their attention to the incoming speech signal. The experiments reported here investigated whether these beneficial effects of disfluency also hold when listening to a non-native speaker. Recent studies on the perception of non-native disfluency suggest that disfluency effects on prediction are attenuated when listening to a non-native speaker. This attenuation may result from listeners being familiar with the more frequent and more variable incidence of disfluencies in non-native speech. If listeners also modulate the beneficial effect of disfluency on memory when listening to a non-native speaker, it would indicate a certain amount of control on the part of the listener over how disfluencies affect attention, and thus comprehension. Furthermore, it would argue against the hypothesis that disfluencies affect comprehension in a rather automatic fashion (cf. the Temporal Delay Hypothesis). Using the Change Detection Paradigm, we presented participants with three-sentence passages that sometimes contained a filled pause (e.g., “... that the patient with the uh wound was...”). After each passage, participants saw a transcript of the spoken passage in which one word had been substituted (e.g., “wound” > “injury”). In our first experiment, participants were more accurate in recalling words from previously heard speech (i.e., detecting the change) if these words had been preceded by a disfluency, relative to a fluent passage. Our second experiment, using non-native speech materials, demonstrated that non-native uh’s elicited an effect of the same magnitude and in the same direction: when new participants listened to a non-native speaker producing the same passages, they were also more accurate on disfluent (as compared to fluent) trials. These data suggest that, upon encountering a disfluency, listeners raise their attention levels irrespective of the (non-)native identity of the speaker. Whereas listeners have been found to modulate prediction effects of disfluencies when listening to non-native speech, no such modulation was found for memory effects of disfluencies in the present data, thus potentially constraining the role of listener control in disfluency processing. The current study emphasizes the central role of attention in an account of disfluency processing.
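
    For readers unfamiliar with the Change Detection Paradigm described above, the following is a minimal, hypothetical sketch of how trial-level accuracy could be tabulated per fluency condition; the Trial fields and example data are assumptions, and the study's actual statistical analysis is not reproduced here.

```python
# Minimal sketch, with an assumed trial format, of tabulating change-detection
# accuracy per condition (fluent vs. disfluent passages). Field names and the
# example data are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Trial:
    participant: str
    condition: str   # "fluent" or "disfluent"
    correct: bool    # was the substituted word detected?

def accuracy_by_condition(trials):
    hits, totals = defaultdict(int), defaultdict(int)
    for t in trials:
        totals[t.condition] += 1
        hits[t.condition] += int(t.correct)
    return {cond: hits[cond] / totals[cond] for cond in totals}

example = [Trial("p01", "disfluent", True), Trial("p01", "fluent", False),
           Trial("p02", "disfluent", True), Trial("p02", "fluent", True)]
print(accuracy_by_condition(example))  # {'disfluent': 1.0, 'fluent': 0.5}
```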

    Table_1.docx


    Measuring L2 speakers’ interactional ability using interactive speech tasks

    This article explores ways to assess interactional performance, and reports on the use of a test format that standardizes the interlocutor's linguistic and interactional contributions to the exchange. It describes the construction and administration of six scripted speech tasks (instruction, advice, and sales tasks) with pre-vocational learners (n = 34), and reports on the extent to which these tasks can be used to assess L2 speakers' interactional performance in a reliable and valid manner. The high levels of agreement found between three independent raters on both holistic and analytical measurements of interactional performance indicate that this construct can be measured reliably with these tasks. Means and standard deviations demonstrate that tasks differentiate between speakers' interactional performance. Holistic ratings of linguistic accuracy and interactional ability correlate highly between tasks that focus on different language functions, and are situated in different interactional domains. Furthermore, positive correlations are found between both holistic and analytic ratings of oral performance and vocabulary size. Positive within-task correlations between analytical ratings of specific interactional strategies and holistic ratings of overall interactional ability show that analytic ratings of meaning negotiation and correcting misinterpretation provide additional information about speakers' interactional ability that is not captured by holistic assessment alone. It is concluded that these tasks are a useful diagnostic tool for practitioners to support their learners' interactional abilities at a sub-skill level.
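
    As a rough illustration of the agreement and correlation analyses summarized above, here is a minimal sketch using made-up data; the rater names, scores, and vocabulary-size values are hypothetical and do not come from the article.

```python
# Minimal sketch of the kind of agreement and correlation analysis described
# above: pairwise Pearson correlations between three raters' holistic scores,
# plus the correlation of the mean rating with vocabulary size. All numbers
# are made-up illustrative data, not results from the article.
from itertools import combinations
import numpy as np
from scipy.stats import pearsonr

holistic = {                      # holistic interactional-ability scores per speaker
    "rater1": np.array([3, 4, 2, 5, 4]),
    "rater2": np.array([3, 5, 2, 4, 4]),
    "rater3": np.array([2, 4, 3, 5, 5]),
}
vocab_size = np.array([4200, 5600, 3100, 6000, 5200])  # hypothetical test scores

for a, b in combinations(holistic, 2):
    r, p = pearsonr(holistic[a], holistic[b])
    print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3f}")

mean_rating = np.mean(list(holistic.values()), axis=0)
r, p = pearsonr(mean_rating, vocab_size)
print(f"mean holistic rating vs vocabulary size: r = {r:.2f}, p = {p:.3f}")
```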
