8 research outputs found

    Why pitch sensitivity matters: event-related potential evidence of metric and syntactic violation detection among Spanish late learners of German

    Event-related potential (ERP) data in monolingual German speakers have shown that sentential metric expectancy violations elicit a biphasic ERP pattern consisting of an anterior negativity and a posterior positivity (P600). This pattern is comparable to that elicited by syntactic violations. However, proficient French late learners of German do not detect violations of metric expectancy in German. They also show qualitatively and quantitatively different ERP responses to metric and syntactic violations. We investigated whether (1) the latter evidence results from a potential insensitivity to pitch cues in speech segmentation among French speakers, or (2) the result is rooted in rhythmic differences between the languages. We therefore tested Spanish late learners of German, since Spanish, unlike French, uses pitch as a segmentation cue even though the basic segmentation unit is the same in both languages (i.e., the syllable). We report ERP responses showing that Spanish L2 learners are sensitive to syntactic as well as metric violations in German sentences, reflected in a P600 response independent of attention to the task. Overall, their behavioral performance resembles that of German native speakers. The current data suggest that Spanish L2 learners are able to extract metric units (trochees) in their L2 (German) even though their basic segmentation unit in Spanish is the syllable. In addition, Spanish L2 learners of German, in contrast to French learners, are sensitive to syntactic violations, indicating a tight link between syntactic and metric competence. This finding emphasizes the relevant role of metric cues not only in L2 prosodic processing but also in syntactic processing.

    Did you get the beat? Late proficient French-German learners extract strong-weak patterns in tonal but not in linguistic sequences

    Event-related potential (ERP) data in French and German have shown that metric violations (i.e., incorrectly stressed words) in a sentence elicit a P600. Furthermore, French speakers find it difficult to discriminate stimuli that vary in word stress position and have been labelled as “stress deaf.” In the current study we investigated (i) whether French late learners of German can perceive deviations from a regular strong–weak stress pattern (trochee) in German sentences, and (ii) whether the same subjects differ in their electrophysiological response from German monolinguals in a non-linguistic “subjective rhythmization” paradigm. Irrespective of the native language, both groups show similar results in the latter paradigm, in which isochronous stimulus trains are subjectively converted into a binary strong–weak grouped percept (trochee). However, we report differences between native and non-native speakers of German in the sentence paradigm. In contrast to German native speakers, French late learners of German fail to show a P600 component in response to deviations from a regular trochaic stress pattern, although attention was directed to the metric pattern of the sentences. The current data suggest that French stress deafness selectively affects the perception of a strong–weak pattern in sentences, while strong–weak grouping of non-linguistic sequences is not language specific. The results imply that linguistic and non-linguistic grouping do not rely on the same neural mechanisms.


    Language and Speech Rhythmic Abilities Correlate with L2 Prosody Imitation Abilities in Typologically Different Languages

    While many studies have demonstrated the relationship between musical rhythm and speech prosody, this has rarely been addressed in the context of second language (L2) acquisition. Here, we investigated whether musical rhythmic skills and the production of L2 speech prosody are predictive of one another. We tested both musical and linguistic rhythmic competences of 23 native French speakers of L2 English. Participants completed perception and production tests in both music and language. In the prosody production test, sentences containing trisyllabic words with a prominence on either the first or the second syllable were heard and had to be reproduced. Participants were less accurate in reproducing penultimate accent placement. Moreover, the accuracy in reproducing phonologically disfavored stress patterns was best predicted by rhythm production abilities. Our results show, for the first time, that better reproduction of musical rhythmic sequences is predictive of a more successful realization of unfamiliar L2 prosody, specifically in terms of stress-accent placement.

    Music, Language, and Rhythmic Timing

    Neural, perceptual, and cognitive oscillations synchronize with rhythmic events in both speech (Luo & Poeppel, 2007) and music (Snyder & Large, 2005). This synchronization decreases perceptual thresholds to temporally predictable events (Lawrance et al., 2014), improves task performance (Ellis & Jones, 2010), and enables speech intelligibility (Peelle & Davis, 2012). Despite implications of music-language transfer effects for improving language outcomes (Gordon et al., 2015), proposals that shared neural and cognitive resources underlie music and speech rhythm perception (e.g., Tierney & Kraus, 2014) are not yet substantiated. The present research aimed to explore this potential overlap by testing whether music-induced oscillations affect metric speech tempo perception, and vice versa. We presented in each of 432 trials a prime sequence (seven repetitions of either a metric speech utterance or an analogous musical phrase) followed by a standard-comparison pair (either two identical speech utterances or two identical musical phrases). Twenty-two participants judged whether the comparison was slower than, faster than, or the same tempo as the standard. We manipulated whether the prime was slower than, faster than, or the same tempo as the standard. Tempo discrimination accuracy was higher when the standard tempo was the same as, compared to slower or faster than, the prime tempo. These findings support the shared-resources view more than the independent-resources view, and they have implications for music-language transfer effects showing improvements in verbal memory (Chan et al., 1998), speech-in-noise perception (Strait et al., 2012), and reading ability in children and adults (Tierney & Kraus, 2013).

    What Pinnipeds Have to Say about Human Speech, Music, and the Evolution of Rhythm

    Research on the evolution of human speech and music benefits from hypotheses and data generated in a number of disciplines. The purpose of this article is to illustrate the high relevance of pinniped research for the study of speech, musical rhythm, and their origins, bridging and complementing current research on primates and birds. We briefly discuss speech, vocal learning, and rhythm from an evolutionary and comparative perspective. We review the current state of the art on pinniped communication and behavior relevant to the evolution of human speech and music, showing interesting parallels to hypotheses on rhythmic behavior in early hominids. We suggest future research directions in terms of species to test and empirical data needed.

    Auditory-Motor Rhythms and Speech Processing in French and German Listeners

    Moving to a speech rhythm can enhance verbal processing in the listener by increasing temporal expectancies (Falk and Dalla Bella, 2016). Here we tested whether this hypothesis holds for prosodically diverse languages such as German (a lexical stress language) and French (a non-stress language). Moreover, we examined the relation between motor performance and the benefits for verbal processing as a function of language. Sixty-four participants, 32 German and 32 French native speakers, detected subtle word changes in accented positions in metrically structured sentences to which they had previously tapped with their index finger. Before each sentence, they were cued by a metronome to tap either congruently (i.e., to accented syllables) or incongruently (i.e., to non-accented parts) to the following speech stimulus. Both French and German speakers detected words better when cued to tap congruently than when cued to tap incongruently. Detection performance was predicted by participants' motor performance in the non-verbal cueing phase. Moreover, tapping rate while participants tapped to speech predicted detection differently for the two language groups, in particular in the incongruent tapping condition. We discuss our findings in light of the rhythmic differences between the two languages and with respect to recent theories of expectancy-driven and multisensory speech processing.

    Treatment of non-fluent aphasia through melody, rhythm and formulaic language

    Left-hemisphere stroke patients often suffer a profound loss of spontaneous speech — known as non-fluent aphasia. Yet, many patients are still able to sing entire pieces of text fluently. This striking finding has inspired mainly two research questions. If the experimental design focuses on one point in time (cross section), one may ask whether or not singing facilitates speech production in aphasic patients. If the design focuses on changes over several points in time (longitudinal section), one may ask whether or not singing qualifies as a therapy to aid recovery from aphasia. The present work addresses both of these questions based on two separate experiments.

    A cross-sectional experiment investigated the relative effects of melody, rhythm, and lyric type on speech production in seventeen patients with non-fluent aphasia. The experiment controlled for vocal frequency variability, pitch accuracy, rhythmicity, syllable duration, phonetic complexity and other influences, such as learning effects and the acoustic setting. Contrary to earlier reports, the cross-sectional results suggest that singing may not benefit speech production in non-fluent aphasic patients over and above rhythmic speech. Previous divergent findings could very likely be due to effects of the acoustic setting, insufficient control for syllable duration, and language-specific stress patterns. However, the data reported here indicate that rhythmic pacing may be crucial, particularly for patients with lesions including the basal ganglia. Overall, basal ganglia lesions accounted for more than fifty percent of the variance related to rhythmicity. The findings suggest that benefits typically attributed to singing in the past may actually have their roots in rhythm. Moreover, the results demonstrate that lyric type may have a profound impact on speech production in non-fluent aphasic patients. Among the studied patients, lyric familiarity and formulaic language appeared to strongly mediate speech production, regardless of whether patients were singing or speaking rhythmically. Lyric familiarity and formulaic language may therefore help to explain effects that have, up until now, been presumed to result from singing.

    A longitudinal experiment investigated the relative long-term effects of melody and rhythm on the recovery of formulaic and non-formulaic speech. Fifteen patients with chronic non-fluent aphasia underwent either singing therapy, rhythmic therapy, or standard speech therapy. The experiment controlled for vocal frequency variability, phonatory quality, pitch accuracy, syllable duration, phonetic complexity and other influences, such as the acoustic setting and learning effects induced by the testing itself. The longitudinal results suggest that singing and rhythmic speech may be similarly effective in the treatment of non-fluent aphasia. Both singing and rhythmic therapy patients made good progress in the production of common, formulaic phrases — known to be supported by right corticostriatal brain areas. This progress occurred at an early stage of both therapies and was stable over time. Moreover, relatives of the patients reported that they were using a fixed number of formulaic phrases successfully in communicative contexts. Independent of whether patients had received singing or rhythmic therapy, they were able to easily switch between singing and rhythmic speech at any time. Conversely, patients receiving standard speech therapy made less progress in the production of formulaic phrases. They did, however, improve their production of unrehearsed, non-formulaic utterances, in contrast to singing and rhythmic therapy patients, who did not. In light of these results, it may be worth considering the combined use of standard speech therapy and the training of formulaic phrases, whether sung or rhythmically spoken. This combination may yield better results for speech recovery than either therapy alone. Overall, treatment and lyric type accounted for about ninety percent of the variance related to speech recovery in the data reported here.

    The present work delivers three main results. First, it may not be singing itself that aids speech production and speech recovery in non-fluent aphasic patients, but rhythm and lyric type. Second, the findings may challenge the view that singing causes a transfer of language function from the left to the right hemisphere. Moving beyond this left-right hemisphere dichotomy, the current results are consistent with the idea that rhythmic pacing may partly bypass corticostriatal damage. Third, the data support the claim that non-formulaic utterances and formulaic phrases rely on different neural mechanisms, suggesting a two-path model of speech recovery. Standard speech therapy focusing on non-formulaic, propositional utterances may engage, in particular, left perilesional brain regions, while training of formulaic phrases may open new ways of tapping into right-hemisphere language resources — even without singing.