
    The Influence of the Mother Tongue and of Musical Experience on Rhythm Perception

    Native language and musical experience are both said to influence our perception of rhythm; however, research on the influence of native language on rhythm perception is limited. This thesis tested whether and how linguistic and musical experience affect rhythm perception. The term rhythm, as used here, is equivalent to the musical term metre: a recurring regular pattern of prominent and non-prominent elements. First, this thesis examined language-specific rhythms in English, Japanese, and Russian to explore whether listeners are better at detecting irregularities in rhythms that occur frequently in their native language than in rhythms that are less frequent. A review of the existing literature and an original, corpus-based examination show that English and Russian rhythms are based on a relatively regular alternation of prominent and non-prominent syllables, whereas Japanese rhythm is based on a subtler, less regular alternation of prominent and non-prominent morae. Similarly, culture-specific musical rhythms are discussed to examine the influence of musical experience on rhythm perception. It is shown that non-binary rhythms are prevalent in traditional Japanese and Russian musical works, while they are relatively rare in English music. A series of perceptual experiments with English-, Japanese-, and Russian-speaking musicians and non-musicians showed that musical experience affects rhythm perception but is less effective than linguistic experience in shaping responses to rhythm irregularities. These experiments showed that Japanese speakers perceived binary and non-binary rhythms more accurately than English and Russian speakers, with no significant differences between English and Russian speakers. In addition, clashes (rhythm irregularities caused by successive prominent elements) were less tolerated than lapses (rhythm irregularities caused by sequences of non-prominent elements): all participants tolerated lapses more readily than clashes, which suggests that clashes produce dysrhythmic sequences that are easier to detect than those produced by lapses.

    The Connectivity of Musical Aptitude and Foreign Language Learning Skills: Neural and Behavioural Evidence

    Given the structural and acoustic similarities between speech and music, and possible overlapping cerebral structures in speech and music processing, a possible relationship between musical aptitude and linguistic abilities, especially second language pronunciation skills, was investigated. Moreover, the laterality effect of the mother tongue was examined in both adults and children by means of dichotic listening scores. Finally, two event-related potential studies sought to reveal whether children with advanced second language pronunciation skills and higher general musical aptitude differed from children with less-advanced pronunciation skills and lower musical aptitude in accuracy when preattentively processing mistuned triads and music/speech sound durations. The results showed a significant relationship between musical aptitude, English pronunciation skills, chord discrimination ability, and sound-change-evoked brain activation in response to musical stimuli (durational differences and triad contrasts). Regular music practice may also have a modulatory effect on the brain's linguistic organization and alter hemispheric functioning in those who have practised music for years. Based on the present results, it is proposed that language skills, both in production and discrimination, are interconnected with perceptual musical skills.

    Chinese Tones: Can You Listen With Your Eyes? The Influence of Visual Information on Auditory Perception of Chinese Tones

    Summary (Yueqiao Han): Considering that more than half of the languages spoken in the world (60%-70%) are so-called tone languages (Yip, 2002), and that tone is notoriously difficult for Westerners to learn, this dissertation focused on tone perception in Mandarin Chinese by tone-naïve speakers. Moreover, it has been shown that speech perception is more than just an auditory phenomenon, especially when the speaker's face is visible. The aim of this dissertation is therefore also to study the value of visual information (over and above that of acoustic information) in Mandarin tone perception for tone-naïve perceivers, in combination with contextual factors (such as speaking style) and individual factors (such as musical background). Consequently, this dissertation assesses the relative strength of acoustic and visual information in tone perception and tone classification. In the first two empirical, exploratory studies, in Chapters 2 and 3, we set out to investigate to what extent tone-naïve perceivers are able to identify Mandarin Chinese tones in isolated words, whether they can benefit from seeing the speaker's face, and what the contributions are of a hyperarticulated speaking style and of their own musical experience. In Chapter 2 we investigated the effect of visual cues (comparing audio-only with audio-visual presentations) and speaking style (comparing a natural speaking style with a teaching speaking style) on the perception of Mandarin tones by tone-naïve listeners, looking both at the relative strength of these two factors and at their possible interactions; Chapter 3 was concerned with the effects of the participants' musicality (combined with modality) on Mandarin tone perception.
In both studies, a Mandarin Chinese tone identification experiment was conducted: native speakers of a non-tonal language were asked to distinguish Mandarin Chinese tones based on audio-only or audio-visual materials. To include variation, the experimental stimuli were recorded with four different speakers in imagined natural and teaching speaking scenarios. The proportions of correct responses and average reaction times of the participants were reported. The tone identification experiment presented in Chapter 2 showed that the video conditions (audio-visual natural and audio-visual teaching) resulted in overall higher accuracy in tone perception than the audio-only conditions (audio-only natural and audio-only teaching), but no better performance was observed in the audio-visual conditions in terms of reaction time. Teaching style turned out to make no difference to the speed or accuracy of Mandarin tone perception (as compared to a natural speaking style). We then presented the same experimental materials and procedure in Chapter 3, but with musicians and non-musicians as participants. The Goldsmiths Musical Sophistication Index (Gold-MSI) was used to assess the musical aptitude of the participants. The data showed that, overall, musicians outperformed non-musicians in the tone identification task in both audio-visual and audio-only conditions. Both groups identified tones more accurately in the audio-visual conditions than in the audio-only conditions. These results provide further evidence that the availability of visual cues alongside auditory information is useful for people with no knowledge of Mandarin Chinese tones when they need to learn to identify them. Of all the musical skills measured by the Gold-MSI, the amount of musical training was the only predictor of the accuracy of Mandarin tone perception.
These findings suggest that learning to perceive Mandarin tones benefits from musical expertise, and that visual information can facilitate Mandarin tone identification, but mainly for tone-naïve non-musicians. In addition, performance differed by tone: musicality improved accuracy for every tone, and some tones were easier to identify than others. In particular, tone 3 (the low-falling-rising tone) proved the easiest to identify, while tone 4 (the high-falling tone) was the most difficult for all participants. The results of the first two experiments, presented in Chapters 2 and 3, showed that adding visual cues to clear auditory information facilitated tone identification for tone-naïve perceivers (accuracy was significantly higher in the audio-visual conditions than in the audio-only conditions). This visual facilitation was unaffected by a (hyperarticulated) speaking style or by the musical skill of the participants. Moreover, variation in speakers and tones affected how accurately tone-naïve perceivers identified Mandarin tones. In Chapter 4, we compared the relative contributions of auditory and visual information during Mandarin Chinese tone perception. More specifically, we aimed to answer two questions: first, whether there is audio-visual integration at the tone level (i.e., we explored perceptual fusion between auditory and visual information); second, how visual information affects tone perception for native speakers and non-native (tone-naïve) speakers. To do this, we constructed various tone combinations of congruent (e.g., an auditory tone 1 paired with a visual tone 1, written as AxVx) and incongruent (e.g., an auditory tone 1 paired with a visual tone 2, written as AxVy) auditory-visual materials and presented them to native speakers of Mandarin Chinese and speakers of non-tonal languages.
Accuracy, defined as the percentage of correct identifications of a tone based on its auditory realization, was reported. Comparing the relative contributions of auditory and visual information with congruent and incongruent auditory-visual Chinese materials for native speakers of Chinese and of non-tonal languages, we found that visual information did not significantly contribute to tone identification for native speakers of Mandarin Chinese. When there was a discrepancy between visual cues and acoustic information, both native and tone-naïve participants tended to rely more on the auditory input than on the visual cues. Unlike the native speakers of Mandarin Chinese, tone-naïve participants were significantly influenced by the visual information during auditory-visual integration, and they identified tones more accurately in congruent stimuli than in incongruent stimuli. In line with our previous work, the tone confusion matrix showed that tone identification varied with individual tones, with tone 3 (the low-dipping tone) being the easiest to identify and tone 4 (the high-falling tone) the most difficult. The results showed no evidence of auditory-visual integration among native participants, while visual information was helpful for tone-naïve participants. However, even for this group, visual information only marginally increased accuracy in the tone identification task, and this increase depended on the tone in question. Chapter 5 also zooms in on the relative strength of auditory and visual information for tone-naïve perceivers, but from the perspective of tone classification. In this chapter, we studied the acoustic and visual features of the tones produced by native speakers of Mandarin Chinese. Computational models based on acoustic features, visual features, and combined acoustic-visual features were constructed to automatically classify Mandarin tones.
Moreover, this study examined what perceivers pick up (perception) from what a speaker does (production, facial expression) by studying both production and perception. More specifically, this chapter set out to answer: (1) which acoustic and visual features of tones produced by native speakers can be used to automatically classify Mandarin tones; (2) whether the features used in tone production are similar to or different from the ones that have cue value for tone-naïve perceivers when they categorize tones; and (3) whether and how visual information (i.e., facial expression and facial pose) contributes to the classification of Mandarin tones over and above the information provided by the acoustic signal. To address these questions, we used the stimuli recorded for Chapter 2 and the response data collected for Chapter 3. Basic acoustic and visual features were extracted, and Random Forest classification was used to identify the most important acoustic and visual features for classifying the tones. The classifiers were trained both on produced tone classification (given a set of auditory and visual features, predict the produced tone) and on perceived tone classification (given a set of features, predict the tone as identified by the participant). The results showed that acoustic features outperformed visual features for classification of both the produced and the perceived tone. However, tone-naïve perceivers did resort to visual information in certain cases (when they gave wrong responses). So visual information does not seem to play a significant role in native speakers' tone production, but tone-naïve perceivers do sometimes consider visual information in their tone identification.
These findings provide additional evidence that auditory information is more important than visual information in Mandarin tone perception and tone classification. Notably, visual features contributed to the participants' erroneous performance, which suggests that visual information actually misled tone-naïve perceivers in their tone identification task. To some extent, this is consistent with our claim that visual cues do influence tone perception. In addition, the ranking of the auditory and visual features in tone perception showed that the factor perceiver (i.e., the participant) was responsible for the largest amount of variance explained in the responses of our tone-naïve participants, indicating the importance of individual differences in tone perception. To sum up, perceivers who do not have tone in their language background tend to make use of visual cues from the speaker's face when perceiving unknown tones (Mandarin Chinese in this dissertation), in addition to the auditory information they clearly also use. However, auditory cues remain the primary source they rely on. A consistent finding across the studies is that variation between tones, speakers, and participants affects the accuracy of tone identification for tone-naïve speakers.
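The classification pipeline described above (extract acoustic and visual features, train a Random Forest, inspect feature importances) can be sketched as follows. This is a minimal illustration on synthetic data: the feature names (f0 mean, f0 slope, eyebrow and head movement) and the toy labeling rule are assumptions for demonstration, not the dissertation's actual features or materials.

```python
# Hypothetical sketch of Random Forest tone classification from
# acoustic + visual features. Synthetic data only; feature names
# and the labeling rule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Assumed features: two acoustic (f0 mean, f0 slope), two visual.
f0_mean = rng.normal(200, 30, n)
f0_slope = rng.normal(0, 1, n)
eyebrow = rng.normal(0, 1, n)   # visual feature (uninformative here)
head = rng.normal(0, 1, n)      # visual feature (uninformative here)
# Toy rule: the tone label (0-3) depends only on the acoustic features.
tone = (f0_slope > 0).astype(int) + 2 * (f0_mean > 200).astype(int)

X = np.column_stack([f0_mean, f0_slope, eyebrow, head])
X_tr, X_te, y_tr, y_te = train_test_split(X, tone, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
importances = dict(zip(["f0_mean", "f0_slope", "eyebrow", "head"],
                       clf.feature_importances_))
print(f"accuracy: {acc:.2f}")
print(importances)
```

On data like this, where the label is driven by the acoustic features, the acoustic feature importances dominate, mirroring the finding above that acoustic features outperformed visual ones for tone classification.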

    Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers

    Musical experience and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native-language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers, and to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues, since in Finnish vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence of an association between musical sophistication and more precise duration processing in Finnish speakers, either in the auditory brainstem response or in behavioral tasks. More musically sophisticated Finnish speakers do, however, show enhanced pitch discrimination compared to Finnish speakers with less musical experience, as well as greater duration modulation in a complex task. These results are consistent with a ceiling effect for sound features encoded in the phonology of the native language, leaving room for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in real-world musical situations. These results have implications for research into the specificity of plasticity in the auditory system and into the interaction of specific language features with musical experience.

    Investigating the effect of long-term musical experience on the auditory processing skills of young Maltese adults

    Learning and practising a musical instrument has recently been thought to 'train' the brain to process sound in a more refined manner. As a result, musicians with consistent exposure to musical practice have been suspected of having superior auditory processing skills. This study aimed to investigate this phenomenon within the Maltese context by testing two cohorts of young Maltese adults. Participants in the musician cohort had experienced consistent musical training throughout their lifetime, while those in the non-musician cohort had no history of musical training. A total of 24 Maltese speakers (14 musicians and 10 non-musicians) aged between 19 and 31 years were tested on Frequency Discrimination (FD), Duration Discrimination (DD), Temporal Resolution (TR), and speech-in-noise recognition. The main outcomes for each cohort were compared and analysed statistically. Compared to the non-musician cohort, the musicians performed slightly better throughout testing; surprisingly, however, a statistically significant advantage was present only in the FD test. Although the musicians showed a degree of superiority on the other tests, the differences in mean scores were not statistically significant. The results of this investigation are broadly consistent with previous research, in that long-term musical experience manifested itself in slightly superior performance on auditory processing tasks, but this difference was not large enough to be statistically significant.

    Behavioral and subcortical signatures of musical expertise in Mandarin Chinese speakers

    Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on the processing of auditory features, and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing in different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin-speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration, as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in the subcortical duration processing of nonspeech sounds or in the subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers.

    Evaluation of the perceptual magnet effect and categorical perception for musical timbre

    Recent auditory research has raised fundamental questions about the perceptual magnet effect (PME), in which discrimination performance is poorer for stimuli that approach the best exemplars of a phonetic category. It has been suggested that the effect reflects inter-categorical comparisons and might not generalize to nonspeech. Three experiments addressed these concerns. In Experiment 1, prototype and non-prototype stimuli were determined from goodness ratings of synthesized (violin) timbres varying in the center frequencies of the F1 and F2 formants. Experiment 2 tested for a PME using discrimination data, and for influences from other categories by comparing goodness ratings of stimuli in prototype, non-prototype, and no-context conditions. Experiment 3 used labeling and discrimination tasks to assess whether categorical perception occurs with timbres. Although the stimuli were reliably identified as falling within the intended category, no PME was found. It is suggested that the PME, if a real phenomenon, is too difficult to tease apart from categorization tendencies.

    Musical experience may help the brain respond to second language reading

    A person's native language background exerts constraints on the brain's automatic responses while learning a second language. It remains unclear, however, whether and how musical experience may help the brain overcome such constraints and meet the requirements of a second language. This study compared native Chinese learners of English who were musicians, their non-musician peers, and native English readers on the automatic integration of English letter-sounds, using an ERP cross-modal audiovisual mismatch negativity (MMN) paradigm. The results showed that native Chinese-speaking musicians successfully integrated English letters and sounds, but their non-musician peers did not, despite comparable English learning experience and proficiency. Moreover, native Chinese-speaking musicians demonstrated enhanced cross-modal MMN for both synchronized and delayed letter-sound integration, while native English readers showed enhanced cross-modal MMN only for synchronized integration. Native Chinese-speaking musicians also showed stronger theta oscillations when integrating English letters and sounds, suggesting better top-down modulation. In contrast, native English readers showed stronger delta oscillations for synchronized integration, and their cross-modal delta oscillations correlated significantly with English reading performance. These findings suggest that long-term professional musical experience may enhance top-down modulation and thereby help the brain efficiently integrate the letter-sounds required by a second language. Such benefits from musical experience may differ from those of specific language experience in shaping the brain's automatic responses to reading.

    Better than native: Tone language experience enhances English lexical stress discrimination in Cantonese-English bilingual listeners

    Available online 13 April 2019. While many second language (L2) listeners are known to struggle when discriminating non-native features absent in their first language (L1), no study has reported L2 listeners performing better than native listeners in this regard. The present study tested whether Cantonese-English bilinguals were better than native English listeners at discriminating English lexical stress in individual words and pseudowords, even though lexical stress is absent in Cantonese. In experiments manipulating acoustic, phonotactic, and lexical cues, Cantonese-English bilingual adults discriminated English lexical stress better than native English listeners across all phonotactic/lexical conditions when the fundamental frequency (f0) cue to lexical stress was present. The findings underscore the facilitative effect of Cantonese tone language experience on English lexical stress discrimination. This article is based, in part, on the fourth chapter of the PhD thesis submitted by William Choi to The University of Hong Kong. This research was supported, in part, by the Language Learning Dissertation Grant from Language Learning to William Choi. It was also supported by the Pilot Scheme on International Experiences for Research Postgraduate Students from The University of Hong Kong to William Choi, and by the Early Career Scheme (27402514), General Research Fund (17673216), and General Research Fund (17609518) from the HKSAR Research Grants Council to Xiuli Tong. Support was also provided by Ministerio de Ciencia e Innovación, Grant PSI2014-53277, Centro de Excelencia Severo Ochoa, Grant SEV-2015-0490, and by the National Science Foundation under Grant IBSS-1519908 to Arthur Samuel. We thank Benjamin Munson for his useful suggestion about the syllable-timed nature of Cantonese, and the four anonymous reviewers for comments that helped us develop our ideas and presentation more clearly.