7,778 research outputs found

    Composing literacy: Exploring how musical aptitude explains technical reading abilities

    Several studies have indicated a connection between musical skills and reading-related abilities. However, the underlying reasons for the connection have remained unclear. I studied whether subskills within musical aptitude can explain the relationship between music and reading in 8–11-year-old children (N = 66). The children were tested for musical aptitude subskills: pitch discrimination, temporal discrimination, and tonal memory. The focus lay on technical reading abilities, namely performance in reading fluency and sentence comprehension in the Finnish primary school reading test. Linear regression models were used to assess whether the subskills, both together and separately, account for the variance in reading performance. The combination of musical aptitude subskills was related to technical reading abilities. Independently of the other subskills, tonal memory explained both reading fluency and sentence comprehension, while pitch discrimination explained only reading fluency. The findings support the hypothesis that musical aptitude and reading-related abilities share common mechanisms, such as pitch perception. More extensive research on how musical aptitude and reading are related is needed. Information about these underlying mechanisms could be used to create music interventions to support reading acquisition.
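
The regression analysis described in the abstract can be sketched as follows; this is an illustrative simulation (synthetic scores, hypothetical variable names and effect sizes), not the study's code or data:

```python
# Illustrative sketch only: regressing a reading score on three
# musical-aptitude subskills, as in the analysis described above.
# All data here are simulated; names and effect sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 66  # sample size reported in the abstract

# Simulated subskill scores: pitch discrimination, temporal
# discrimination, and tonal memory (columns 0, 1, 2)
X = rng.normal(size=(n, 3))
# Simulated reading fluency, driven mostly by tonal memory and
# pitch discrimination (arbitrary illustrative weights)
y = 0.5 * X[:, 2] + 0.3 * X[:, 0] + rng.normal(scale=0.5, size=n)

# Ordinary least squares via the normal equations, with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: proportion of variance in the reading score accounted for
# by the subskills together
resid = y - A @ coef
r2 = 1 - resid.var() / y.var()
print(round(r2, 2))
```

Fitting the model with single predictors instead of the full design matrix would correspond to the abstract's "separately" analyses.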

    Tones in Zhangzhou: Pitch and Beyond

    This study draws on various approaches—field linguistics, auditory and acoustic phonetics, and statistics—to explore and explain the nature of Zhangzhou tones, an under-described Southern Min variety. Several original findings emerged from the analyses of the data from 21 speakers. The realisations of Zhangzhou tones are multidimensional. The single parameter of pitch/F0 is not sufficient to characterise tonal contrasts in either monosyllabic or polysyllabic settings in Zhangzhou. Instead, various parameters, including pitch/F0, duration, vowel quality, voice quality, and syllable coda type, interact in a complicated but consistent way to code tonal distinctions. Zhangzhou has eight tones rather than the seven proposed in previous studies. This finding resulted from examining the realisations of diverse parameters across three different contexts (isolation, phrase-initial, and phrase-final), rather than classifying tones in citation and in terms of the preservation of Middle Chinese tonal categories. Tonal contrasts in Zhangzhou can be neutralised across different linguistic contexts. Identifying the number of tonal contrasts based simply on tonal realisations in the citation environment is not sufficient. Instead, examining tonal realisations across different linguistic contexts beyond monosyllables is imperative for understanding the nature of tone. Tone sandhi in Zhangzhou is syntactically relevant. The tone sandhi domain is not phonologically determined but rather is aligned with a syntactic phrase XP. Within a given XP, the realisations of the tones at non-phrase-final positions undergo alternation phonologically and phonetically. Nevertheless, the alternations are sensitive only to the phrase boundaries and are not affected by the internal structure of syntactic phrases. Tone sandhi in Zhangzhou is phonologically inert but phonetically sensitive. 
The realisations of Zhangzhou tones in disyllabic phrases are not categorically affected by their surrounding tones but are phonetically sensitive to surrounding environments. For instance, the pitch/F0 onsets of phrase-final tones are largely sensitive to pitch/F0 offsets of preceding tones and appear to have diverse variants. The mappings between Zhangzhou citation and disyllabic tones are morphologically conditioned. Phrase-initial tones are largely not related to the citation tones at either the phonological or the phonetic level, while phrase-final tones are categorically related to the citation tones but phonetically are not quite the same because of predictable sensitivity to surrounding environments. Each tone in Zhangzhou can be regarded as a single morpheme having two alternating allomorphs (tonemes), one for non-phrase-final variants and one for variants in citation and phrase-final contexts, both of which are listed in the mental lexicon of native Zhangzhou speakers but are phonetically distant on the surface. In summary, the realisations of Zhangzhou tones are multidimensional, involving a variety of segmental and suprasegmental parameters. The interactions of Zhangzhou tones are complicated, involving phonetics, phonology, syntax, and morphology. Neutralisation of Zhangzhou tonal contrasts occurs across different contexts, including citation, phrase-final, and non-phrase-final. Thus, researchers must go beyond pitch to understand tone thoroughly as a phenomenon in Southern Min.

    Articulation in time: Some word-initial segments in Swedish

    Speech is both dynamic and distinctive at the same time. This implies a certain contradiction, which has entertained researchers in phonetics and phonology for decades. The present dissertation assumes that articulation behaves as a function of time, and that we can find phonological structures in the dynamical systems. Electromagnetic articulography (EMA) is used to measure mechanical articulatory movements in Swedish speakers. The results show that tonal context affects articulatory coordination. Acceleration seems to divide the movements of the jaw and lips into intervals of postures and active movements. These intervals are affected differently by the tonal context. Furthermore, a bilabial consonant is shorter if the next consonant is also made with the lips. A hypothesis of a correlation between acoustic segment duration and acceleration is presented. The dissertation highlights the importance of time for how speech ultimately sounds. Particularly significant is the combination of articulatory timing and articulatory duration.
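
The idea of using acceleration to delimit postures and active movements can be illustrated with a small sketch on a synthetic trajectory (not EMA data); the sampling rate, trajectory shape, and threshold are all assumptions:

```python
# Illustrative only: locating the boundaries between postures and an
# active movement by thresholding the magnitude of the acceleration
# of a synthetic articulator trajectory.
import numpy as np

fs = 500                       # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic lip-aperture signal: a hold, a closing movement, another hold
pos = np.where(t < 0.4, 10.0,
               np.where(t > 0.6, 2.0, 10.0 - 40.0 * (t - 0.4)))

vel = np.gradient(pos, 1 / fs)  # first derivative: velocity
acc = np.gradient(vel, 1 / fs)  # second derivative: acceleration

# High |acceleration| marks the transitions between posture intervals
# and the active movement (threshold chosen for this synthetic signal)
boundary_times = t[np.abs(acc) > 1000.0]
print(boundary_times.round(3))
```

On this synthetic signal the detected boundaries cluster around 0.4 s and 0.6 s, the onset and offset of the movement; real EMA trajectories would of course require smoothing before differentiation.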

    Chinese Tones: Can You Listen With Your Eyes? The Influence of Visual Information on Auditory Perception of Chinese Tones

    Summary (Yueqiao Han). Considering the fact that more than half of the languages spoken in the world (60%–70%) are so-called tone languages (Yip, 2002), and that tone is notoriously difficult to learn for Westerners, this dissertation focused on tone perception in Mandarin Chinese by tone-naïve speakers. Moreover, it has been shown that speech perception is more than just an auditory phenomenon, especially in situations where the speaker’s face is visible. Therefore, the aim of this dissertation is also to study the value of visual information (over and above that of acoustic information) in Mandarin tone perception for tone-naïve perceivers, in combination with other contextual factors (such as speaking style) and individual factors (such as musical background). Consequently, this dissertation assesses the relative strength of acoustic and visual information in tone perception and tone classification. In the first two empirical and exploratory studies, in Chapters 2 and 3, we set out to investigate to what extent tone-naïve perceivers are able to identify Mandarin Chinese tones in isolated words, whether or not they can benefit from seeing the speaker’s face, and what the contribution is of a hyperarticulated speaking style and/or their own musical experience. In Chapter 2 we investigated the effect of visual cues (comparing audio-only with audio-visual presentations) and speaking style (comparing a natural speaking style with a teaching speaking style) on the perception of Mandarin tones by tone-naïve listeners, looking both at the relative strength of these two factors and their possible interactions; Chapter 3 was concerned with the effects of the musicality of the participants (combined with modality) on Mandarin tone perception. 
In both of these studies, a Mandarin Chinese tone identification experiment was conducted: native speakers of a non-tonal language were asked to distinguish Mandarin Chinese tones based on audio-only or audio-visual materials. In order to include variation, the experimental stimuli were recorded using four different speakers in imagined natural and teaching speaking scenarios. The proportion of correct responses (and average reaction times) of the participants were reported. The tone identification experiment presented in Chapter 2 showed that the video conditions (audio-visual natural and audio-visual teaching) resulted in an overall higher accuracy in tone perception than the audio-only conditions (audio-only natural and audio-only teaching), but no better performance was observed in the audio-visual conditions in terms of reaction time, compared to the audio-only conditions. Teaching style turned out to make no difference to the speed or accuracy of Mandarin tone perception (as compared to a natural speaking style). We then presented the same experimental materials and procedure in Chapter 3, but now with musicians and non-musicians as participants. The Goldsmiths Musical Sophistication Index (Gold-MSI) was used to assess the musical aptitude of the participants. The data showed that, overall, musicians outperformed non-musicians in the tone identification task in both audio-visual and audio-only conditions. Both groups identified tones more accurately in the audio-visual conditions than in the audio-only conditions. These results provided further evidence for the view that the availability of visual cues along with auditory information is useful for people who have no knowledge of Mandarin Chinese tones when they need to learn to identify these tones. Of all the musical skills measured by the Gold-MSI, the amount of musical training was the only predictor that had an impact on the accuracy of Mandarin tone perception. 
These findings suggest that learning to perceive Mandarin tones benefits from musical expertise, and that visual information can facilitate Mandarin tone identification, but mainly for tone-naïve non-musicians. In addition, performance differed by tone: musicality improved accuracy for every tone, and some tones were easier to identify than others. In particular, the identification of tone 3 (the low-falling-rising tone) proved to be the easiest, while tone 4 (the high-falling tone) was the most difficult to identify for all participants. The results of the first two experiments, presented in Chapters 2 and 3, showed that adding visual cues to clear auditory information facilitated tone identification for tone-naïve perceivers (there was significantly higher accuracy in the audio-visual conditions than in the audio-only conditions). This visual facilitation was unaffected by the presence of a (hyperarticulated) speaking style or by the musical skill of the participants. Moreover, variation among speakers and tones affected the accurate identification of Mandarin tones by tone-naïve perceivers. In Chapter 4, we compared the relative contribution of auditory and visual information during Mandarin Chinese tone perception. More specifically, we aimed to answer two questions: first, whether or not there is audio-visual integration at the tone level (i.e., we explored perceptual fusion between auditory and visual information); and second, how visual information affects tone perception for native speakers and non-native (tone-naïve) speakers. To do this, we constructed various tone combinations of congruent (e.g., an auditory tone 1 paired with a visual tone 1, written as AxVx) and incongruent (e.g., an auditory tone 1 paired with a visual tone 2, written as AxVy) auditory-visual materials and presented them to native speakers of Mandarin Chinese and speakers of tone-naïve languages. 
Accuracy, defined as the percentage of correct identifications of a tone based on its auditory realization, was reported. When comparing the relative contribution of auditory and visual information during Mandarin Chinese tone perception with congruent and incongruent auditory and visual Chinese material for native speakers of Chinese and of non-tonal languages, we found that visual information did not significantly contribute to tone identification for native speakers of Mandarin Chinese. When there was a discrepancy between visual cues and acoustic information, (native and tone-naïve) participants tended to rely more on the auditory input than on the visual cues. Unlike the native speakers of Mandarin Chinese, tone-naïve participants were significantly influenced by the visual information during their auditory-visual integration, and they identified tones more accurately in congruent stimuli than in incongruent stimuli. In line with our previous work, the tone confusion matrix showed that tone identification varies with individual tones, with tone 3 (the low-dipping tone) being the easiest one to identify, whereas tone 4 (the high-falling tone) was the most difficult one. The results did not show evidence for auditory-visual integration among native participants, while visual information was helpful for tone-naïve participants. However, even for this group, visual information only marginally increased accuracy in the tone identification task, and this increase depended on the tone in question. Chapter 5 also zooms in on the relative strength of auditory and visual information for tone-naïve perceivers, but from the perspective of tone classification. In this chapter, we studied the acoustic and visual features of the tones produced by native speakers of Mandarin Chinese. Computational models based on acoustic features, visual features, and combined acoustic-visual features were constructed to automatically classify Mandarin tones. 
Moreover, this study examined what perceivers pick up (perception) from what a speaker does (production, facial expression) by studying both production and perception. More specifically, this chapter set out to answer three questions: (1) which acoustic and visual features of tones produced by native speakers could be used to automatically classify Mandarin tones; (2) whether the features used in tone production are similar to or different from the ones that have cue value for tone-naïve perceivers when they categorize tones; and (3) whether and how visual information (i.e., facial expression and facial pose) contributes to the classification of Mandarin tones over and above the information provided by the acoustic signal. To address these questions, the stimuli that had been recorded (and described in Chapter 2) and the response data that had been collected (and reported on in Chapter 3) were used. Basic acoustic and visual features were extracted. Based on these, we used Random Forest classification to identify the most important acoustic and visual features for classifying the tones. The classifiers were trained on produced tone classification (given a set of auditory and visual features, predict the produced tone) and on perceived/responded tone classification (given a set of features, predict the corresponding tone as identified by the participant). The results showed that acoustic features outperformed visual features for tone classification, both for the produced and the perceived tone. However, tone-naïve perceivers did revert to the use of visual information in certain cases (when they gave wrong responses). So, visual information does not seem to play a significant role in native speakers’ tone production, but tone-naïve perceivers do sometimes consider visual information in their tone identification. 
These findings provide additional evidence that auditory information is more important than visual information in Mandarin tone perception and tone classification. Notably, visual features contributed to the participants’ erroneous performance. This suggests that visual information actually misled tone-naïve perceivers in their task of tone identification. To some extent, this is consistent with our claim that visual cues do influence tone perception. In addition, the ranking of the auditory and visual features in tone perception showed that the factor perceiver (i.e., the participant) was responsible for the largest amount of variance explained in the responses by our tone-naïve participants, indicating the importance of individual differences in tone perception. To sum up, perceivers who do not have tone in their language background tend to make use of visual cues from the speaker’s face when perceiving unknown tones (Mandarin Chinese in this dissertation), in addition to the auditory information they clearly also use. However, auditory cues are still the primary source they rely on. A consistent finding across the studies is that variation between tones, speakers, and participants has an effect on the accuracy of tone identification for tone-naïve speakers.
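
The Random Forest comparison of acoustic versus visual features described in this summary can be sketched roughly as follows; the features and their signal strengths are simulated assumptions, not the dissertation's data:

```python
# Illustrative sketch only: comparing Random Forest tone classification
# from (simulated) acoustic features vs. (simulated) visual features.
# The feature definitions and their signal strengths are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
tones = rng.integers(1, 5, size=n)  # labels for the four Mandarin tones

# Simulated acoustic features (e.g., F0 level and slope) carry a strong
# tone signal; simulated visual features (e.g., head or eyebrow movement)
# carry a much weaker one, mirroring the finding reported above.
acoustic = np.column_stack([tones + rng.normal(0, 0.5, n),
                            0.5 * tones + rng.normal(0, 0.5, n)])
visual = np.column_stack([0.1 * tones + rng.normal(0, 1.0, n),
                          rng.normal(0, 1.0, n)])

rf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_acoustic = cross_val_score(rf, acoustic, tones, cv=5).mean()
acc_visual = cross_val_score(rf, visual, tones, cv=5).mean()
# With these simulated signal strengths, acoustic features should
# classify tones far better than visual features
print(acc_acoustic > acc_visual)
```

A `feature_importances_` inspection on a fitted forest would correspond to the summary's ranking of the most important acoustic and visual features.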

    Temporal relation between top-down and bottom-up processing in lexical tone perception

    Speech perception entails both top-down processing that relies primarily on language experience and bottom-up processing that depends mainly on instant auditory input. Previous models of speech perception often claim that bottom-up processing occurs in an early time window, whereas top-down processing takes place in a late time window after stimulus onset. In this paper, we evaluated the temporal relation of both types of processing in lexical tone perception. We conducted a series of event-related potential (ERP) experiments that recruited Mandarin participants and adopted three experimental paradigms, namely dichotic listening, lexical decision with phonological priming, and semantic violation. By systematically analyzing the lateralization patterns of the early and late ERP components observed in these experiments, we discovered that auditory processing of pitch variations in tones, as a bottom-up effect, elicited greater right-hemisphere activation, whereas linguistic processing of lexical tones, as a top-down effect, elicited greater left-hemisphere activation. We also found that both types of processing co-occurred in both the early (around 200 ms) and late (around 300–500 ms) time windows, which supports a parallel model of lexical tone perception. Unlike the previous view that language processing is special and performed by dedicated neural circuitry, our study shows that language processing can be decomposed into general cognitive functions (e.g., sensory and memory) and can share neural resources with these functions.
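
The windowed lateralization analysis behind such ERP comparisons can be illustrated with a minimal sketch on simulated waveforms; the sampling rate, window bounds, and amplitudes are all assumptions, not the paper's data:

```python
# Illustrative sketch only: comparing mean ERP amplitude between
# hemispheres within a fixed post-stimulus time window, on simulated
# waveforms (not the study's recordings).
import numpy as np

fs = 500                       # sampling rate in Hz (assumed)
t = np.arange(0, 0.6, 1 / fs)  # 0-600 ms after stimulus onset

rng = np.random.default_rng(2)
# Simulated ERPs with a component peaking near 200 ms; the right
# hemisphere is given the larger response, as for bottom-up pitch
# processing in the paper's account
right = 2.0 * np.exp(-((t - 0.2) ** 2) / 0.002) + rng.normal(0, 0.1, t.size)
left = 1.0 * np.exp(-((t - 0.2) ** 2) / 0.002) + rng.normal(0, 0.1, t.size)

def window_mean(x, lo, hi):
    """Mean amplitude within the [lo, hi] second window."""
    mask = (t >= lo) & (t <= hi)
    return x[mask].mean()

# Early window around the ~200 ms component
early_right = window_mean(right, 0.15, 0.25)
early_left = window_mean(left, 0.15, 0.25)
print(early_right > early_left)  # right-lateralized early component
```

The same windowed comparison applied in a late 300–500 ms window, across the three paradigms, is what lets the paper place top-down and bottom-up effects in both time ranges.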

    Explaining the PENTA model: a reply to Arvaniti and Ladd

    This paper presents an overview of the Parallel Encoding and Target Approximation (PENTA) model of speech prosody, in response to an extensive critique by Arvaniti & Ladd (2009). PENTA is a framework for conceptually and computationally linking communicative meanings to fine-grained prosodic details, based on an articulatory-functional view of speech. Target Approximation simulates the articulatory realisation of underlying pitch targets – the prosodic primitives in the framework. Parallel Encoding provides an operational scheme that enables simultaneous encoding of multiple communicative functions. We also outline how PENTA can be computationally tested with a set of software tools. With the help of one of the tools, we offer a PENTA-based hypothetical account of the Greek intonational patterns reported by Arvaniti & Ladd, showing how it is possible to predict the prosodic shapes of an utterance based on the lexical and postlexical meanings it conveys.
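
The Target Approximation idea can be sketched in a heavily simplified form: within each syllable, F0 moves asymptotically toward an underlying pitch target. This first-order version is only illustrative; the actual qTA implementation in the PENTA framework uses a third-order critically damped system, and every parameter value here is an assumption:

```python
# Heavily simplified Target Approximation sketch: F0 exponentially
# approaches each syllable's (static) pitch target. Illustrative only;
# all parameter values are assumptions.
import numpy as np

def approximate_targets(targets, syllable_dur=0.2, rate=30.0,
                        fs=200, f0_start=100.0):
    """Build an F0 contour by approaching each syllable's target in turn.

    targets: underlying pitch targets in Hz, one per syllable
             (assumed static; dynamic targets would also have a slope).
    rate: approach rate in 1/s; higher means faster convergence.
    """
    f0 = [f0_start]
    dt = 1.0 / fs
    for target in targets:
        for _ in range(int(syllable_dur * fs)):
            # First-order dynamics: close a fraction of the remaining
            # distance to the target at each time step
            f0.append(f0[-1] + rate * (target - f0[-1]) * dt)
    return np.array(f0)

contour = approximate_targets([120.0, 90.0, 140.0])
# Each syllable starts from where the previous one left off, so targets
# are approached, not jumped to, and the contour is continuous
print(contour[0], round(contour[-1], 1))
```

The carry-over of each syllable's final F0 into the next syllable is what lets this style of model derive surface contours from a sequence of underlying targets, the core claim the paper defends against the critique.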