Deconstructing comprehensibility: identifying the linguistic influences on listeners' L2 comprehensibility ratings
Comprehensibility, a major concept in second language (L2) pronunciation research that denotes listeners' perceptions of how easily they understand L2 speech, is central to interlocutors' communicative success in real-world contexts. Although comprehensibility has been modeled in several L2 oral proficiency scales, such as the Test of English as a Foreign Language (TOEFL) and the International English Language Testing System (IELTS), shortcomings of existing scales (e.g., vague descriptors) reflect limited empirical evidence as to which linguistic aspects influence listeners' judgments of L2 comprehensibility at different ability levels. To address this gap, the present study used a mixed-methods approach to gain a deeper understanding of the linguistic aspects underlying listeners' L2 comprehensibility ratings. First, speech samples of 40 native French learners of English were analyzed using 19 quantitative speech measures spanning segmental, suprasegmental, fluency, lexical, grammatical, and discourse-level variables. These measures were then correlated with 60 native English listeners' scalar judgments of the speakers' comprehensibility. Next, three English as a second language (ESL) teachers provided introspective reports on the linguistic aspects of speech they attended to when judging L2 comprehensibility. Following data triangulation, five speech measures were identified that clearly distinguished between L2 learners at different comprehensibility levels: lexical richness and fluency measures differentiated between low-level learners; grammatical and discourse-level measures differentiated between high-level learners; and word stress errors discriminated between learners at all levels.
Speech intelligibility and prosody production in children with cochlear implants
Objectives: The purpose of the current study was to examine the relation between speech intelligibility and prosody production in children who use cochlear implants. Methods: The Beginner's Intelligibility Test (BIT) and Prosodic Utterance Production (PUP) task were administered to 15 children who use cochlear implants and 10 children with normal hearing. Adult listeners with normal hearing judged the intelligibility of the words in the BIT sentences, identified each PUP sentence as one of four grammatical or emotional moods (declarative, interrogative, happy, or sad), and rated the PUP sentences according to how well they thought the child conveyed the designated mood. Results: Percent correct scores were higher for intelligibility than for prosody and higher for children with normal hearing than for children with cochlear implants. Declarative sentences were most readily identified and received the highest ratings from adult listeners; interrogative sentences were least readily identified and received the lowest ratings. Correlations between intelligibility and all mood identification and rating scores except declarative were not significant. Discussion: The findings suggest that the development of speech intelligibility progresses ahead of prosody in both children with cochlear implants and children with normal hearing; however, children with normal hearing still perform better than children with cochlear implants on measures of intelligibility and prosody even after accounting for hearing age. Problems with interrogative intonation may be related to more general restrictions on rising intonation.
An exploratory study of foreign accent and phonological awareness in Korean learners of English
Communication in a second or additional language has become essential in the globalized world. However, acquiring a second language (L2) after a critical period is widely acknowledged to be challenging (Lenneberg, 1967). Late learners rarely reach a nativelike level in the L2, particularly in its pronunciation, and their incomplete phonological acquisition is manifested as a foreign accent, a common and persistent feature of otherwise fluent L2 speech. Although foreign-accented speech is widespread, it has been a target of social constraints in L2-speaking communities, leading many learners and instructors to seek ways to reduce foreign accents. Accordingly, research on L2 speech has continually examined learner-external and learner-internal factors underlying the occurrence of foreign accents, as well as the nonnative speech characteristics underlying judgments of accent degree. The current study aimed to expand the understanding of the characteristics and judgments of foreign accents by investigating phonological awareness, a construct pertinent to learners' phonological knowledge that has received little attention in research on foreign accents.
The current study was exploratory, non-experimental research that targeted 40 adults with Korean-accented English living in the United States. The study first examined how 23 raters speaking American English as their native language detect, perceive, describe, and rate Korean-accented English. Through qualitative and quantitative analyses of the accent perception data, the study identified various phonological and phonetic deviations from nativelike sounds, which largely result from the influence of the first language (Korean) on the L2 (English). The study then probed the relationship between foreign accents and learners' awareness of the phonological system of the L2, which was measured using production, perception, and verbalization tasks that tapped into knowledge of L2 phonology. The study found a significant inverse relationship between the degree of foreign accent and phonological awareness, particularly implicit knowledge of L2 segmentals. Further in-depth analyses revealed that explicit knowledge of L2 phonology alone was not sufficient for targetlike pronunciation. Findings suggest that L2 speakers experience varying degrees of difficulty in perceiving and producing different L2 segmentals, possibly resulting in foreign-accented speech.
Comb or coat: The role of intonation in online reference resolution in a second language
1 Introduction
In spoken sentence processing, listeners do not wait until the end of a sentence to decipher the message being conveyed. Rather, they make predictions about the most plausible interpretation at every possible point in the auditory signal, on the basis of many kinds of linguistic information (e.g., Eberhard et al. 1995; Altmann and Kamide 1999, 2007). Intonation is one such kind of linguistic information that is used efficiently in spoken sentence processing. The evidence comes primarily from recent work on online reference resolution conducted in the visual-world eyetracking paradigm (e.g., Tanenhaus et al. 1995). In this paradigm, listeners are shown a visual scene containing a number of objects and listen to one or two short sentences about the scene. They are asked either to inspect the visual scene while listening or to carry out the action described in the sentence(s) (e.g., 'Touch the blue square'). Listeners' eye movements to each object in the scene are monitored and time-locked to pre-defined time points in the auditory stimulus. Their predictions about the upcoming referent, and the sources of those predictions in the auditory signal, are examined by analysing fixations to the relevant objects in the visual scene before the acoustic information identifying the referent is available.
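The time-locking step described above can be sketched in a few lines. This is a hypothetical illustration with simulated data: the sampling rate, onset time, window size, and object coding are assumptions for demonstration, not any study's actual parameters.

```python
import numpy as np

# Illustrative sketch of time-locking fixations to the auditory signal:
# compute the proportion of looks to the target object in a window
# *before* the referent noun's acoustic onset (anticipatory fixations).
samples_hz = 500           # assumed eye-tracker sampling rate
referent_onset_ms = 1200   # assumed onset of the referent word in audio

# Simulated fixation stream over 2 s: which object is fixated at each
# sample (0 = target, 1 = competitor, 2 = distractor).
rng = np.random.default_rng(1)
fixations = rng.choice([0, 1, 2], size=2 * samples_hz, p=[0.5, 0.3, 0.2])
times_ms = np.arange(fixations.size) / samples_hz * 1000

# Analysis window: the 200 ms immediately preceding referent onset,
# i.e., before acoustic information about the referent is available.
window = (times_ms >= referent_onset_ms - 200) & (times_ms < referent_onset_ms)
prop_target = np.mean(fixations[window] == 0)
print(f"proportion of looks to target before onset: {prop_target:.2f}")
```

A proportion of target looks reliably above chance in such a pre-onset window is the kind of evidence taken to show that listeners predict the referent from cues like intonation.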