
    Age effects in first language attrition: speech perception by Korean-English bilinguals

    This article has been awarded Open Materials and Open Data badges. All materials and data are publicly accessible via the Open Science Framework at https://osf.io/B2478 and at https://osf.io/G4C7Z. Learn more about the Open Practices badges from the Center for Open Science: https://osf.io/tvyxz/wiki.

    This study investigated how bilinguals’ perception of their first language (L1) differs according to age of reduced contact with L1 after immersion in a second language (L2). Twenty-one L1 Korean-L2 English bilinguals in the United States, ranging in age of reduced contact from 3 to 15 years, and 17 control participants in Korea were tested perceptually on three L1 contrasts differing in similarity to L2 contrasts. Compared to control participants, bilinguals were less accurate on L1-specific contrasts, and their accuracy was significantly correlated with age of reduced contact, an effect most pronounced for the contrast most dissimilar to L2. These findings suggest that the earlier bilinguals are extensively exposed to L2, the less likely they are to perceive L1 sounds accurately. However, this relationship is modulated by crosslinguistic similarity, and a turning point in L2 acquisition and L1 attrition of phonology appears to occur at around age 12.

    This research was supported by funding from the Ph.D. Program in Second Language Acquisition at the University of Maryland. The funding source was not involved in the design of the study; in the collection, analysis, and interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication. We thank Dr. Youngkyu Kim at Ewha Womans University for his substantial support and Ms. Irene Jieun Ahn (formerly at Ewha Womans University and currently at Michigan State University) for her help during data collection in Korea.
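    As a rough, hypothetical sketch of the per-contrast analysis this abstract describes (correlating L1 perception accuracy with age of reduced contact), the snippet below runs a Pearson correlation per contrast; the file name and column names are illustrative assumptions, not the authors' materials on OSF.

```python
# Hypothetical illustration: correlate L1 perception accuracy with age of
# reduced contact, separately for each of the three L1 contrasts.
# The file and column names are assumptions, not the study's actual data.
import pandas as pd
from scipy.stats import pearsonr

data = pd.read_csv("bilingual_perception.csv")  # assumed columns: participant,
                                                # contrast, age_of_reduced_contact,
                                                # accuracy

for contrast, group in data.groupby("contrast"):
    r, p = pearsonr(group["age_of_reduced_contact"], group["accuracy"])
    print(f"{contrast}: r = {r:.2f}, p = {p:.3f}")
```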

    Can monolinguals be like bilinguals? Evidence from dialect switching

    Bilinguals rely on cognitive control mechanisms like selective activation and inhibition of lexical entries to prevent intrusions from the non-target language. We present cross-linguistic evidence that these mechanisms also operate in bidialectals. Thirty-two native German speakers who sometimes use the Öcher Platt dialect, and thirty-two native English speakers who sometimes use the Dundonian Scots dialect, completed a dialect-switching task. Naming latencies were higher for switch than for non-switch trials, and lower for cognate compared to non-cognate nouns. Switch costs were symmetrical, regardless of whether participants actively used the dialect or not. In contrast, sixteen monodialectal English speakers, who performed the dialect-switching task after being trained on the Dundonian words, showed asymmetrical switch costs, with longer latencies when switching back into Standard English. These results are reminiscent of findings for balanced vs. unbalanced bilinguals, and suggest that monolingual dialect speakers can recruit control mechanisms in similar ways to bilinguals.
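    A minimal sketch of how switch costs of the kind reported here are commonly computed from trial-level naming latencies: mean latency on switch trials minus mean latency on non-switch trials, per target variety. The file, column names, and trial-type labels are assumptions for illustration, not the authors' analysis.

```python
# Illustrative switch-cost computation; file and column names are assumed.
import pandas as pd

trials = pd.read_csv("dialect_switching_trials.csv")  # assumed columns: rt_ms,
                                                      # trial_type, target_variety

# Mean naming latency per target variety and trial type (switch vs. non_switch).
means = trials.groupby(["target_variety", "trial_type"])["rt_ms"].mean().unstack()

# Switch cost per variety; a large difference between varieties would indicate
# asymmetrical switch costs, as found for the trained monodialectal group.
switch_cost = means["switch"] - means["non_switch"]
print(switch_cost)
print("asymmetry:", switch_cost.max() - switch_cost.min())
```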

    Current Debates in the Theory and Teaching of English L2 Pronunciation

    Ironically, the single concept that appears to be universal in the field of English pronunciation research and instruction, its common denominator as it were, is diversity. Research and classroom practice have both shown convincingly that explicit training can lead to improvements in a learner’s clarity of speech, but it seems that everything else is open for debate. Variability in opinions begins with different interpretations of basic concepts: individual speech sounds, syllables, phrases and utterances. Correctly identifying research foci, and by extension educational priorities for classroom instruction, also divides English L2 pronunciation professionals. Models are yet another area of contention – whether to focus on traditional pronunciation points of reference, e.g. features of Received Pronunciation or General American, or to concentrate instead on interactions where no native speaker is present, as proposed by the English as an International Language (EIL) framework. Next, dispelling doubts about the effectiveness of pronunciation instruction can be a challenging endeavour when progress often manifests in small increments which require a significant investment of time and effort. Finally, the decision to incorporate digital technology and the Internet into the pronunciation classroom remains a dividing line between enthusiasts and those who call CALL (Computer-Assisted Language Learning) a fad that will soon pass. The purpose of this paper is to examine these hotly debated issues, while acknowledging that its emphasis on depth may come at the expense of breadth. Its scope allows it to touch upon only the most significant disputes, those that bridge research theory with English L2 pronunciation classroom practice.

    Keywords: English L2 pronunciation instruction; curriculum and materials design; pronunciation teaching effectiveness

    A summary of research relating to reading in the intermediate grades

    Purpose: To develop and evaluate a method of quick perception with geography vocabulary to determine whether (a) quick perception accelerates growth in comprehension, (b) it affects speed of reading, and (c) it improves reading ability. Materials used: (1) Vocabulary selected from: a) Atwood, The Americas, b) McConnel, Living in the Americas, c) Smith, World Folk. (2) Durrell-Sullivan Achievement Tests, Intermediate Forms A and B. (3) Oral Reading Tests for Speed from the "Durrell Analysis of Reading Difficulty". (4) Silent Reading and Vocabulary Inventory Tests constructed by the writer. (5) Lantern slide projector; screen; words and phrases typed on amber cellophane, faced with red carbon paper, enclosed in glass slides, hinged with tape at the top [TRUNCATED]

    Quantitative Social Dialectology: Explaining Linguistic Variation Geographically and Socially

    In this study we examine linguistic variation and its dependence on both social and geographic factors. We follow dialectometry in applying a quantitative methodology and focusing on dialect distances, and social dialectology in the choice of factors we examine in building a model to predict word pronunciation distances from the standard Dutch language to 424 Dutch dialects. We combine linear mixed-effects regression modeling with generalized additive modeling to predict the pronunciation distance of 559 words. Although geographical position is the dominant predictor, several other factors emerged as significant. The model predicts a greater distance from the standard for smaller communities, for communities with a higher average age, for nouns (as contrasted with verbs and adjectives), for more frequent words, and for words with relatively many vowels. The impact of the demographic variables, however, varied from word to word. For a majority of words, larger, richer and younger communities are moving towards the standard. For a smaller minority of words, larger, richer and younger communities emerge as driving a change away from the standard. Similarly, the strength of the effects of word frequency and word category varied geographically. The peripheral areas of the Netherlands showed a greater distance from the standard for nouns (as opposed to verbs and adjectives) as well as for high-frequency words, compared to the more central areas. Our findings indicate that changes in pronunciation have been spreading (in particular for low-frequency words) from the Hollandic center of economic power to the peripheral areas of the country, meeting resistance that is stronger wherever, for well-documented historical reasons, the political influence of Holland was reduced. Our results are also consistent with the theory of lexical diffusion, in that distances from the Hollandic norm vary systematically and predictably on a word-by-word basis.
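    As a simplified illustration of the modeling strategy described above, the sketch below fits only the linear mixed-effects part (pronunciation distance predicted from community and word properties, with a random intercept per word); the generalized additive geographic smooth is omitted, and the file and column names are assumptions rather than the authors' data.

```python
# Simplified sketch: mixed-effects regression of pronunciation distance on
# community and word properties, with a random intercept per word. The GAM
# component (geographic smooth) used in the study is omitted. Columns assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dialect_distances.csv")  # assumed columns: distance,
                                           # community_size, mean_age,
                                           # word_freq, pos, word

model = smf.mixedlm(
    "distance ~ community_size + mean_age + word_freq + C(pos)",
    data=df,
    groups=df["word"],
)
print(model.fit().summary())
```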

    “Students’ perceptions about the use of lyrics training to enhance listening comprehension”

    Lyrics Training is a technological tool used in education that can support motivation in the learning process and the development of students' skills, especially listening comprehension, a fundamental skill in foreign language acquisition. This qualitative study aims to analyze students’ perceptions of the use of Lyrics Training to enhance listening comprehension. Data were collected by means of an eight-question survey and analyzed through thematic analysis. Participants were eighteen students from the eighth semester of English at the Technical University of Cotopaxi during the April - August 2022 academic term. The main findings, based on the participants' opinions, show that Lyrics Training had a positive influence and enabled them to develop their listening comprehension. Among the benefits, participants highlighted the accessibility of the website, which helped to improve spelling and pronunciation, and the enjoyment of learning, which allowed them to feel motivated and increase their vocabulary knowledge. On the other hand, they also faced some difficulties due to their lack of understanding of colloquial words in the songs and technological problems that impeded the listening process. Overall, Lyrics Training is considered a useful technological tool for learning and developing skills in a foreign language such as English. Based on these findings, it is suggested that teachers adapt Lyrics Training in their classroom lessons to develop listening comprehension activities that help students achieve satisfactory performance.

    Representation of Time-Varying Stimuli by a Network Exhibiting Oscillations on a Faster Time Scale

    Sensory processing is associated with gamma frequency oscillations (30–80 Hz) in sensory cortices. This raises the question whether gamma oscillations can be directly involved in the representation of time-varying stimuli, including stimuli whose time scale is longer than a gamma cycle. We are interested in the ability of the system to reliably distinguish different stimuli while being robust to stimulus variations such as uniform time-warp. We address this issue with a dynamical model of spiking neurons and study the response to an asymmetric sawtooth input current over a range of shape parameters. These parameters describe how fast the input current rises and falls in time. Our network consists of inhibitory and excitatory populations that are sufficient for generating oscillations in the gamma range. The oscillation period is about one-third of the stimulus duration. Embedded in this network are a subpopulation of excitatory cells that respond to the sawtooth stimulus and a subpopulation of cells that respond to an onset cue. The intrinsic gamma oscillations generate a temporally sparse code for the external stimuli. In this code, an excitatory cell may fire a single spike during a gamma cycle, depending on its tuning properties and on the temporal structure of the specific input; the identity of the stimulus is coded by the list of excitatory cells that fire during each cycle. We quantify the properties of this representation in a series of simulations and show that the sparseness of the code makes it robust to uniform warping of the time scale. We find that resetting of the oscillation phase at stimulus onset is important for a reliable representation of the stimulus and that there is a tradeoff between the resolution of the neural representation of the stimulus and robustness to time-warp.

    Author Summary: Sensory processing of time-varying stimuli, such as speech, is associated with high-frequency oscillatory cortical activity, the functional significance of which is still unknown. One possibility is that the oscillations are part of a stimulus-encoding mechanism. Here, we investigate a computational model of such a mechanism, a spiking neuronal network whose intrinsic oscillations interact with external input (waveforms simulating short speech segments in a single acoustic frequency band) to encode stimuli that extend over a time interval longer than the oscillation's period. The network implements a temporally sparse encoding, whose robustness to time warping and neuronal noise we quantify. To our knowledge, this study is the first to demonstrate that a biophysically plausible model of oscillations occurring in the processing of auditory input may generate a representation of signals that span multiple oscillation cycles.

    Funding: National Science Foundation (DMS-0211505); Burroughs Wellcome Fund; U.S. Air Force Office of Scientific Research.
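    The toy sketch below illustrates, under stated assumptions, the stimulus and coding idea in this abstract: an asymmetric sawtooth whose shape parameter controls how fast it rises versus falls, a uniformly time-warped copy, and a crude cycle-by-cycle code (which of a bank of threshold units the input exceeds within each gamma cycle) compared between the two. It is not the paper's spiking excitatory-inhibitory network; all parameters and the thresholding rule are assumptions.

```python
# Toy illustration (not the paper's biophysical model): asymmetric sawtooth
# stimulus, a uniformly time-warped copy, and a gamma-cycle-binned code.
import numpy as np

def sawtooth(duration, rise_frac, dt=1e-4, amplitude=1.0):
    """Asymmetric sawtooth: rises for rise_frac of the duration, then falls."""
    t = np.arange(0.0, duration, dt)
    rise_time = rise_frac * duration
    return np.where(t < rise_time,
                    amplitude * t / rise_time,
                    amplitude * (duration - t) / (duration - rise_time))

def cycle_code(stimulus, dt, gamma_period, thresholds):
    """Per gamma cycle, record which threshold units the input exceeds."""
    samples = int(round(gamma_period / dt))
    return [frozenset(np.flatnonzero(
                stimulus[c * samples:(c + 1) * samples].max() > thresholds))
            for c in range(len(stimulus) // samples)]

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 1.0

dt = 1e-4
gamma_period = 0.025                     # ~40 Hz; about a third of the stimulus
thresholds = np.linspace(0.1, 0.9, 20)   # one "unit" per threshold

stim = sawtooth(duration=0.075, rise_frac=0.2, dt=dt)
warped = sawtooth(duration=0.075 * 1.3, rise_frac=0.2, dt=dt)  # uniform time-warp

code_a = cycle_code(stim, dt, gamma_period, thresholds)
code_b = cycle_code(warped, dt, gamma_period, thresholds)
for i, (a, b) in enumerate(zip(code_a, code_b)):
    print(f"cycle {i}: set overlap = {jaccard(a, b):.2f}")
```

    Even with the gamma period held fixed, the per-cycle sets of active units in this toy version overlap substantially between the original and warped stimuli, which gives a flavour of the robustness the paper quantifies with its full spiking network rather than threshold units.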