Exploring the interplay of mode of discourse and proficiency level in ESL writing performance
Recent theory and practice in rhetoric and discourse suggest that writers require different skills and strategies when writing for different purposes and in different genres and modes (Kinneavy, 1972; Carrell and Connor, 1991). The importance of taking these varied skills and forms of writing into account is recognised in teaching (e.g. Scarcella and Oxford, 1992) and in the assessment of writing (e.g. Odell and Cooper, 1980). Odell and Cooper, for instance, argued that no claims about writing ability can be made until students' performance on a variety of writing tasks has been examined. The choice of writing task(s) to include in a test is therefore crucial, since a task is of little use if it does not support
generalisations about an individual's writing ability. This paper presents the findings of a study on the effects of mode of discourse on L2 writing performance, and on the interplay between a learner variable (proficiency level) and a task variable (mode of discourse), amongst Malaysian upper secondary ESL learners. The findings
provide evidence for the need to re-examine issues of reliability and validity in the test practice of manipulating variables in the design of assessment tasks to evaluate ESL
writing performance. Given the status and complexity of the writing skill, it stands to reason that studies in this area will continue to shed light on how best the construct
can be understood, taught, and tested, so that language learners have a fair chance to exhibit their true ability and be reliably reported on.
Examining Scientific Writing Styles from the Perspective of Linguistic Complexity
Publishing articles in high-impact English journals is difficult for scholars
around the world, especially for non-native English-speaking scholars (NNESs),
most of whom struggle with proficiency in English. In order to uncover the
differences in English scientific writing between native English-speaking
scholars (NESs) and NNESs, we collected a large-scale data set containing more
than 150,000 full-text articles published in PLoS between 2006 and 2015. We
divided these articles into three groups according to the ethnic backgrounds of
the first and corresponding authors, obtained by Ethnea, and examined the
scientific writing styles in English from a two-fold perspective of linguistic
complexity: (1) syntactic complexity, including measurements of sentence length
and sentence complexity; and (2) lexical complexity, including measurements of
lexical diversity, lexical density, and lexical sophistication. The
observations suggest only marginal differences between the groups in syntactic and
lexical complexity.
Comment: 6 figures
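The lexical-complexity measures named above can be sketched in a few lines. The tokenization and the tiny stopword list below are toy assumptions (the study's actual measures, such as lexical sophistication, require reference word lists and POS tagging):

```python
import re

# Toy content-word check: anything not in a small stopword list counts as a
# content word. A real implementation would use POS tagging instead.
STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "are", "we", "by"}

def complexity_metrics(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[a-z]+", text.lower())
    types = set(tokens)
    content = [t for t in tokens if t not in STOPWORDS]
    return {
        "mean_sentence_length": len(tokens) / len(sentences),  # syntactic proxy
        "lexical_diversity": len(types) / len(tokens),         # type-token ratio
        "lexical_density": len(content) / len(tokens),         # content-word ratio
    }

m = complexity_metrics("We collected the articles. We examined the writing styles.")
```

Comparing such scores between author groups is then a matter of aggregating per-article metrics, which is where the marginal differences reported above would (or would not) show up.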
Automatic assessment of spoken language proficiency of non-native children
This paper describes technology developed to automatically grade Italian
students (ages 9-16) on their English and German spoken language proficiency.
The students' spoken answers are first transcribed by an automatic speech
recognition (ASR) system and then scored using a feedforward neural network
(NN) that processes features extracted from the automatic transcriptions.
In-domain acoustic models, employing deep neural networks (DNNs), are derived
by adapting the parameters of an original out-of-domain DNN.
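The grading pipeline described above (ASR transcription, feature extraction, feedforward scoring) can be sketched as follows. The feature names, network size, and random weights are illustrative assumptions, not the system's actual configuration:

```python
import random

# Hypothetical features extracted from an ASR transcription of one answer:
# e.g. words per second, fraction of in-vocabulary words, mean acoustic score.
random.seed(0)

def feedforward_score(features, w1, b1, w2, b2):
    """One hidden layer with ReLU; a single scalar proficiency score out."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

n_in, n_hid = 3, 4  # 3 input features, 4 hidden units (sizes are invented)
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = 0.0

score = feedforward_score([1.8, 0.92, -2.3], w1, b1, w2, b2)
```

In the actual system the weights would be trained against human-assigned grades rather than sampled at random.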
Experiments with Universal CEFR Classification
The Common European Framework of Reference (CEFR) guidelines describe
language proficiency of learners on a scale of 6 levels. While the description
of CEFR guidelines is generic across languages, the development of automated
proficiency classification systems for different languages follows different
approaches. In this paper, we explore universal CEFR classification using
domain-specific and domain-agnostic, theory-guided as well as data-driven
features. We report the results of our preliminary experiments in monolingual,
cross-lingual, and multilingual classification with three languages: German,
Czech, and Italian. Our results show that both monolingual and multilingual
models achieve similar performance, and cross-lingual classification yields
lower, but comparable, results to monolingual classification.
Comment: to appear in the proceedings of The 13th Workshop on Innovative Use of NLP for Building Educational Applications
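As a toy illustration of domain-agnostic, data-driven CEFR classification, the sketch below maps a learner text to two surface features and assigns the nearest of six level centroids. The features and centroid values are invented for illustration and do not reflect the paper's actual features or models:

```python
import re

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def features(text):
    """Two language-agnostic surface features of a learner text."""
    words = re.findall(r"\w+", text)
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return (sum(map(len, words)) / len(words),  # mean word length
            len(words) / len(sents))            # mean sentence length

# Invented centroids: (mean word length, mean sentence length) per level,
# growing with proficiency. A real system would learn these from data.
CENTROIDS = {lvl: (3.5 + 0.4 * i, 6.0 + 3.0 * i) for i, lvl in enumerate(LEVELS)}

def classify(text):
    f = features(text)
    return min(LEVELS,
               key=lambda l: sum((a - b) ** 2 for a, b in zip(f, CENTROIDS[l])))

level = classify("I like my dog. It is small.")
```

Because such surface features need no language-specific resources, the same classifier can in principle be trained monolingually, cross-lingually, or multilingually, which is the comparison the paper reports.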
Self-imitating Feedback Generation Using GAN for Computer-Assisted Pronunciation Training
Self-imitating feedback is an effective and learner-friendly method for
non-native learners in Computer-Assisted Pronunciation Training. Acoustic
characteristics of native utterances are extracted, transplanted onto the
learner's own speech input, and given back to the learner as corrective
feedback. Previous work focused on speech conversion using prosodic
transplantation techniques based on the PSOLA algorithm. Motivated by the visual
differences found in spectrograms of native and non-native speech, we
investigated applying a GAN to generate self-imitating feedback, exploiting the
generator's mapping ability learned through adversarial training. Because this mapping is
highly under-constrained, we also adopt cycle consistency loss to encourage the
output to preserve the global structure, which is shared by native and
non-native utterances. Trained on 97,200 spectrogram images of short utterances
produced by native and non-native speakers of Korean, the generator is able to
successfully transform the non-native spectrogram input to a spectrogram with
properties of self-imitating feedback. Furthermore, the transformed spectrogram
shows segmental corrections that cannot be obtained by prosodic
transplantation. A perceptual test comparing the self-imitating and corrective
abilities of our method with the baseline PSOLA method shows that the
generative approach with cycle consistency loss is promising.
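The cycle-consistency idea can be sketched as follows: with a generator G mapping non-native spectrograms toward native-like ones and an inverse generator F, the cycle loss penalizes the L1 distance between an input and its round-trip reconstruction, encouraging the global spectrogram structure to survive the transformation. The generators below are stand-in functions, not trained networks:

```python
def l1(a, b):
    """Mean absolute difference between two flattened spectrograms."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(x, G, F):
    """L_cyc = |F(G(x)) - x|_1 for a spectrogram given as a list of floats."""
    return l1(F(G(x)), x)

# Stand-in generators: F is the exact inverse of G, so the round-trip
# reconstruction matches the input and the cycle loss is (near) zero.
G = lambda x: [v + 0.1 for v in x]  # "non-native -> native-like" stand-in
F = lambda x: [v - 0.1 for v in x]  # inverse mapping stand-in
spec = [0.0, 0.5, 1.0, 0.25]
loss = cycle_consistency_loss(spec, G, F)
```

In training, this term is added to the adversarial losses of both generators, so that outputs look native-like to the discriminator while remaining reconstructible back to the learner's input.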