
    Using parse features for preposition selection and error detection

    We evaluate the effect of adding parse features to a leading model of preposition usage. Results show a significant improvement in the preposition selection task on native-speaker text and a modest increase in precision and recall in an ESL error detection task. Analysis of the parser output indicates that it is robust enough in the face of noisy non-native writing to extract useful information.
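    To make the idea concrete, here is a minimal sketch of what parse-derived features for preposition tokens might look like, using the spaCy dependency parser. The tooling and feature set are illustrative assumptions, not the paper's actual parser or model.

        # Sketch: parse-derived features for preposition tokens, using spaCy.
        # The feature set is illustrative, not the paper's actual features.
        # Assumes the en_core_web_sm model has been downloaded.
        import spacy

        nlp = spacy.load("en_core_web_sm")

        def preposition_parse_features(sentence):
            """Return one feature dict per preposition in the sentence."""
            feats = []
            for tok in nlp(sentence):
                if tok.dep_ == "prep":  # token parsed as a preposition
                    objs = [c for c in tok.children if c.dep_ == "pobj"]
                    feats.append({
                        "prep": tok.lower_,
                        "head_lemma": tok.head.lemma_,  # word the PP attaches to
                        "head_pos": tok.head.pos_,
                        "obj_lemma": objs[0].lemma_ if objs else None,
                    })
            return feats

        print(preposition_parse_features("She relies on her notes during the exam."))

    Features like the attachment head and the preposition's object are the kind of information a parser can supply beyond the local word windows used by surface models.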

    Morphological And Syntactic Errors Found In English Composition Written By The Students Of Daarut Taqwa Islamic Boarding School Klaten

    Error analysis is a type of linguistic analysis that focuses on the errors learners make. It consists of a comparison between the errors made in the target language and the target language itself. This research focuses on errors in narrative English writing by students of grade five of the KMI Islamic boarding school in Klaten (equivalent to grade eleven of senior high school) in order to classify their errors. The errors found in this research are classified into morphological and syntactic errors resulting from the students' efforts to learn the target language. The findings show that the students make more errors in syntax than in morphology, with three sources of error: language transfer, strategies of second language learning, and overgeneralization. Overgeneralization is the largest source of errors, followed by strategies of second language learning, with language transfer the least frequent. It is hoped that these findings will prompt other researchers to pursue further discussion and research on learner errors with a broader scope and different subjects.

    GenERRate: generating errors for use in grammatical error detection

    This paper explores the issue of automatically generated ungrammatical data and its use in error detection, with a focus on the task of classifying a sentence as grammatical or ungrammatical. We present an error generation tool called GenERRate and show how it can be used to improve the performance of a classifier on learner data. We also describe initial attempts to replicate Cambridge Learner Corpus errors using GenERRate.
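    GenERRate itself is driven by an error-specification file; the sketch below only illustrates the general idea of template-driven error generation (deletion, substitution from a confusion set, and word movement). The operations and confusion set are hypothetical, not GenERRate's implementation.

        # Sketch of template-driven error generation in the spirit of GenERRate.
        # The operations and confusion set are illustrative, not GenERRate's own.
        import random

        def delete_word(tokens):
            """Drop one random token (e.g. simulating a missing determiner)."""
            i = random.randrange(len(tokens))
            return tokens[:i] + tokens[i + 1:]

        def substitute_word(tokens, confusion_set=("in", "on", "at")):
            """Swap a token from a confusion set for another member of the set."""
            idxs = [i for i, t in enumerate(tokens) if t in confusion_set]
            if not idxs:
                return tokens
            i = random.choice(idxs)
            repl = random.choice([w for w in confusion_set if w != tokens[i]])
            return tokens[:i] + [repl] + tokens[i + 1:]

        def move_word(tokens):
            """Move one token to a new position (simulating a word-order error)."""
            i = random.randrange(len(tokens))
            rest = tokens[:i] + tokens[i + 1:]
            j = random.randrange(len(rest) + 1)
            return rest[:j] + [tokens[i]] + rest[j:]

        grammatical = "She is waiting at the station".split()
        for op in (delete_word, substitute_word, move_word):
            print(op.__name__, "->", " ".join(op(grammatical)))

    Applying such operations to grammatical corpus sentences yields the negative examples needed to train a grammatical/ungrammatical classifier.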

    The accuracy of computer-assisted feedback and students’ responses to it

    Various researchers in second language acquisition have argued for the effectiveness of immediate rather than delayed feedback. In writing, truly immediate feedback is impractical, but computer-assisted feedback provides a quick way of providing feedback that also reduces the teacher's workload. We explored the accuracy of feedback from Criterion®, a program developed by Educational Testing Service, and students' responses to it. Thirty-two students received feedback from Criterion on four essays over a semester, with 16 receiving the feedback immediately and 16 receiving it several days after writing their essays. Results indicated that 75% of the error codes were correct, but that Criterion missed many language errors. Students responded to the correct error codes 73% of the time and responded to more of the codes over the course of the semester, while the condition (delayed versus immediate) affected neither their response rates nor their accuracy on the first drafts. Although we cannot support claims that immediate feedback is more helpful, we believe that, with proper training, Criterion can help students correct certain aspects of language.

    Systems Combination for Grammatical Error Correction

    Master's thesis (Master of Science).

    SAUDI EFL LEARNERS' KNOWLEDGE AND USE OF ENGLISH PREPOSITIONAL VERBS IN ACADEMIC WRITING

    Prepositional verbs are essential for English as a foreign language (EFL) and English as a second language (ESL) learners in academic writing. However, most learners, regardless of their proficiency, encounter difficulties using these verbs, and there is a lack of research on these difficulties. This study sought to describe, analyze, and understand Saudi EFL learners' knowledge and use of English prepositional verbs in academic writing. The study also assesses the relevant teaching contexts and the reasons behind common errors. The study utilized a mixed-methods approach, with data collected from a cloze test, a multiple-choice test, and semi-structured interviews. The two tests were administered to 46 fourth-year undergraduate Saudi EFL students (23 male, 23 female). The interviews were conducted with 20 participants chosen on the basis of their test scores (seven with low scores, seven who scored in the middle, and six with high scores). The findings revealed that Saudi EFL learners had extremely low knowledge of and poor performance using English prepositional verbs, committing frequent errors because of L1 interference and other issues. This study offers recommendations for developing EFL teaching methods and curricula to address this problem. One of the major suggestions is to encourage teachers to learn more about these verbs and to expose students to more authentic input.

    Judging grammaticality: experiments in sentence classification

    A classifier capable of distinguishing a syntactically well-formed sentence from a syntactically ill-formed one has the potential to be useful in an L2 language-learning context. In this article, we describe a classifier that classifies English sentences as either well formed or ill formed using information gleaned from three different natural language processing techniques. We describe the issues involved in acquiring data to train such a classifier and present experimental results for the classifier on a variety of ill-formed sentences. We demonstrate that (a) combining information from a variety of linguistic sources is helpful, (b) the trade-off between accuracy on well-formed sentences and accuracy on ill-formed sentences can be fine-tuned by training multiple classifiers in a voting scheme, and (c) the performance of the classifier varies, with better performance on transcribed spoken sentences produced by less advanced language learners.
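    Point (b) can be made concrete with a toy example: when several binary classifiers vote, raising the number of votes required to flag a sentence as ill-formed favors accuracy on well-formed sentences, while lowering it favors accuracy on ill-formed ones. The votes and gold labels below are invented for illustration, not the authors' data.

        # Toy illustration: tuning the well-formed/ill-formed trade-off
        # by varying the vote threshold; votes and labels are invented.

        def vote(votes, threshold):
            """Label a sentence ill-formed (1) if at least `threshold` classifiers say so."""
            return 1 if sum(votes) >= threshold else 0

        # (votes from three classifiers, gold label): 1 = ill-formed, 0 = well-formed
        data = [([1, 1, 0], 1), ([0, 1, 0], 0), ([1, 1, 1], 1), ([0, 0, 1], 0)]

        for threshold in (1, 2, 3):
            preds = [vote(v, threshold) for v, _ in data]
            acc = sum(p == g for p, (_, g) in zip(preds, data)) / len(data)
            print(f"threshold={threshold}: predictions={preds}, accuracy={acc:.2f}")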

    Problems in Evaluating Grammatical Error Detection Systems

    Many evaluation issues for grammatical error detection have previously been overlooked, making it hard to draw meaningful comparisons between different approaches, even when they are evaluated on the same corpus. To begin with, the three-way contingency between a writer's sentence, the annotator's correction, and the system's output makes evaluation more complex than in some other NLP tasks, which we address by presenting an intuitive evaluation scheme. Of particular importance to error detection is the skew of the data (the low frequency of errors as compared to non-errors), which distorts some traditional measures of performance and limits their usefulness, leading us to recommend the reporting of raw measurements (true positives, false negatives, false positives, true negatives). Other issues that are particularly vexing for error detection concern defining these raw measurements: specifying the size or scope of an error, properly treating errors as graded rather than discrete phenomena, and counting non-errors. We discuss recommendations for best practices in reporting the results of system evaluation for these cases, recommendations which depend upon making clear one's assumptions and applications for error detection. By highlighting the problems with current error detection evaluation, the field will be better able to move forward.
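    The skew problem is easy to see with hypothetical raw counts: when errors are rare, plain accuracy rewards a system even when it misses most errors, which is why the paper recommends reporting the raw measurements themselves. The counts below are invented for illustration.

        # Hypothetical raw counts: 100 errors among 10,000 tokens.
        tp, fn, fp, tn = 30, 70, 20, 9880

        accuracy = (tp + tn) / (tp + fn + fp + tn)   # 0.991: looks excellent...
        precision = tp / (tp + fp)                   # 0.600
        recall = tp / (tp + fn)                      # 0.300: ...but 70% of errors are missed

        print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f}")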