7 research outputs found

    GenERRate: generating errors for use in grammatical error detection

    This paper explores the use of automatically generated ungrammatical data in error detection, with a focus on the task of classifying a sentence as grammatical or ungrammatical. We present an error generation tool called GenERRate and show how it can be used to improve the performance of a classifier on learner data. We also describe initial attempts to replicate Cambridge Learner Corpus errors using GenERRate.
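    The abstract describes GenERRate only at the level of its error-generation idea. The toy Python sketch below illustrates that idea under stated assumptions: it corrupts a grammatical token sequence by deleting, duplicating, or moving a token, so that grammatical/ungrammatical training pairs can be produced automatically. The operation inventory and selection strategy are illustrative assumptions, not GenERRate's actual configuration.

    import random

    # Toy GenERRate-style error injection (not the actual tool): derive an
    # ungrammatical sentence from a grammatical one by applying one simple
    # corruption operation to its token list.
    def inject_error(tokens, rng=random):
        """Return a corrupted copy of `tokens` plus the operation applied."""
        tokens = list(tokens)
        op = rng.choice(["delete", "duplicate", "move"])
        i = rng.randrange(len(tokens))
        if op == "delete" and len(tokens) > 1:
            del tokens[i]
        elif op == "duplicate":
            tokens.insert(i, tokens[i])          # e.g. "the the cat sat"
        else:  # move the chosen token to a random other position
            tok = tokens.pop(i)
            tokens.insert(rng.randrange(len(tokens) + 1), tok)
        return tokens, op

    if __name__ == "__main__":
        corrupted, op = inject_error("the cat sat on the mat".split())
        print(op, " ".join(corrupted))

    In the papers listed here, such corrupted sentences serve as negative training and test examples for a grammatical/ungrammatical classifier.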

    Detecting grammatical errors with treebank-induced, probabilistic parsers

    Today's grammar checkers often use hand-crafted rule systems that define acceptable language. The development of such rule systems is labour-intensive and has to be repeated for each language. At the same time, grammars automatically induced from syntactically annotated corpora (treebanks) are successfully employed in other applications, for example text understanding and machine translation. At first glance, treebank-induced grammars seem unsuitable for grammar checking because they massively over-generate and, due to their high robustness, fail to reject ungrammatical input. We present three new methods for judging the grammaticality of a sentence with probabilistic, treebank-induced grammars, demonstrating that such grammars can be successfully applied to automatically judge the grammaticality of an input string. Our best-performing method exploits the differences between parse results for grammars trained on grammatical and ungrammatical treebanks. The second approach builds an estimator of the probability of the most likely parse using grammatical training data that has previously been parsed and annotated with parse probabilities. If the estimated probability of an input sentence (whose grammaticality is to be judged by the system) is higher by a certain amount than the actual parse probability, the sentence is flagged as ungrammatical. The third approach extracts discriminative parse tree fragments in the form of CFG rules from parsed grammatical and ungrammatical corpora and trains a binary classifier to distinguish grammatical from ungrammatical sentences. The three approaches are evaluated on a large test set of grammatical and ungrammatical sentences. The ungrammatical test set is generated automatically by inserting common grammatical errors into the British National Corpus. The results are compared to two traditional approaches: one that uses a hand-crafted, discriminative grammar, the XLE ParGram English LFG, and one based on part-of-speech n-grams. In addition, the baseline methods and the new methods are combined in a machine-learning-based framework, yielding further improvements.
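    The second of the three methods lends itself to a compact illustration. The sketch below is a minimal, assumption-laden version rather than the authors' implementation: it fits a least-squares estimate of how the log probability of the best parse scales with sentence length on grammatical data, then flags a test sentence as ungrammatical when its actual parse log probability falls short of that estimate by more than a margin. The parser itself is not shown; the training pairs and the margin value are placeholders.

    # Sketch of a parse-probability estimator for grammaticality judgement.
    # `training` is a list of (sentence_length, log_parse_probability) pairs
    # obtained by parsing grammatical sentences with a probabilistic,
    # treebank-induced grammar (not shown here).
    def fit_length_model(training):
        """Least-squares fit of log parse probability against sentence length."""
        n = len(training)
        mean_x = sum(x for x, _ in training) / n
        mean_y = sum(y for _, y in training) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in training)
        var = sum((x - mean_x) ** 2 for x, _ in training)
        slope = cov / var
        intercept = mean_y - slope * mean_x
        return slope, intercept

    def is_ungrammatical(length, actual_log_prob, model, margin=5.0):
        """Flag a sentence whose parse is much less probable than expected."""
        slope, intercept = model
        estimated = slope * length + intercept
        return (estimated - actual_log_prob) > margin

    The margin plays the role of the "certain amount" mentioned in the abstract; in practice it would be tuned on held-out grammatical and ungrammatical data.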

    Using linguistic methods for the automated detection and correction of errors produced by French speakers writing in English

    The starting point of this research is the observation that French speakers writing in English in personal or professional contexts still encounter grammatical difficulties, even at intermediate to advanced levels. The first tools they reach for to correct those errors, automatic grammar checkers, do not offer corrections for many of the errors produced by French-speaking users of English, in particular because those tools are rarely designed for L2 users. We propose to identify the difficulties encountered by these speakers through the detection of errors in a representative corpus, and to build a linguistic model of the errors and their corrections. The model results from a thorough linguistic analysis of the phenomena at stake, based on grammatical information available in reference grammars, corpus studies, and the analysis of erroneous segments. The validity of the linguistic approach is established by implementing the detection and correction rules in a functional platform and then evaluating the results of applying those rules to L1 and L2 English corpora.
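    As a small illustration of what a detection and correction rule of this kind can look like, the sketch below handles one error frequent among French speakers writing in English: the pluralisation of uncountable nouns (e.g. "advices", "informations"). The word list and replacement strategy are illustrative assumptions and do not reproduce the rules implemented in this work.

    import re

    # Toy detection/correction rule: map pluralised uncountable nouns back to
    # their singular forms. Word list and strategy are illustrative only.
    UNCOUNTABLE = {"informations": "information",
                   "advices": "advice",
                   "furnitures": "furniture"}

    PATTERN = re.compile(r"\b(" + "|".join(UNCOUNTABLE) + r")\b", re.IGNORECASE)

    def correct(text):
        """Detect pluralised uncountable nouns and substitute singular forms."""
        def repl(match):
            word = match.group(0)
            fixed = UNCOUNTABLE[word.lower()]
            return fixed.capitalize() if word[0].isupper() else fixed
        return PATTERN.sub(repl, text)

    if __name__ == "__main__":
        print(correct("She gave me useful advices and detailed informations."))
        # -> "She gave me useful advice and detailed information."

    A full system of the kind described above would combine many such rules, grounded in corpus evidence and reference grammars, and evaluate their precision on both L1 and L2 English corpora.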

    Unsupervised Evaluation of Parser Robustness

    No full text available