Automatic case acquisition from texts for process-oriented case-based reasoning
This paper introduces a method for the automatic acquisition of a rich case
representation from free text for process-oriented case-based reasoning. Case
engineering is among the most complicated and costly tasks in implementing a
case-based reasoning system. This is especially so for process-oriented
case-based reasoning, where more expressive case representations are generally
used and, in our opinion, actually required for satisfactory case adaptation.
In this context, the ability to acquire cases automatically from procedural
texts is a major step toward reasoning about processes. We therefore
detail a methodology that makes case acquisition from processes described as
free text possible, with special attention given to assembly instruction texts.
This methodology extends the techniques we used to extract actions from cooking
recipes. We argue that techniques taken from natural language processing are
required for this task, and that they give satisfactory results. An evaluation
based on our implemented prototype extracting workflows from recipe texts is
provided.
Comment: In press, publication expected in 201
Detecting grammatical errors with treebank-induced, probabilistic parsers
Today's grammar checkers often use hand-crafted rule systems that define acceptable language. The development of such rule systems is labour-intensive and has to be repeated for each language. At the same time, grammars automatically induced from syntactically annotated corpora (treebanks) are successfully employed in other applications, for example text understanding and machine translation. At first glance, treebank-induced grammars seem to be unsuitable for grammar checking as they massively over-generate and fail to reject ungrammatical input due to their high robustness. We present three new methods for judging the grammaticality of a sentence with probabilistic, treebank-induced grammars, demonstrating that such grammars can be successfully applied to automatically judge the grammaticality of an input string. Our best-performing method exploits the differences between parse results for grammars trained on grammatical and ungrammatical treebanks. The second approach builds an estimator of the probability of the most likely parse using grammatical training data that has previously been parsed and annotated with parse probabilities. If the estimated probability of an input sentence (whose grammaticality is to be judged by the system) is higher by a certain amount than the actual parse probability, the sentence is flagged as ungrammatical. The third approach extracts discriminative parse tree fragments in the form of CFG rules from parsed grammatical and ungrammatical corpora and trains a binary classifier to distinguish grammatical from ungrammatical sentences. The three approaches are evaluated on a large test set of grammatical and ungrammatical sentences. The ungrammatical test set is generated automatically by inserting common grammatical errors into the British National Corpus. The results are compared to two traditional approaches, one that uses a hand-crafted, discriminative grammar, the XLE ParGram English LFG, and one based on part-of-speech n-grams. 
In addition, the baseline methods and the new methods are combined in a machine learning-based framework, yielding further improvements.
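The second approach above can be sketched concretely: estimate what parse probability a grammatical sentence of a given length should have, and flag the input when its actual parse probability falls short of that estimate by more than a margin. The linear estimator, its coefficients, and the margin below are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: flag a sentence as ungrammatical when its actual
# parse log-probability is much lower than expected for a grammatical
# sentence of the same length. Slope, intercept, and margin are toy
# values standing in for parameters fit on parsed grammatical data.

def estimate_log_prob(sentence_len, slope=-3.2, intercept=-1.5):
    # Parse log-probability tends to fall roughly linearly with length
    # on grammatical data; in practice these coefficients would be fit
    # to grammatical training sentences annotated with parse scores.
    return intercept + slope * sentence_len

def is_ungrammatical(actual_log_prob, sentence_len, margin=5.0):
    # Flag when the actual parse is less probable than expected for a
    # grammatical sentence of this length, by more than `margin`.
    expected = estimate_log_prob(sentence_len)
    return actual_log_prob < expected - margin
```

In this toy setup a 10-token sentence is expected to score about -33.5, so a parse at -40.0 is flagged while one at -34.0 is not.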
Ontological Engineering For Source Code Generation
Source Code Generation (SCG) is the sub-domain of the Automatic Programming (AP) that helps programmers to program using high-level abstraction. Recently, many researchers investigated many techniques to access SCG. The problem is to use the appropriate technique to generate the source code due to its purposes and the inputs. This paper introduces a review and an analysis related SCG techniques. Moreover, comparisons are presented for: techniques mapping, Natural Language Processing (NLP), knowledge base, ontology, Specification Configuration Template (SCT) model and deep learnin
Robust Parsing for Ungrammatical Sentences
Natural Language Processing (NLP) is a research area that specializes in studying computational approaches to human language. However, not all natural language sentences are grammatically correct. Sentences that are ungrammatical, awkward, or too casual/colloquial tend to appear in a variety of NLP applications, from product reviews and social media analysis to intelligent language tutors or multilingual processing. In this thesis, we focus on parsing, because it is an essential component of many NLP applications. We investigate in what ways the performance of statistical parsers degrades when dealing with ungrammatical sentences. We also hypothesize that breaking up parse trees at problematic parts prevents NLP applications from degrading due to incorrect syntactic analysis.
A parser is robust if it can overlook problems such as grammar mistakes and produce a parse tree that closely resembles the correct analysis for the intended sentence. We develop a robustness evaluation metric and conduct a series of experiments to compare the performance of state-of-the-art parsers on ungrammatical sentences. The evaluation results show that ungrammatical sentences present challenges for statistical parsers, because the well-formed syntactic trees they produce may not be appropriate for ungrammatical sentences. We also define a new framework for reviewing the parses of ungrammatical sentences and extracting the coherent parts whose syntactic analyses make sense. We call this task parse tree fragmentation. The experimental results suggest that the proposed overall fragmentation framework is a promising way to handle syntactically unusual sentences.
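The parse tree fragmentation idea can be illustrated with a toy sketch: walk the tree, keep maximal subtrees that contain no problematic constituent as coherent fragments, and drop the problematic material. The tuple tree encoding and the notion of a "problematic" label are assumptions for illustration, not the thesis's actual framework.

```python
# Toy parse tree fragmentation: trees are ("LABEL", [children]) tuples
# with string leaves. Clean maximal subtrees become fragments; nodes
# judged problematic (and bare leaves under them) are discarded.

def fragment(tree, is_problematic):
    def contains_problem(t):
        if isinstance(t, str):
            return False
        label, children = t
        return is_problematic(label) or any(contains_problem(c) for c in children)

    frags = []

    def walk(t):
        if isinstance(t, str):
            return  # a bare leaf under a problematic constituent is dropped
        if not contains_problem(t):
            frags.append(t)  # maximal clean subtree: keep it whole
            return
        for child in t[1]:
            walk(child)  # descend past the problematic region

    walk(tree)
    return frags
```

For example, a tree whose "ERR" constituent is problematic yields only its clean NP and VP subtrees as fragments.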
Towards Orthographic and Grammatical Clinical Text Correction: a First Approach
Grammatical Error Correction (GEC) is a subfield of Natural Language Processing that aims to automatically correct texts containing errors related to spelling, punctuation, or grammar. So far, it has mainly focused on texts produced by second-language learners, mostly in English. This Master's Thesis describes a first approach to Grammatical Error Correction for Spanish health records. This specific field has not been explored much until now, neither for Spanish in a general sense nor for the clinical domain specifically. For this purpose, the IMEC corpus (from Spanish, Informes Médicos en Español Corregidos), a new manually corrected parallel collection of Electronic Health Records, is introduced. The corpus has been automatically annotated using the ERRANT toolkit, which specializes in the automatic annotation of GEC parallel corpora and was adapted to Spanish for this task. Furthermore, experiments using neural networks and data augmentation are described and compared with a baseline system also created specifically for this task.
Proceedings of the 17th Annual Conference of the European Association for Machine Translation
Proceedings of the 17th Annual Conference of the European Association for Machine Translation (EAMT).
Multiple Admissibility in Language Learning: Judging Grammaticality using Unlabeled Data
We present our work on the problem of detecting Multiple Admissibility (MA) in language learning. Multiple Admissibility occurs when more than one grammatical form of a word fits syntactically and semantically in a given context. In second-language education, in particular in intelligent tutoring systems and computer-aided language learning (ITS/CALL), systems generate exercises automatically. MA implies that multiple alternative answers are possible. We treat the problem as a grammaticality judgement task. We train a neural network with an objective to label sentences as grammatical or ungrammatical, using a "simulated learner corpus": a dataset with correct text and with artificial errors, generated automatically. While MA occurs commonly in many languages, this paper focuses on learning Russian. We present a detailed classification of the types of constructions in Russian, in which MA is possible, and evaluate the model using a test set built from answers provided by users of the Revita language learning system.
Peer reviewed
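The "simulated learner corpus" construction described above can be sketched as follows: keep each correct sentence with a grammatical label, and pair it with an automatically corrupted copy labeled ungrammatical. The specific error operations below (swapping adjacent words, dropping a word) are illustrative assumptions; a real system would target language-specific phenomena such as Russian case and aspect.

```python
# Minimal sketch of a simulated learner corpus: (tokens, label) pairs
# where label 1 = grammatical original, label 0 = artificial error.
import random

def corrupt(tokens, rng):
    tokens = list(tokens)
    if len(tokens) > 2 and rng.random() < 0.5:
        i = rng.randrange(len(tokens) - 1)
        tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]  # swap neighbors
    else:
        tokens.pop(rng.randrange(len(tokens)))  # drop a token
    return tokens

def simulated_corpus(sentences, seed=0):
    rng = random.Random(seed)
    data = []
    for s in sentences:
        toks = s.split()
        data.append((toks, 1))                 # grammatical original
        data.append((corrupt(toks, rng), 0))   # artificial error
    return data
```

A binary classifier trained on such pairs can then be applied to learner answers, as in the grammaticality judgement setup the paper describes.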
Mask the Correct Tokens: An Embarrassingly Simple Approach for Error Correction
Text error correction aims to correct the errors in text sequences such as
those typed by humans or generated by speech recognition models. Previous error
correction methods usually take the source (incorrect) sentence as encoder
input and generate the target (correct) sentence through the decoder. Since the
error rate of the incorrect sentence is usually low (e.g., 10%), the
correction model can only learn to correct on limited error tokens but
trivially copy on most tokens (correct tokens), which harms the effective
training of error correction. In this paper, we argue that the correct tokens
should be better utilized to facilitate effective training and then propose a
simple yet effective masking strategy to achieve this goal. Specifically, we
randomly mask out a part of the correct tokens in the source sentence and let
the model learn to not only correct the original error tokens but also predict
the masked tokens based on their context information. Our method enjoys several
advantages: 1) it alleviates trivial copy; 2) it leverages effective training
signals from correct tokens; 3) it is a plug-and-play module and can be applied
to different models and tasks. Experiments on spelling error correction and
speech recognition error correction on Mandarin datasets and grammar error
correction on English datasets with both autoregressive and non-autoregressive
generation models show that our method improves the correction accuracy
consistently.
Comment: main track of EMNLP 202
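The masking strategy above can be sketched for the simple case where source and target are token-aligned (as in spelling correction): randomly replace a fraction of the already-correct source tokens with a mask symbol, while always keeping the error tokens visible. The mask rate, mask token, and 1:1 alignment are assumptions for illustration, not the paper's settings.

```python
# Hedged sketch of masking correct tokens: the model must both fix the
# remaining error tokens and reconstruct masked correct tokens from
# context, which reduces trivial copying during training.
import random

MASK = "[MASK]"

def mask_correct_tokens(src_tokens, tgt_tokens, mask_rate=0.3, seed=0):
    rng = random.Random(seed)
    masked = []
    for s, t in zip(src_tokens, tgt_tokens):
        if s == t and rng.random() < mask_rate:
            masked.append(MASK)   # correct token: may be masked out
        else:
            masked.append(s)      # error tokens are always kept as input
    return masked
```

With a mask rate of 1.0, every correct token is hidden and only the error token survives, so the training signal covers the whole sentence rather than the few error positions.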