Automated Evaluation of Out-of-Context Errors
We present a new approach to evaluate computational models for the task of
text understanding by means of out-of-context error detection. Through the
novel design of our automated modification process, existing large-scale data
sources can be adopted for a vast number of text understanding tasks. The data
is thereby altered on a semantic level, allowing models to be tested against a
challenging set of modified text passages that require comprehension of a broader
narrative discourse. Our newly introduced task targets real-world
problems of transcription and translation systems by inserting authentic
out-of-context errors. The automated modification process is applied to the
2016 TEDTalk corpus. Entirely automating the process allows complete datasets
to be adopted at low cost, enabling supervised learning procedures and deeper
networks to be trained and tested. To evaluate the quality of the modification
algorithm, a language model and a supervised binary classification model are
trained and tested on the altered dataset. A human baseline evaluation is
conducted to compare these results with human performance. The outcome of the
evaluation task indicates how difficult it is for both machine-learning
algorithms and humans to detect semantic errors, showing that the errors
cannot be identified when the available context is limited to a single sentence.

Comment: LREC 2018, 5 pages, Out-of-Context Error Recognition, Automatic
Evaluation Dataset, Text Understanding, TEDTalk
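The abstract does not spell out the modification algorithm itself; purely as a
loose illustration of the idea, the hypothetical Python sketch below inserts an
out-of-context error by swapping one content word in a sentence for a word
drawn from an unrelated sentence, and labels the result for binary
classification. The function name, the length-based content-word heuristic, and
the toy corpus are assumptions for illustration, not the authors' method.

import random
import re

def insert_out_of_context_error(sentence, donor_sentences, rng=None):
    # Replace one longer token (a crude stand-in for a content word) with a
    # word taken from an unrelated sentence, producing a syntactically
    # plausible but semantically out-of-context passage.
    rng = rng or random.Random(0)
    tokens = sentence.split()
    candidates = [i for i, t in enumerate(tokens)
                  if len(re.sub(r"\W", "", t)) > 4]
    donor_words = [w for s in donor_sentences for w in s.split()
                   if len(re.sub(r"\W", "", w)) > 4]
    if not candidates or not donor_words:
        return None
    tokens[rng.choice(candidates)] = rng.choice(donor_words)
    return " ".join(tokens)

# Toy usage: build (text, label) pairs, where label 1 marks a passage with an
# automatically inserted out-of-context error.
corpus = [
    "The speaker described how coral reefs recover after bleaching events.",
    "Quarterly revenue grew faster than analysts had initially predicted.",
]
pairs = []
for idx, sent in enumerate(corpus):
    donors = corpus[:idx] + corpus[idx + 1:]
    modified = insert_out_of_context_error(sent, donors)
    pairs.append((sent, 0))
    if modified is not None:
        pairs.append((modified, 1))
print(pairs)

Such pairs could then feed a supervised binary classifier, while the abstract's
finding suggests that detecting the swapped word reliably requires context
beyond the single modified sentence.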