22 research outputs found
RankME: Reliable Human Ratings for Natural Language Generation
Human evaluation for natural language generation (NLG) often suffers from
inconsistent user ratings. While previous research tends to attribute this
problem to individual user preferences, we show that the quality of human
judgements can also be improved by experimental design. We present a novel
rank-based magnitude estimation method (RankME), which combines the use of
continuous scales and relative assessments. We show that RankME significantly
improves the reliability and consistency of human ratings compared to
traditional evaluation methods. In addition, we show that it is possible to
evaluate NLG systems according to multiple, distinct criteria, which is
important for error analysis. Finally, we demonstrate that RankME, in
combination with Bayesian estimation of system quality, is a cost-effective
alternative for ranking multiple NLG systems.
Accepted to NAACL 2018 (The 2018 Conference of the North American Chapter of the Association for Computational Linguistics).
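The aggregation behind rank-based magnitude estimation can be sketched as follows. This is a toy illustration, not the authors' implementation: each judge scores system outputs on an open-ended positive scale relative to a reference, per-judge log scores are z-normalised so judges with different scale use become comparable, and systems are ranked by mean normalised score (the paper instead pairs RankME with Bayesian estimation of system quality for the final ranking).

```python
import math
from statistics import mean, stdev

def normalize_judge(scores):
    """Z-normalise one judge's log magnitudes, so judges who use
    different ranges of the open-ended scale become comparable."""
    logs = {system: math.log(s) for system, s in scores.items()}
    mu, sd = mean(logs.values()), stdev(logs.values()) or 1.0
    return {system: (v - mu) / sd for system, v in logs.items()}

def rank_systems(judgements):
    """Average normalised magnitudes across judges; best system first.
    (Mean aggregation stands in here for the paper's Bayesian estimation.)"""
    totals = {}
    for judge in judgements:
        for system, z in normalize_judge(judge).items():
            totals.setdefault(system, []).append(z)
    return sorted(totals, key=lambda s: -mean(totals[s]))
```

Because the normalisation is monotone within each judge, relative assessments survive even when judges anchor their magnitudes very differently.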
Improving Readability of Swedish Electronic Health Records through Lexical Simplification: First Results
Abstract This paper describes part of an ongoing effort to improve the readability of Swedish electronic health records (EHRs). An EHR contains systematic documentation of a single patient's medical history across time, entered by healthcare professionals with the purpose of enabling safe and informed care. Linguistically, medical records exemplify a highly specialised domain, which can be superficially characterised as having telegraphic sentences involving displaced or missing words, abundant abbreviations, spelling variations including misspellings, and specialised terminology. We report results on lexical simplification of Swedish EHRs, by which we mean detecting the unknown, out-of-dictionary words and trying to resolve them either as compounded known words, abbreviations, or misspellings.
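The resolution step described in the abstract can be sketched as a small cascade. The lexicon, abbreviation table, and thresholds below are invented for illustration and are not the paper's actual resources:

```python
import difflib

LEXICON = {"blod", "tryck", "patient", "hjärta"}      # toy Swedish word list
ABBREVS = {"pat": "patient", "bltr": "blodtryck"}     # toy abbreviation table

def resolve(token, lexicon=LEXICON, abbrevs=ABBREVS):
    """Classify an out-of-dictionary token as a compound of known words,
    a known abbreviation, or a close misspelling; returns (category, form)."""
    if token in lexicon:
        return ("known", token)
    if token in abbrevs:                              # abbreviation lookup
        return ("abbreviation", abbrevs[token])
    for i in range(2, len(token) - 1):                # naive compound split
        if token[:i] in lexicon and token[i:] in lexicon:
            return ("compound", f"{token[:i]}+{token[i:]}")
    close = difflib.get_close_matches(token, lexicon, n=1, cutoff=0.8)
    if close:                                         # spelling correction
        return ("misspelling", close[0])
    return ("unresolved", token)
```

A real system would need frequency-aware compound splitting and a clinical abbreviation inventory, but the cascade order (lexicon, abbreviations, compounds, spelling) mirrors the resolution strategy the abstract names.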
Reducing lexical complexity as a tool to increase text accessibility for children with dyslexia
Lexical complexity plays a central role in readability, particularly for dyslexic children and poor readers because of their slow and laborious decoding and word recognition skills. Although some features to aid readability may be common to many languages (e.g., the majority of 'easy' words are of low frequency), we believe that lexical complexity is mainly language-specific. In this paper, we define lexical complexity for French and we present a pilot study on the effects of text simplification in dyslexic children. The participants were asked to read aloud original and manually simplified versions of a standardized French text corpus and to answer comprehension questions after reading each text. The analysis of the results shows that the simplifications performed were beneficial in terms of reading speed and they reduced the number of reading errors (mainly lexical ones) without a loss in comprehension. Although the number of participants in this study was rather small (N=10), the results are promising and contribute to the development of applications in computational linguistics.
Medical Text Simplification: Optimizing for Readability with Unlikelihood Training and Reranked Beam Search Decoding
Text simplification has emerged as an increasingly useful application of AI
for bridging the communication gap in specialized fields such as medicine,
where the lexicon is often dominated by technical jargon and complex
constructs. Despite notable progress, medical simplification methods sometimes
produce generated text of lower quality and diversity. In
this work, we explore ways to further improve the readability of text
simplification in the medical domain. We propose (1) a new unlikelihood loss
that encourages generation of simpler terms and (2) a reranked beam search
decoding method that optimizes for simplicity. Together, these achieve better
performance on readability metrics across three datasets. This study's findings
offer promising avenues for improving text simplification in the medical field.
Accepted to EMNLP 2023 (Findings).
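The unlikelihood idea can be sketched at the token level. This is a generic form of unlikelihood training, not the paper's exact loss (which targets simpler-term generation with its own weighting): the usual negative log-likelihood on the gold token is augmented with a term that penalizes probability mass placed on a designated set of "complex" vocabulary items.

```python
import math

def simplification_loss(probs, target_id, complex_ids, alpha=1.0):
    """Token-level loss: standard NLL on the gold token plus an
    unlikelihood term -log(1 - p) summed over penalized 'complex'
    vocabulary items, weighted by alpha."""
    nll = -math.log(probs[target_id])
    unlikelihood = -sum(math.log(1.0 - probs[c])
                        for c in complex_ids if c != target_id)
    return nll + alpha * unlikelihood
```

Minimizing the extra term drives the model to reallocate probability away from jargon-like tokens, which is the mechanism the abstract's point (1) relies on; point (2), reranked beam search, then scores finished candidates by a simplicity metric.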
The Corpus of Basque Simplified Texts (CBST)
In this paper we present the corpus of Basque simplified texts. This corpus compiles 227 original sentences of the science popularisation domain and two simplified versions of each sentence. The simplified versions have been created following different approaches: the structural approach, by a court translator who follows easy-to-read guidelines, and the intuitive approach, by a teacher based on her experience. The aim of this corpus is to make a comparative analysis of simplified texts. To that end, we also present the annotation scheme we have created to annotate the corpus. The annotation scheme is divided into eight macro-operations: delete, merge, split, transformation, insert, reordering, no operation and other. These macro-operations can be classified into different operations. We also relate our work and results to other languages. This corpus will be used to corroborate the decisions taken and to improve the design of the automatic text simplification system for Basque.
Itziar Gonzalez-Dios's work was funded by a Ph.D. grant from the Basque Government and a postdoctoral grant for new doctors from the Vice-rectory of Research of the University of the Basque Country (UPV/EHU). We are very grateful to the translator and the teacher who simplified the texts. We also want to thank Dominique Brunato, Felice Dell'Orletta and Giulia Venturi for their help with the Italian annotation scheme and their suggestions when analysing the corpus, and Oier Lopez de Lacalle for his help with the statistical analysis. We also want to express our gratitude to the anonymous reviewers for their comments and suggestions. This research was supported by the Basque Government (IT344-10) and the Spanish Ministry of Economy and Competitiveness, EXTRECM Project (TIN2013-46616-C2-1-R).
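The eight macro-operations of the annotation scheme can be made concrete as a small data model. The field names below are illustrative, not the corpus's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class MacroOp(Enum):
    """The eight macro-operations of the CBST annotation scheme."""
    DELETE = "delete"
    MERGE = "merge"
    SPLIT = "split"
    TRANSFORMATION = "transformation"
    INSERT = "insert"
    REORDERING = "reordering"
    NO_OPERATION = "no operation"
    OTHER = "other"

@dataclass
class SimplificationRecord:
    """One corpus entry: an original sentence, its two simplified
    versions, and the macro-operations annotated for each."""
    original: str
    structural: str                    # easy-to-read-guidelines version
    intuitive: str                     # experience-based teacher version
    operations: list = field(default_factory=list)
```

Representing the scheme as an enumeration keeps the macro-operation inventory closed, which matches the paper's comparative goal: the same eight labels are applied to both the structural and the intuitive simplifications.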
An eye-tracking evaluation of some parser complexity metrics
Information theoretic measures of incremental parser load were generated from a phrase structure parser and a dependency parser and then compared with incremental eye movement metrics collected for the same temporarily syntactically ambiguous sentences, focusing on the disambiguating word. The findings show that the surprisal and entropy reduction metrics computed over a phrase structure grammar make good candidates for predictors of text readability for human comprehenders. This leads to a suggestion for the use of such metrics in Natural Language Generation (NLG).
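The two metrics the abstract names have standard information-theoretic definitions, sketched below (this is the textbook form, not the parser-specific computation used in the study):

```python
import math

def surprisal(p):
    """Surprisal of a word given its context: -log2 P(word | context),
    in bits; high values mark unexpected, costly words."""
    return -math.log2(p)

def entropy(dist):
    """Shannon entropy (bits) of a distribution over possible
    continuations or parser states."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def entropy_reduction(before, after):
    """Entropy reduction: the non-negative drop in uncertainty about
    the rest of the sentence after reading a word."""
    return max(0.0, entropy(before) - entropy(after))
```

At a disambiguating word, the dispreferred parse is pruned, so both surprisal (the word was unlikely under the dominant parse) and entropy reduction (uncertainty over continuations collapses) tend to spike, which is why they track eye-movement measures there.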
An evaluation of syntactic simplification rules for people with autism
Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR) at the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014).
Syntactically complex sentences constitute an obstacle for some people with Autistic Spectrum Disorders. This paper evaluates a set of simplification rules specifically designed for tackling complex and compound sentences. In total, 127 different rules were developed for the rewriting of complex sentences and 56 for the rewriting of compound sentences. The evaluation assessed the accuracy of these rules individually and revealed that fully automatic conversion of these sentences into a more accessible form is not very reliable.
EC FP7-ICT-2011-
Analyzing Text Complexity and Text Simplification: Connecting Linguistics, Processing and Educational Applications
Reading plays an important role in the process of learning and knowledge acquisition
for both children and adults. However, not all texts are accessible to every
prospective reader. Reading difficulties can arise when there is a mismatch between
a reader’s language proficiency and the linguistic complexity of the text
they read. In such cases, simplifying the text in its linguistic form while retaining
all the content could aid reader comprehension. In this thesis, we study text
complexity and simplification from a computational linguistic perspective.
We propose a new approach to automatically predict the text complexity using
a wide range of word level and syntactic features of the text. We show that this
approach results in accurate, generalizable models of text readability that work
across multiple corpora, genres and reading scales. Moving from documents to
sentences, we show that our text complexity features also accurately distinguish
different versions of the same sentence in terms of the degree of simplification
performed. This is useful for evaluating the quality of simplification performed
by a human expert or generated by a machine, and for choosing targets to simplify
in a difficult text. We also experimentally show the effect of text complexity on
readers’ performance outcomes and cognitive processing through an eye-tracking
experiment.
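The feature-based complexity prediction described above can be caricatured in a few lines. These are toy features, far shallower than the word-level and syntactic feature set the thesis actually uses:

```python
import re

def complexity_features(text):
    """Toy document-level readability features: word-level cues
    (length, lexical diversity) and a shallow syntactic cue
    (sentence length), of the kind fed to a readability model."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_word_len": sum(map(len, words)) / len(words),
        "avg_sent_len": len(words) / len(sentences),
        "type_token_ratio": len({w.lower() for w in words}) / len(words),
        "long_word_ratio": sum(len(w) > 6 for w in words) / len(words),
    }
```

A readability model then regresses or classifies over such vectors; because the features are computed per text span, the same representation transfers from documents to sentence pairs, which is what makes the document-to-sentence move described above possible.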
Turning from analyzing text complexity and identifying sentential simplifications
to generating simplified text, one can view automatic text simplification as a
process of translation from English to simple English. In this thesis, we propose
a statistical machine translation based approach for text simplification, exploring
the role of focused training data and language models in the process.
Exploring the linguistic complexity analysis further, we show that our text
complexity features can be useful in assessing the language proficiency of English
learners. Finally, we analyze German school textbooks in terms of their
linguistic complexity across grade levels, school types, and publishers,
by applying a pre-existing set of text complexity features developed
for German.