3 research outputs found

    Predicting lexical complexity in English texts: the Complex 2.0 dataset

    © 2022 The Authors. Published by Springer. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher’s website: https://doi.org/10.1007/s10579-022-09588-2

    Identifying words which may cause difficulty for a reader is an essential step in most lexical text simplification systems prior to lexical substitution and can also be used for assessing the readability of a text. This task is commonly referred to as complex word identification (CWI) and is often modelled as a supervised classification problem. For training such systems, annotated datasets in which words and sometimes multi-word expressions are labelled regarding complexity are required. In this paper we analyze previous work carried out in this task and investigate the properties of CWI datasets for English. We develop a protocol for the annotation of lexical complexity and use this to annotate a new dataset, CompLex 2.0. We present experiments using both new and old datasets to investigate the nature of lexical complexity. We found that a Likert-scale annotation protocol provides an objective setting that is superior for identifying the complexity of words compared to a binary annotation protocol. We release a new dataset using our new protocol to promote the task of Lexical Complexity Prediction.
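
    A minimal sketch of how lexical complexity prediction can be framed as regression on a Likert-derived score, as described in the abstract above. The features (word length, log frequency rank), the Ridge model, and the toy data are illustrative assumptions, not the actual CompLex 2.0 annotation or modelling setup.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Toy examples: (target word, complexity score in [0, 1] averaged from Likert ratings).
    samples = [
        ("cat", 0.05), ("river", 0.10), ("astonish", 0.45),
        ("photosynthesis", 0.80), ("jurisprudence", 0.85), ("house", 0.05),
        ("mitochondria", 0.75), ("run", 0.02), ("ubiquitous", 0.70),
        ("table", 0.05), ("ephemeral", 0.78), ("dog", 0.03),
    ]

    # Hypothetical frequency ranks standing in for a real frequency list.
    freq_rank = {w: i + 1 for i, (w, _) in enumerate(sorted(samples, key=lambda s: s[1]))}

    def features(word):
        # Two simple lexical features: word length and log frequency rank.
        return [len(word), np.log1p(freq_rank[word])]

    X = np.array([features(w) for w, _ in samples])
    y = np.array([score for _, score in samples])

    # Fit a regressor on most words and check error on the held-out rest.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print("MAE on held-out words:", mean_absolute_error(y_te, model.predict(X_te)))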

    Complex word identification model for lexical simplification in the Malay language for non-native speakers

    Text Simplification (TS) is the process of converting complex text into more easily understandable text. Lexical Simplification (LS), a method within TS, is the task of converting words into simpler words. Past studies have shown weaknesses in the first LS step, called Complex Word Identification (CWI), where simple and complex words have been misidentified by previous CWI models. The main objective of this study is to produce a Malay CWI model, with three sub-objectives: i) to propose a dataset based on a state-of-the-art Malay corpus, ii) to produce a Malay CWI model, and iii) to perform an evaluation based on standard statistical metrics: accuracy, precision, recall, F1-score, and G1-score. The model is constructed following the development of CWI models outlined by previous researchers. The study consists of three modules: i) a Malay CWI dataset, ii) Malay CWI features with new, enhanced stemmer rules, and iii) a CWI model based on the Gradient Boosted Tree (GB) algorithm. The model is evaluated on a state-of-the-art Malay corpus, divided into training and testing data using k-fold cross-validation with k=10. A series of tests was performed to ensure the best model was produced, covering feature selection, generation of an improved stemmer algorithm, handling of data imbalance, and classifier testing. The best model, using the Gradient Boosted Tree algorithm, showed an average accuracy of 92.55%, an F1-score of 92.09% and a G1-score of 89.7%. The F1-score exceeded the standard English baseline score by 16.3%. Three linguistic experts verified the results on 38 unseen sentences, and their assessments agreed significantly with the model's predictions. The proposed CWI model improves on the F1-score obtained in the second CWI shared task and can benefit non-native speakers and researchers.
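
    A minimal sketch of the evaluation setup described above: a Gradient Boosted Tree classifier scored with 10-fold cross-validation on accuracy, precision, recall and F1. The features and labels are synthetic placeholders, not the Malay corpus or the feature set used in the study.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import StratifiedKFold, cross_validate

    rng = np.random.default_rng(0)
    n = 200
    # Placeholder features standing in for lexical features
    # such as word length, syllable count and log frequency.
    X = rng.normal(size=(n, 3))
    # Placeholder labels: 1 = complex word, 0 = simple word.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

    clf = GradientBoostingClassifier(random_state=0)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_validate(clf, X, y, cv=cv,
                            scoring=["accuracy", "precision", "recall", "f1"])
    for metric in ("accuracy", "precision", "recall", "f1"):
        print(f"{metric}: {scores['test_' + metric].mean():.3f}")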

    An Automatic Modern Standard Arabic Text Simplification System: A Corpus-Based Approach

    This thesis brings together an overview of Text Readability (TR) in relation to Text Simplification (TS), with an application of both to Modern Standard Arabic (MSA). It presents our findings on using automatic TR and TS tools to teach MSA, along with challenges, limitations, and recommendations for enhancing the TR and TS models. Reading is one of the most vital tasks providing language input for communication and comprehension skills. It has been shown that the use of long sentences, connected sentences, embedded phrases, passive voice, non-standard word order, and infrequent words can increase text difficulty for people with low literacy levels as well as for second language learners. The thesis compares the use of sentence embeddings of different types (fastText, mBERT, XLM-R and Arabic-BERT) with traditional language features such as POS tags, dependency trees, readability scores and frequency lists for language learners. The 3-way CEFR (Common European Framework of Reference for Languages) classification achieves F1 scores of 0.80 and 0.75 for Arabic-BERT and XLM-R, respectively, and a Spearman correlation of 0.71 on the regression task. The binary difficulty classifier reaches an F1 of 0.94, and the sentence-pair semantic similarity classifier an F1 of 0.98. TS is an NLP task that aims to reduce the linguistic complexity of a text while maintaining its meaning and original information (Siddharthan, 2002; Camacho Collados, 2013; Saggion, 2017). The simplification study experimented with two approaches: (i) a classification approach and (ii) a generative approach, and evaluated their effectiveness using the BERTScore (Zhang et al., 2020) evaluation metric. The simple sentences produced by the mT5 model achieved P 0.72, R 0.68 and F1 0.70 via BERTScore, while combining Arabic-BERT and fastText achieved P 0.97, R 0.97 and F1 0.97. This research demonstrates the effectiveness of a corpus-based method combined with extensive linguistic feature extraction via recent NLP techniques, and provides insights that can be of use in various Arabic corpus studies and NLP tasks such as translation for educational purposes.
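
    A minimal sketch of the BERTScore evaluation step mentioned above, using the open-source bert-score package (Zhang et al., 2020). The sentence pair is an invented placeholder, not material from the Arabic corpus, and English is used here only so the example runs with the package's default model; for Arabic output one would pass lang="ar", which typically selects a multilingual BERT model.

    from bert_score import score

    # Invented example: a system simplification compared against a reference simplification.
    candidates = ["The rules were changed to make the text easier to read."]
    references = ["The regulations were revised so that the text is simpler to read."]

    # Returns tensors of precision, recall and F1 per candidate/reference pair.
    P, R, F1 = score(candidates, references, lang="en", verbose=False)
    print(f"P={P.mean():.3f} R={R.mean():.3f} F1={F1.mean():.3f}")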