Complex Word Identification: Challenges in Data Annotation and System Performance
This paper revisits the problem of complex word identification (CWI)
following up the SemEval CWI shared task. We use ensemble classifiers to
investigate how well computational methods can discriminate between complex and
non-complex words. Furthermore, we analyze the classification performance to
understand what makes lexical complexity challenging. Our findings show that
most systems performed poorly on the SemEval CWI dataset, and one of the
reasons for that is the way in which human annotation was performed.
Comment: Proceedings of the 4th Workshop on NLP Techniques for Educational Applications (NLPTEA 2017).
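As a rough illustration of the kind of ensemble set-up used for complex word identification (not the paper's exact system), the sketch below combines a few cheap lexical features in a scikit-learn voting ensemble; the toy training words, frequencies and feature choices are assumptions.

```python
# Minimal sketch of an ensemble complex-word classifier (illustrative only;
# the features, toy data and model choices are assumptions, not the paper's system).
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Toy training set: (word, corpus frequency per million, label) where 1 = complex.
train = [("cat", 120.0, 0), ("dog", 150.0, 0), ("ubiquitous", 2.1, 1),
         ("house", 200.0, 0), ("ameliorate", 0.8, 1), ("run", 300.0, 0),
         ("esoteric", 1.5, 1), ("perfunctory", 0.4, 1)]

def features(word, freq):
    # Word length, frequency and vowel ratio are typical lightweight CWI features.
    vowels = sum(ch in "aeiou" for ch in word)
    return [len(word), freq, vowels / len(word)]

X = [features(w, f) for w, f, _ in train]
y = [label for _, _, label in train]

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression()),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("svm", SVC(random_state=0))],
    voting="hard")
ensemble.fit(X, y)

print(ensemble.predict([features("sesquipedalian", 0.1)]))  # expected: [1]
```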
Collecting and Exploring Everyday Language for Predicting Psycholinguistic Properties of Words
Lexical Simplification for Non-Native English Speakers
Lexical Simplification is the process of replacing complex words in texts to create simpler, more easily comprehensible alternatives. It has proven very useful as an assistive tool for users who may find complex texts challenging. Those who suffer from Aphasia and Dyslexia are among the most common beneficiaries of such technology. In this thesis we focus on Lexical Simplification for English using non-native English speakers as the target audience. Even though they number in the hundreds of millions, there are very few contributions that aim to address the needs of these users. Current work is unable to provide solutions for this audience due to a lack of user studies, datasets and resources. Furthermore, existing work in Lexical Simplification is limited regardless of the target audience, as it tends to focus on certain steps of the simplification process and disregard others, such as the automatic detection of the words that require simplification.

We introduce a series of contributions to the area of Lexical Simplification that range from user studies and resulting datasets to novel methods for all steps of the process and evaluation techniques. In order to understand the needs of non-native English speakers, we conducted three user studies with 1,000 users in total. These studies demonstrated that the number of words deemed complex by non-native speakers of English correlates with their level of English proficiency and appears to decrease with age. They also indicated that although words deemed complex tend to be much less ambiguous and less frequently found in corpora, the complexity of words also depends on the context in which they occur. Based on these findings, we propose an ensemble approach which achieves state-of-the-art performance in identifying words that challenge non-native speakers of English.

Using the insight and data gathered, we created two new approaches to Lexical Simplification that address the needs of non-native English speakers: joint and pipelined. The joint approach employs resource-light neural language models to simplify words deemed complex in a single step. While its performance was unsatisfactory, it proved useful when paired with pipelined approaches. Our pipelined simplifier generates candidate replacements for complex words using new, context-aware word embedding models, filters them for grammaticality and meaning preservation using a novel unsupervised ranking approach, and finally ranks them for simplicity using a novel supervised ranker that learns a model based on the needs of non-native English speakers.

In order to test these and previous approaches, we designed LEXenstein, a framework for Lexical Simplification, and compiled NNSeval, a dataset that accounts for the needs of non-native English speakers. Comparisons against hundreds of previous approaches as well as the variants we proposed showed that our pipelined approach outperforms all others. Finally, we introduce PLUMBErr, a new automatic error identification framework for Lexical Simplification. Using this framework, we assessed the type and number of errors made by our pipelined approach throughout the simplification process and found that combining our ensemble complex word identifier with our pipelined simplifier yields a system that makes up to 25% fewer mistakes than the previous state-of-the-art strategies during the simplification process.
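To make the generate, filter and rank structure of such a pipelined simplifier concrete, here is a minimal sketch under assumed toy resources (hand-made embedding vectors, a tiny frequency list and an arbitrary similarity threshold); it mirrors the shape of the pipeline, not the thesis's actual models or data.

```python
# Sketch of a generate -> filter -> rank lexical simplification pipeline.
# The toy embeddings, frequency list and threshold are illustrative assumptions.
import numpy as np

embeddings = {                     # stand-in for a context-aware embedding model
    "perplexed": np.array([0.9, 0.1, 0.3]),
    "confused":  np.array([0.85, 0.15, 0.35]),
    "baffled":   np.array([0.8, 0.2, 0.4]),
    "happy":     np.array([0.1, 0.9, 0.2]),
}
freq_per_million = {"confused": 25.0, "baffled": 3.0, "happy": 120.0}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def generate(target, n=5):
    # Candidate generation: nearest neighbours of the target in embedding space.
    t = embeddings[target]
    scored = [(w, cosine(t, v)) for w, v in embeddings.items() if w != target]
    return sorted(scored, key=lambda x: -x[1])[:n]

def filter_candidates(cands, min_sim=0.9):
    # Filtering: drop candidates unlikely to preserve the original meaning.
    return [(w, s) for w, s in cands if s >= min_sim]

def rank(cands):
    # Ranking: prefer more frequent (and hence usually simpler) candidates.
    return sorted(cands, key=lambda x: -freq_per_million.get(x[0], 0.0))

candidates = filter_candidates(generate("perplexed"))
print(rank(candidates))  # e.g. [('confused', ...), ('baffled', ...)]
```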
Can lies be faked? Comparing low-stakes and high-stakes deception video datasets from a Machine Learning perspective
Despite the great impact of lies in human societies and a meager 54% human
accuracy for Deception Detection (DD), Machine Learning systems that perform
automated DD are still not viable for proper application in real-life settings
due to data scarcity. Few publicly available DD datasets exist and the creation
of new datasets is hindered by the conceptual distinction between low-stakes
and high-stakes lies. Theoretically, the two kinds of lies are so distinct that
a dataset of one kind could not be used for applications for the other kind.
Even though it is easier to acquire data on low-stakes deception since it can
be simulated (faked) in controlled settings, these lies do not hold the same
significance or depth as genuine high-stakes lies, which are much harder to
obtain and are the ones of practical interest for automated DD systems. To
investigate whether this distinction holds true from a practical perspective,
we design several experiments comparing a high-stakes DD dataset and a
low-stakes DD dataset, evaluating both with a Deep Learning classifier that
works exclusively from video data. In our experiments, a network trained on
low-stakes lies classified high-stakes deception more accurately than
low-stakes deception, although using low-stakes lies as an augmentation
strategy for the high-stakes dataset decreased its accuracy.
Comment: 11 pages, 3 figures.
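The experimental design reduces to a cross-dataset protocol: train on one kind of lie, test on the other, and optionally merge the two. The sketch below only illustrates that protocol; the synthetic feature vectors and the simple linear classifier stand in for the paper's deep network over video data and are entirely made up.

```python
# Sketch of the cross-dataset protocol (train on one deception dataset,
# test on the other, optionally use one to augment the other).
# Toy features and a linear classifier stand in for a deep video model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synthetic_dataset(n, shift):
    # Each "clip" is a feature vector; label 1 = deceptive, 0 = truthful.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 8))
    y = rng.integers(0, 2, size=n)
    X[y == 1] += 0.5                      # deceptive clips drift slightly
    return X, y

X_low, y_low = synthetic_dataset(200, shift=0.0)    # low-stakes (acted) lies
X_high, y_high = synthetic_dataset(200, shift=0.3)  # high-stakes (real) lies

# Protocol 1: train on low-stakes data, evaluate on high-stakes data.
clf = LogisticRegression().fit(X_low, y_low)
print("low -> high:", accuracy_score(y_high, clf.predict(X_high)))

# Protocol 2: use low-stakes data to augment a high-stakes training split.
X_aug = np.vstack([X_high[:100], X_low])
y_aug = np.concatenate([y_high[:100], y_low])
clf_aug = LogisticRegression().fit(X_aug, y_aug)
print("augmented:", accuracy_score(y_high[100:], clf_aug.predict(X_high[100:])))
```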
A Lightweight Regression Method to Infer Psycholinguistic Properties for Brazilian Portuguese
Psycholinguistic properties of words have been used in various approaches to
Natural Language Processing tasks, such as text simplification and readability
assessment. Most of these properties are subjective, involving costly and
time-consuming surveys to be gathered. Recent approaches use the limited
datasets of psycholinguistic properties to extend them automatically to large
lexicons. However, some of the resources used by such approaches are not
available to most languages. This study presents a method to infer
psycholinguistic properties for Brazilian Portuguese (BP) using regressors
built with a light set of features usually available for less resourced
languages: word length, frequency lists, lexical databases composed of school
dictionaries and word embedding models. The correlations between the properties
inferred are close to those obtained by related works. The resulting resource
contains 26,874 words in BP annotated with concreteness, age of acquisition,
imageability and subjective frequency.
Comment: Paper accepted for TSD 2017.
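The core idea is a regressor that maps cheap lexical features to a subjective rating. The sketch below shows that shape for a single property (concreteness); the toy lexicon, feature set and ridge model are assumptions, not the paper's actual resources or regressors.

```python
# Sketch of a lightweight regressor for a psycholinguistic property
# (here concreteness on a 1-7 scale); the toy lexicon, features and
# ridge model are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.linear_model import Ridge

# (word, log frequency, in a school dictionary?, concreteness rating)
lexicon = [("casa", 5.2, 1, 6.8), ("ideia", 4.9, 1, 2.1),
           ("pedra", 4.1, 1, 6.5), ("saudade", 4.4, 0, 2.5),
           ("mesa", 4.6, 1, 6.7), ("conceito", 3.8, 0, 1.9)]

def features(word, log_freq, in_dict):
    # Word length, corpus frequency and dictionary membership are cheap
    # features usually available even for less-resourced languages.
    return [len(word), log_freq, in_dict]

X = np.array([features(w, f, d) for w, f, d, _ in lexicon])
y = np.array([rating for *_, rating in lexicon])

model = Ridge(alpha=1.0).fit(X, y)
# Extend the annotation to an unseen word.
print(model.predict([features("cadeira", 4.0, 1)]))
```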
SemEval-2021 Task 1: Lexical Complexity Prediction
© 2021 The Authors. Published by ACL. This is an open access article available under a Creative Commons licence.
The published version can be accessed at the following link on the publisher's website: https://aclanthology.org/2021.semeval-1.1
This paper presents the results and main findings of SemEval-2021 Task 1 - Lexical Complexity Prediction. We provided participants with an augmented version of the CompLex Corpus (Shardlow et al. 2020). CompLex is an English multi-domain corpus in which words and multi-word expressions (MWEs) were annotated with respect to their complexity using a five-point Likert scale. SemEval-2021 Task 1 featured two Sub-tasks: Sub-task 1 focused on single words and Sub-task 2 focused on MWEs. The competition attracted 198 teams in total, of which 54 submitted official runs on the test data to Sub-task 1 and 37 to Sub-task 2.
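A common way to turn such five-point Likert judgements into a continuous complexity target, and to score system predictions against it with Pearson correlation, is illustrated below; the annotations, predictions and the (x - 1) / 4 normalisation are made-up illustrations, not the task's official data or scoring scripts.

```python
# Minimal illustration: 5-point Likert annotations -> continuous score in [0, 1],
# then correlation-based evaluation of predictions. All numbers are made up.
import numpy as np
from scipy.stats import pearsonr

# Each target word gets several 1-5 judgements; map each to [0, 1] and average.
likert = {"river": [1, 1, 2], "equation": [3, 2, 3], "syntax": [4, 3, 4]}
gold = {w: np.mean([(j - 1) / 4 for j in judgements])
        for w, judgements in likert.items()}

predictions = {"river": 0.10, "equation": 0.45, "syntax": 0.70}

words = sorted(gold)
r, _ = pearsonr([gold[w] for w in words], [predictions[w] for w in words])
print({w: round(gold[w], 3) for w in words}, "Pearson r =", round(r, 3))
```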
Unsupervised Lexical Simplification for Non-Native Speakers
Lexical Simplification is the task of replacing complex words with simpler alternatives. We propose a novel, unsupervised approach for the task. It relies on two resources: a corpus of subtitles and a new type of word embeddings model that accounts for the ambiguity of words. We compare the performance of our approach and many others over a new evaluation dataset, which accounts for the simplification needs of 400 non-native English speakers. The experiments show that our approach outperforms state-of-the-art work in Lexical Simplification.
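One simple way to give word embeddings some sensitivity to ambiguity, broadly in the spirit of the model described above, is to train them over tokens annotated with their part-of-speech tag so that different uses of the same surface form get separate vectors. The gensim sketch below uses a made-up four-sentence corpus and tag set; the actual model was trained on a large subtitle corpus.

```python
# Sketch of ambiguity-aware embeddings: append a POS tag to each token so
# that, e.g., "perplexed_VERB" and "perplexed_ADJ" receive separate vectors.
# The tiny tagged corpus below is made up purely for illustration.
from gensim.models import Word2Vec

tagged_sentences = [
    ["the_DET", "answer_NOUN", "perplexed_VERB", "her_PRON"],
    ["she_PRON", "looked_VERB", "perplexed_ADJ", "and_CONJ", "confused_ADJ"],
    ["he_PRON", "was_VERB", "confused_ADJ", "by_ADP", "the_DET", "answer_NOUN"],
    ["a_DET", "confused_ADJ", "look_NOUN"],
]

model = Word2Vec(tagged_sentences, vector_size=25, window=3,
                 min_count=1, sg=1, epochs=50, seed=0)

# Substitution candidates for the adjective sense only.
print(model.wv.most_similar("perplexed_ADJ", topn=3))
```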