
    Transforming unstructured digital clinical notes for improved health literacy

    Purpose – Clinical notes typically contain medical jargon and specialized words and phrases that are too complicated and technical for most people, which is one of the most challenging obstacles healthcare providers face in disseminating health information to consumers. The authors aim to investigate how machine learning techniques can be leveraged to transform clinical notes of interest into understandable expressions. Design/methodology/approach – The authors propose a natural language processing pipeline that extracts relevant information from long, unstructured clinical notes and simplifies their lexicon by replacing medical jargon and technical terms. In particular, the authors develop an unsupervised keyword-matching method to extract relevant information from clinical notes. To automatically evaluate the completeness of the extracted information, the authors perform a multi-label classification task on the relevant texts. To simplify the lexicon of the relevant text, the authors identify complex words using a sequence labeler and leverage transformer models to generate candidate words for substitution. The authors validate the proposed pipeline using 58,167 discharge summaries from critical care services. Findings – The results show that the proposed pipeline identifies relevant information with high completeness and simplifies complex expressions in clinical notes, so that the converted notes are highly readable with a low degree of meaning change. Social implications – The proposed pipeline can help healthcare consumers better understand their medical information and thereby strengthen communication between healthcare providers and consumers for better care. Originality/value – An innovative pipeline approach is developed to address the health literacy problem confronting healthcare providers and consumers in the ongoing digital transformation of the healthcare industry.
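The unsupervised keyword-matching step described above can be sketched roughly as follows. This is a minimal illustration, not the authors' actual method: the keyword list and the sentence-splitting rule are assumptions made for the example.

```python
import re

# Illustrative target keywords for "relevant" content in a discharge
# summary; the paper's actual keyword set is not specified here.
KEYWORDS = {"diagnosis", "discharge", "medication", "follow-up"}

def extract_relevant(note: str, keywords=KEYWORDS) -> list[str]:
    """Keep only the sentences of a note that mention a target keyword."""
    # Naive sentence split on end-of-sentence punctuation followed by space.
    sentences = re.split(r"(?<=[.!?])\s+", note.strip())
    return [s for s in sentences if any(k in s.lower() for k in keywords)]
```

For example, applied to "Patient admitted with chest pain. Discharge diagnosis: angina. Family visited daily.", the sketch keeps only the middle sentence.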

    Leveraging contextual representations with BiLSTM-based regressor for lexical complexity prediction

    Lexical complexity prediction (LCP) determines the complexity level of words or phrases in a sentence. LCP has a significant impact on language translation, readability assessment, and text generation. However, domain-specific technical words, complex grammatical structure, polysemy, and inter-word relationships and dependencies make it challenging to determine the complexity of words or phrases. In this paper, we propose an integrated transformer regressor model named ITRM-LCP to estimate the lexical complexity of words and phrases, in which diverse contextual features are extracted from various transformer models. The transformer models are fine-tuned using the text-pair data. A bidirectional LSTM-based regressor module is then plugged on top of each transformer to learn long-term dependencies and estimate complexity scores. The predicted scores of the modules are aggregated to determine the final complexity score. We assess our proposed model using two benchmark datasets from shared tasks. Experimental findings demonstrate that our ITRM-LCP model obtains 10.2% and 8.2% improvements on the news and Wikipedia corpora of the CWI-2018 dataset, compared to the top-performing systems (DAT, CAMB, and TMU). Additionally, our ITRM-LCP model surpasses state-of-the-art LCP systems (DeepBlueAI, JUST-BLUE) by 1.5% and 1.34% on the single-word and multi-word LCP tasks defined in the SemEval LCP-2021 task.
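The BiLSTM regressor head described above can be sketched in PyTorch: contextual token embeddings (e.g. from a fine-tuned transformer) pass through a bidirectional LSTM, and the final hidden states of both directions are projected to a single complexity score in [0, 1]. The dimensions and the sigmoid output layer are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class BiLSTMRegressor(nn.Module):
    """Sketch of a BiLSTM regressor head over transformer embeddings."""

    def __init__(self, embed_dim=768, hidden_dim=128):
        super().__init__()
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_embeddings):
        # token_embeddings: (batch, seq_len, embed_dim)
        _, (h_n, _) = self.bilstm(token_embeddings)
        # Concatenate the final forward and backward hidden states.
        h = torch.cat([h_n[0], h_n[1]], dim=-1)       # (batch, 2*hidden_dim)
        return torch.sigmoid(self.head(h)).squeeze(-1)  # (batch,)
```

In the paper's ensemble design, one such head sits on top of each transformer, and the per-module predictions are averaged into the final score.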

    Predicting lexical complexity in English texts: the Complex 2.0 dataset

    © 2022 The Authors. Published by Springer. This is an open access article available under a Creative Commons licence. The published version can be accessed on the publisher’s website: https://doi.org/10.1007/s10579-022-09588-2
    Identifying words which may cause difficulty for a reader is an essential step in most lexical text simplification systems, performed prior to lexical substitution, and can also be used for assessing the readability of a text. This task is commonly referred to as complex word identification (CWI) and is often modelled as a supervised classification problem. Training such systems requires annotated datasets in which words, and sometimes multi-word expressions, are labelled for complexity. In this paper we analyze previous work on this task and investigate the properties of CWI datasets for English. We develop a protocol for the annotation of lexical complexity and use it to annotate a new dataset, CompLex 2.0. We present experiments using both new and old datasets to investigate the nature of lexical complexity. We found that a Likert-scale annotation protocol provides an objective setting that is superior to a binary annotation protocol for identifying the complexity of words. We release a new dataset using our new protocol to promote the task of Lexical Complexity Prediction.
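The contrast between binary and Likert-scale annotation can be illustrated with a small aggregation sketch: each annotator's 5-point rating is mapped onto [0, 1] and the ratings are averaged into a continuous complexity score. The (r - 1) / 4 mapping below is a common convention and an assumption for this example, not necessarily the paper's exact scheme.

```python
def complexity_score(ratings: list[int]) -> float:
    """Aggregate 5-point Likert ratings into a continuous score in [0, 1].

    Each rating r (1 = very easy .. 5 = very difficult) is mapped to
    (r - 1) / 4, then the mapped ratings are averaged across annotators.
    """
    return sum((r - 1) / 4 for r in ratings) / len(ratings)
```

Unlike a binary complex/simple label, the averaged score preserves annotator disagreement: a word rated [2, 3, 4] lands mid-scale rather than being forced into one class.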

    Complex word identification model for lexical simplification in the Malay language for non-native speakers

    Text Simplification (TS) is the process of converting complex text into more easily understandable text. Lexical Simplification (LS), a method in TS, is the task of converting complex words into simpler words. Past studies have shown weaknesses in the first LS task, Complex Word Identification (CWI), where simple and complex words have been misidentified by previous CWI models. The main objective of this study is to produce a Malay CWI model, with three sub-objectives: i) to propose a dataset based on a state-of-the-art Malay corpus, ii) to produce a Malay CWI model, and iii) to perform an evaluation based on the standard statistical metrics: accuracy, precision, recall, F1-score, and G1-score. The model is constructed following the development of the CWI model outlined by previous researchers. This study consists of three components: i) a Malay CWI dataset, ii) Malay CWI features with new enhanced stemmer rules, and iii) a CWI model based on the Gradient Boosted Tree (GB) algorithm. The model is evaluated on a state-of-the-art Malay corpus, which is divided into training and testing data using k-fold cross-validation with k=10. A series of tests was performed to ensure the best model was produced, covering feature selection, generation of an improved stemmer algorithm, data imbalance, and classifier testing. The best model, using the Gradient Boosted Tree algorithm, achieved an average accuracy of 92.55%, an F1-score of 92.09%, and a G1-score of 89.7%. The F1-score exceeded the English standard baseline score by 16.3%. Three linguistic experts verified the results for 38 unseen sentences, showing significant agreement between the model's predictions and the experts' assessments. The proposed CWI model improves on the F1-score obtained in the second CWI shared task and can benefit non-native speakers and researchers.
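The evaluation setup described above, lexical features feeding a gradient-boosted tree classifier scored with k-fold cross-validation, can be sketched as follows. The synthetic data and the two toy features (word length and a frequency proxy) are assumptions for illustration; the paper uses a real Malay corpus, a richer feature set with enhanced stemmer rules, and k=10 (k=5 here for the small sample).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a CWI dataset: each row is one word, described by
# its length and a toy frequency score; the label marks it complex/simple.
rng = np.random.default_rng(0)
n = 200
word_len = rng.integers(2, 15, size=n)
freq = rng.random(n)
X = np.column_stack([word_len, freq])
y = (word_len > 8).astype(int)  # toy rule: long words count as "complex"

# Gradient-boosted trees evaluated with k-fold cross-validation on F1.
clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
```

Because the toy label depends only on word length, the classifier recovers the rule almost perfectly; on real data the F1 reflects how well the features separate genuinely complex words.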