25 research outputs found

    A UMLS-based spell checker for natural language processing in vaccine safety

    BACKGROUND: The Institute of Medicine has identified patient safety as a key goal for health care in the United States. Detecting vaccine adverse events is an important public health activity that contributes to patient safety. Reports about adverse events following immunization (AEFI) from surveillance systems contain free-text components that can be analyzed using natural language processing. To extract Unified Medical Language System (UMLS) concepts from free text and classify AEFI reports based on the concepts they contain, we first needed to clean the text by expanding abbreviations and shortcuts and correcting spelling errors. Our objective in this paper was to create a UMLS-based spelling error correction tool as a first step in the natural language processing (NLP) pipeline for AEFI reports. METHODS: We developed spell-checking algorithms using open source tools. We used de-identified AEFI surveillance reports to create free-text data sets for analysis. After expansion of abbreviated clinical terms and shortcuts, we performed spelling correction in four steps: (1) error detection, (2) word list generation, (3) word list disambiguation, and (4) error correction. We then measured the performance of the resulting spell checker by comparing it to manual correction. RESULTS: We used 12,056 words to train the spell checker and tested its performance on 8,131 words. During testing, sensitivity, specificity, and positive predictive value (PPV) for the spell checker were 74% (95% CI: 74%–75%), 100% (95% CI: 100%–100%), and 47% (95% CI: 46%–48%), respectively. CONCLUSION: We created a prototype spell checker that can be used to process AEFI reports. We used the UMLS SPECIALIST Lexicon as the primary, domain-specific source of dictionary terms against which potentially misspelled words in the corpus were compared, with the WordNet lexicon as a secondary source. The prototype's sensitivity was comparable to currently available tools, while its specificity was much better. The slow processing speed may be improved by trimming the pipeline down to the most useful component algorithms. Other investigators may find the methods we developed useful for cleaning text using lexicons specific to their area of interest.
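
    As an illustration of the four-step flow this abstract describes, the sketch below flags out-of-dictionary tokens, generates look-alike candidates, and substitutes the best match. It is a minimal reconstruction under stated assumptions: the small Python set stands in for the UMLS SPECIALIST Lexicon and WordNet sources, and difflib's similarity ranking replaces the paper's disambiguation step.

        # Minimal sketch of the four-step correction flow (detection, candidate
        # generation, disambiguation, correction); the toy dictionary stands in
        # for the UMLS SPECIALIST Lexicon / WordNet sources named in the paper.
        import difflib

        dictionary = {"fever", "rash", "swelling", "injection", "site", "vaccine"}

        def correct_report(tokens, dictionary):
            corrected = []
            for token in tokens:
                word = token.lower()
                # Step 1: error detection -- any alphabetic, out-of-dictionary token is a suspect.
                if word in dictionary or not word.isalpha():
                    corrected.append(token)
                    continue
                # Step 2: word list generation -- collect similar dictionary entries.
                candidates = difflib.get_close_matches(word, dictionary, n=5, cutoff=0.7)
                # Steps 3-4: disambiguation and correction -- here simply the closest
                # candidate, or the original token if nothing is close enough.
                corrected.append(candidates[0] if candidates else token)
            return corrected

        print(correct_report(["Feverr", "and", "sweling", "at", "injection", "site"], dictionary))
        # -> ['fever', 'and', 'swelling', 'at', 'injection', 'site']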

    Unsupervised Context-Sensitive Spelling Correction of English and Dutch Clinical Free-Text with Word and Character N-Gram Embeddings

    We present an unsupervised context-sensitive spelling correction method for clinical free-text that uses word and character n-gram embeddings. Our method generates misspelling replacement candidates and ranks them according to their semantic fit, by calculating a weighted cosine similarity between the vectorized representation of a candidate and the misspelling context. To tune the parameters of this model, we generate self-induced spelling error corpora. We perform our experiments for two languages. For English, we greatly outperform off-the-shelf spelling correction tools on a manually annotated MIMIC-III test set, and counter the frequency bias of a noisy channel model, showing that neural embeddings can be successfully exploited to improve upon the state-of-the-art. For Dutch, we also outperform an off-the-shelf spelling correction tool on manually annotated clinical records from the Antwerp University Hospital, but can offer no empirical evidence that our method counters the frequency bias of a noisy channel model in this case as well. However, both our context-sensitive model and our implementation of the noisy channel model obtain high scores on the test set, establishing a state of the art for Dutch clinical spelling correction with the noisy channel model. (Comment: Appears in volume 7 of the CLIN Journal, http://www.clinjournal.org/biblio/volum)
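
    The core ranking step, scoring each replacement candidate by weighted cosine similarity to the misspelling context, could be sketched roughly as follows. The distance-based weighting and the plain word-vector lookup are simplifying assumptions; the paper's actual model combines word and character n-gram embeddings with tuned weights.

        # Sketch of context-sensitive candidate ranking; `embeddings` is assumed
        # to map words to numpy vectors, standing in for the paper's embeddings.
        import numpy as np

        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def rank_candidates(candidates, context, position, embeddings):
            """Rank candidates by weighted similarity to the words around the
            misspelling, which sits at index `position` of `context`."""
            scores = {}
            for cand in candidates:
                if cand not in embeddings:
                    continue
                total, weight_sum = 0.0, 0.0
                for i, word in enumerate(context):
                    if i == position or word not in embeddings:
                        continue
                    w = 1.0 / (1.0 + abs(i - position))   # nearer context words weigh more
                    total += w * cosine(embeddings[cand], embeddings[word])
                    weight_sum += w
                scores[cand] = total / weight_sum if weight_sum else 0.0
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)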

    An application of Malay short-form word conversion using Levenshtein distance / Azilawati Azizan ... [et al.]

    Formerly, short-form words were used mainly in journalism. Nowadays, however, they are widely used by many people, especially in online communication. These short-form words cause problems in data mining, particularly in tasks involving online text processing, and lead to inaccurate text mining results. On the other hand, only a few works have investigated Malay short-form word identification and conversion. Therefore, this work aims to develop an application that can identify Malay short-form words and convert them into their full forms. To develop this application, the short-form rules need to be carefully examined. The formal rules from Dewan Bahasa & Pustaka (DBP) are used as the primary reference for generating the short-form word identification algorithm, while Levenshtein distance (LD) is used to measure similarity in the conversion algorithm. A rule-based technique is also used to complement the LD technique. As a result, 70.27% of the Malay short-form words were correctly converted into their full forms. The conversion rate is quite promising, and this work can be further strengthened by incorporating more rules into the algorithm.
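
    A minimal sketch of the conversion step is given below: a standard Levenshtein distance is computed against a lexicon of full Malay words, with a simple first-letter filter as a stand-in for the DBP-derived rules. The lexicon entries and the example short form are illustrative only, not taken from the paper.

        # Short-form expansion by edit distance against a lexicon of full words;
        # the first-letter filter is a placeholder for the paper's rule set.
        def levenshtein(a, b):
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                 # deletion
                                   cur[j - 1] + 1,              # insertion
                                   prev[j - 1] + (ca != cb)))   # substitution
                prev = cur
            return prev[-1]

        def expand(short_form, lexicon):
            # Rule-based pre-filter: keep lexicon words sharing the first letter,
            # then pick the candidate with the smallest edit distance.
            candidates = [w for w in lexicon if w[0] == short_form[0]]
            return min(candidates or lexicon, key=lambda w: levenshtein(short_form, w))

        lexicon = ["tidak", "saya", "awak", "macam", "sekarang"]   # illustrative entries
        print(expand("tak", lexicon))   # -> "tidak"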

    Improving Readability of Swedish Electronic Health Records through Lexical Simplification: First Results

    This paper describes part of an ongoing effort to improve the readability of Swedish electronic health records (EHRs). An EHR contains systematic documentation of a single patient's medical history across time, entered by healthcare professionals with the purpose of enabling safe and informed care. Linguistically, medical records exemplify a highly specialised domain, which can be superficially characterised as having telegraphic sentences involving displaced or missing words, abundant abbreviations, spelling variations including misspellings, and terminology. We report results on lexical simplification of Swedish EHRs, by which we mean detecting unknown, out-of-dictionary words and trying to resolve them as compounds of known words, abbreviations, or misspellings.
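
    The resolution cascade described above, trying an unknown token first as a compound of known words, then as an abbreviation, then as a misspelling, might look roughly like the sketch below. The toy wordlists and the difflib-based misspelling match are assumptions for illustration, not the paper's actual resources.

        # Resolution cascade for an out-of-dictionary token: compound split,
        # abbreviation lookup, then nearest-dictionary-word misspelling match.
        import difflib

        dictionary = {"blod", "tryck", "hjart", "infarkt"}       # toy lexicon
        abbreviations = {"pat": "patient", "beh": "behandling"}  # toy abbreviation list

        def resolve(token):
            word = token.lower()
            if word in dictionary:
                return ("known", word)
            # 1) compound split: any cut point where both halves are known words?
            for i in range(2, len(word) - 1):
                if word[:i] in dictionary and word[i:] in dictionary:
                    return ("compound", (word[:i], word[i:]))
            # 2) abbreviation lookup
            if word in abbreviations:
                return ("abbreviation", abbreviations[word])
            # 3) misspelling: nearest dictionary entry, if any is close enough
            match = difflib.get_close_matches(word, dictionary, n=1, cutoff=0.75)
            return ("misspelling", match[0]) if match else ("unresolved", word)

        print(resolve("blodtryck"))   # -> ('compound', ('blod', 'tryck'))
        print(resolve("pat"))         # -> ('abbreviation', 'patient')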

    Comparison of automatic spell correction methods

    This paper compares three different automatic spelling correction methods, describing their algorithms and how those algorithms were developed. It also contrasts fully automatic spelling correction with conventional spell checking, in which the user chooses among suggested corrections.

    Implementation of a spell checker for indigenous languages of Peru. Case study: Shipibo-Konibo

    Peru has several indigenous languages, such as Shipibo-Konibo, Asháninka and Kakataibo, among others [Rivera, 2001]. These languages are typically passed down from generation to generation through stories, poetry and other oral media, so the way the language is learned varies. This leads to differences in how the language is written between communities, and even between speakers within the same community [Aikman, 1999]. As a result, texts written in these languages, such as Shipibo-Konibo, had no orthographic standard to follow, nor was there any real need to follow one. However, with government support for social inclusion, the programme "Incluir para crecer" [Jara Males, Gonzales Acer, 2015] was introduced, establishing that primary and secondary education in rural areas must be delivered in the local indigenous language in addition to Spanish. This creates a need for teaching resources, since spelling skills are weak owing to the predominantly oral transmission of the language. A national survey [Ministerio de educación del Perú, 2013] further indicates that the use of technology in education has increased across the country, so students could improve their performance with the help of technology if adequate computational resources were available, yielding a positive impact. Against this background, this project addresses the lack of support and the scarcity of spelling-correction resources for speakers of Peru's indigenous languages by implementing a spell checker usable from a web application. To make the checker accessible and reach a wider audience, web services are developed and consumed by the web application, which integrates the spell checker and a user-facing suggestion module.
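
    As a rough illustration of the service layer the thesis describes, the sketch below exposes spelling suggestions over HTTP. Flask, the /suggest endpoint, and the toy Shipibo-Konibo wordlist are assumptions made for illustration; the abstract does not name the actual stack or lexicon.

        # Minimal spelling-suggestion web service; the framework, route, and
        # wordlist are illustrative assumptions, not the thesis's actual design.
        import difflib
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        LEXICON = ["jakon", "noa", "jema", "bake"]   # illustrative entries only

        @app.route("/suggest")
        def suggest():
            word = request.args.get("word", "").lower()
            candidates = difflib.get_close_matches(word, LEXICON, n=3, cutoff=0.6)
            return jsonify({"word": word, "suggestions": candidates})

        if __name__ == "__main__":
            app.run(port=5000)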

    Automatic detection and correction of typos in the text of radiology reports

    Textual errors frequently occur during radiology reporting and require manual correction. This takes time away from the radiologist, and errors that go undetected degrade both the quality of the reports and their suitability for later automated processing. In this paper we address the automatic correction of typos in Hungarian-language spine reports. We describe the methods we used and show that automatic typo correction also substantially improves interpretation. We evaluate our method against manual error correction of 882 real reports.

    Text Mining to Support Knowledge Discovery from Electronic Health Records

    The use of electronic health records (EHRs) has grown rapidly in the last decade. EHRs are no longer used only to store information for clinical purposes; secondary use of the data in healthcare research has increased rapidly as well. The data in EHRs are recorded in a structured manner as much as possible; however, many EHRs also contain large amounts of unstructured free text. Both structured and unstructured clinical data present several challenges to researchers, since the data are not primarily collected for research purposes. Issues with structured data include missing data, noise, and inconsistency. The unstructured free text is even more challenging to use, since it often has no fixed format and may vary from clinician to clinician and from database to database. Text and data mining techniques are increasingly being used to process large EHRs effectively and efficiently for research purposes. Most of the me