
    Task 2: ShARe/CLEF eHealth evaluation lab 2014

    This paper reports on Task 2 of the 2014 ShARe/CLEF eHealth evaluation lab, which extended Task 1 of the 2013 ShARe/CLEF eHealth evaluation lab by focusing on template filling of disorder attributes. The task comprised two subtasks: attribute normalization (task 2a) and cue identification (task 2b). We instructed participants to develop a system which either kept or updated a default attribute value for each task. Participant systems were evaluated against a blind reference standard of 133 discharge summaries using Accuracy (task 2a) and F-score (task 2b). In total, ten teams participated in task 2a, and three teams in task 2b. For tasks 2a and 2b, the HITACHI team systems (run 2) had the highest performances, with an overall average accuracy of 0.868 and F1-score (strict) of 0.676, respectively.
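    As a concrete illustration of the scoring described above, the sketch below computes a slot-level accuracy for attribute normalization and a strict span-level F1 for cue identification. It is a minimal Python sketch assuming simple tuple-keyed data structures, not the official task 2 evaluation script.

```python
# Minimal sketch of the two scores named above (not the official evaluator).
# Task 2a: accuracy over gold attribute slots; task 2b: strict F1 that only
# credits exact cue-span matches.

def accuracy(gold_attrs, pred_attrs):
    """gold_attrs / pred_attrs: dict mapping (doc_id, disorder_id, slot) -> value."""
    correct = sum(1 for key, value in gold_attrs.items()
                  if pred_attrs.get(key) == value)
    return correct / len(gold_attrs) if gold_attrs else 0.0

def strict_f1(gold_spans, pred_spans):
    """gold_spans / pred_spans: sets of (doc_id, start, end) cue offsets."""
    tp = len(gold_spans & pred_spans)
    precision = tp / len(pred_spans) if pred_spans else 0.0
    recall = tp / len(gold_spans) if gold_spans else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```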

    SemClinBr -- a multi-institutional and multi-specialty semantically annotated corpus for Portuguese clinical NLP tasks

    The high volume of research focusing on extracting patients' information from electronic health records (EHR) has led to an increase in the demand for annotated corpora, which are a very valuable resource for both the development and evaluation of natural language processing (NLP) algorithms. The absence of a multi-purpose clinical corpus outside the scope of the English language, especially in Brazilian Portuguese, is glaring and severely impacts scientific progress in the biomedical NLP field. In this study, we developed a semantically annotated corpus using clinical texts from multiple medical specialties, document types, and institutions. We present the following: (1) a survey listing common aspects and lessons learned from previous research, (2) a fine-grained annotation schema which could be replicated and guide other annotation initiatives, (3) a web-based annotation tool focusing on an annotation suggestion feature, and (4) both intrinsic and extrinsic evaluation of the annotations. The result of this work is SemClinBr, a corpus of 1,000 clinical notes labeled with 65,117 entities and 11,263 relations, which can support a variety of clinical NLP tasks and boost the secondary use of EHRs for the Portuguese language.
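    To make the shape of such a corpus concrete, the sketch below shows one possible in-memory representation of a note annotated with entities and relations. The field names and labels are illustrative assumptions, not the actual SemClinBr release format.

```python
# Hypothetical container types for a semantically annotated clinical note;
# the fields and labels are placeholders, not the SemClinBr schema itself.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    entity_id: str
    label: str   # semantic category, e.g. a disorder or procedure tag
    start: int   # character offset in the note text
    end: int
    text: str

@dataclass
class Relation:
    relation_id: str
    label: str   # e.g. "negation_of" (illustrative)
    source: str  # Entity.entity_id
    target: str  # Entity.entity_id

@dataclass
class ClinicalNote:
    note_id: str
    specialty: str
    text: str
    entities: List[Entity] = field(default_factory=list)
    relations: List[Relation] = field(default_factory=list)
```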

    Overview of the ShARe/CLEF eHealth evaluation lab 2013

    Discharge summaries and other free-text reports in healthcare transfer information between working shifts and geographic locations. Patients are likely to have difficulties in understanding their content, because of their medical jargon, non-standard abbreviations, and ward-specific idioms. This paper reports on an evaluation lab whose aim was to support the continuum of care by developing methods and resources that make clinical reports in English easier for patients to understand and help them find information related to their condition. This ShARe/CLEF eHealth 2013 lab offered student mentoring and shared tasks: identification and normalisation of disorders (1a and 1b) and normalisation of abbreviations and acronyms (2) in clinical reports with respect to terminology standards in healthcare, as well as information retrieval (3) to address questions patients may have when reading clinical reports. The focus on patients' information needs, as opposed to the specialised information needs of physicians and other healthcare workers, was the main feature of the lab distinguishing it from previous shared tasks. De-identified clinical reports for the three tasks were from US intensive care and originated from the MIMIC II database. Other text documents for Task 3 were from the Internet and originated from the Khresmoi project. Task 1 annotations originated from the ShARe annotations. For Tasks 2 and 3, new annotations, queries, and relevance assessments were created. 64, 56, and 55 people registered their interest in Tasks 1, 2, and 3, respectively. 34 unique teams (3 members per team on average) participated, with 22, 17, 5, and 9 teams in Tasks 1a, 1b, 2, and 3, respectively. The teams were from Australia, China, France, India, Ireland, the Republic of Korea, Spain, the UK, and the USA. Some teams developed and used additional annotations, but this strategy contributed to system performance only in Task 2. The best systems achieved an F1 score of 0.75 in Task 1a, accuracies of 0.59 and 0.72 in Tasks 1b and 2, and a precision at 10 of 0.52 in Task 3. The results demonstrate the substantial community interest and the capabilities of these systems in making clinical reports easier for patients to understand. The organisers have made data and tools available for future research and development.
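    For the retrieval task, the headline metric is precision at 10, i.e. the fraction of the top ten ranked documents judged relevant. A minimal sketch, assuming a ranked list of document identifiers and a set of relevant ones, is shown below; it is not the lab's official evaluation tooling.

```python
# Minimal sketch of precision at 10 (P@10); not the lab's official scorer.
def precision_at_k(ranked_doc_ids, relevant_doc_ids, k=10):
    """Fraction of the top-k ranked documents that are judged relevant."""
    top_k = ranked_doc_ids[:k]
    return sum(1 for doc_id in top_k if doc_id in relevant_doc_ids) / k

# Example: if 5 of the first 10 results are relevant, P@10 = 0.5.
```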

    Refining manual annotation effort of acoustic data to estimate bird species richness and composition: The role of duration, intensity, and time

    Manually annotating audio files for bird species richness estimation or machine learning validation is a time-intensive task. A premium is therefore placed on selecting the subset of files that most efficiently yields unique additional species for future analyses. Using acoustic data collected in 17 plots, we created 60 subsetting scenarios across three gradients: intensity (minutes in an hour), day phase (dawn, morning, or both), and duration (number of days) for manual annotation. We analyzed the effect of these variables on observed bird species richness and assemblage composition at both the local and entire study area scale. For reference, results were also compared to richness and composition estimated by the traditional point count method. Intensity, day phase, and duration all affected observed richness, in decreasing order of effect. These variables also significantly affected observed assemblage composition (in the same order of effect size), but only day phase produced compositional dissimilarity that was due to phenological traits of individual bird species rather than differences in species richness. All annotation scenarios requiring sampling effort equal to point counts yielded higher species richness than the point count method. Our results show that a great majority of species can be obtained by annotating files at high sampling intensities (every 3 or 6 min) in the morning period (post-dawn) over a duration of two days. Depending on a study's aim, different subsetting parameters will produce different assemblage compositions, potentially omitting rare or crepuscular species, species representing additional functional groups and natural history guilds, or species of higher conservation concern. We do not recommend one particular subsetting regime for all research objectives, but rather present multiple scenarios for researchers to understand how intensity, day phase, and duration interact to identify the best subsetting regime for one's particular research interests.
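    The 60 scenarios arise from crossing the three gradients. The sketch below only illustrates that crossing; the intensity and duration levels used here are placeholder values, not the ones chosen in the study.

```python
# Illustrative enumeration of annotation-subsetting scenarios.
# The day phases are the three named above; the intensity and duration
# levels are hypothetical placeholders, not the study's actual factor levels.
from itertools import product

intensities_min_per_hour = [3, 6, 12, 30]         # hypothetical minutes annotated per hour
day_phases = ["dawn", "morning", "dawn+morning"]  # the three phases described above
durations_days = [1, 2, 4, 7]                     # hypothetical numbers of days

scenarios = [
    {"intensity": i, "phase": p, "duration": d}
    for i, p, d in product(intensities_min_per_hour, day_phases, durations_days)
]
print(len(scenarios), "scenarios")  # 4 * 3 * 4 = 48 with these placeholder levels
```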

    Characterization of patients with idiopathic normal pressure hydrocephalus using natural language processing within an electronic healthcare record system

    OBJECTIVE: Idiopathic normal pressure hydrocephalus (iNPH) is an underdiagnosed, progressive, and disabling condition. Early treatment is associated with better outcomes and improved quality of life. In this paper, the authors aimed to identify features associated with patients with iNPH using natural language processing (NLP) to characterize this cohort, with the intention of later targeting the development of artificial intelligence–driven tools for early detection. / METHODS: The electronic health records of patients with shunt-responsive iNPH were retrospectively reviewed using an NLP algorithm. Participants were selected from a prospectively maintained single-center database of patients undergoing CSF diversion for probable iNPH (March 2008–July 2020). Analysis was conducted on preoperative health records including clinic letters, referrals, and radiology reports accessed through CogStack. Clinical features were extracted from these records as SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms) concepts using a named entity recognition machine learning model. In the first phase, a base model was generated using unsupervised training on 1 million electronic health records and supervised training with 500 double-annotated documents. The model was fine-tuned to improve accuracy using 300 records from patients with iNPH double annotated by two blinded assessors. Thematic analysis of the concepts identified by the machine learning algorithm was performed, and the frequency and timing of terms were analyzed to describe this patient group. / RESULTS: In total, 293 eligible patients responsive to CSF diversion were identified. The median age at CSF diversion was 75 years, with a male predominance (69%). The algorithm performed with a high degree of precision and recall (F1 score 0.92). Thematic analysis revealed the most frequently documented symptoms related to mobility, cognitive impairment, and falls or balance. The most frequent comorbidities were related to cardiovascular and hematological problems. / CONCLUSIONS: This model demonstrates accurate, automated recognition of iNPH features from medical records. Opportunities for translation include detecting patients with undiagnosed iNPH from primary care records, with the aim to ultimately improve outcomes for these patients through artificial intelligence–driven early detection of iNPH and prompt treatment.
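    The thematic frequency analysis described above amounts to counting, per patient, which extracted concepts appear in the preoperative records and then tallying across the cohort. A minimal sketch of that tally is given below; the concept labels are illustrative, not the SNOMED CT terms used in the study.

```python
# Hedged sketch: count how many patients have each concept documented at
# least once in their preoperative records. Concept labels are illustrative.
from collections import Counter

def theme_frequencies(patient_concepts):
    """patient_concepts: dict patient_id -> iterable of concept labels
    extracted from that patient's records. Counts each concept once per patient."""
    counts = Counter()
    for concepts in patient_concepts.values():
        counts.update(set(concepts))
    return counts

example = {
    "p1": {"gait disturbance", "memory impairment", "fall"},
    "p2": {"gait disturbance", "urinary incontinence"},
}
print(theme_frequencies(example).most_common(3))
```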

    Clinical Natural Language Processing in languages other than English: opportunities and challenges

    Background: Natural language processing applied to clinical text or aimed at a clinical outcome has been thriving in recent years. This paper offers the first broad overview of clinical Natural Language Processing (NLP) for languages other than English. Recent studies are summarized to offer insights and outline opportunities in this area. Main Body: We envision three groups of intended readers: (1) NLP researchers leveraging experience gained in other languages, (2) NLP researchers faced with establishing clinical text processing in a language other than English, and (3) clinical informatics researchers and practitioners looking for resources in their languages in order to apply NLP techniques and tools to clinical practice and/or investigation. We review work in clinical NLP in languages other than English. We classify these studies into three groups: (i) studies describing the development of new NLP systems or components de novo, (ii) studies describing the adaptation of NLP architectures developed for English to another language, and (iii) studies focusing on a particular clinical application. Conclusion: We show the advantages and drawbacks of each method, and highlight the appropriate application context. Finally, we identify major challenges and opportunities that will affect the impact of NLP on clinical practice and public health studies in a context that encompasses English as well as other languages.

    Machine Learning and Clinical Text. Supporting Health Information Flow

    Fluent health information flow is critical for clinical decision-making. However, a considerable part of this information is free-form text, and the inability to utilize it creates risks to patient safety and cost-effective hospital administration. Methods for automated processing of clinical text are emerging. The aim of this doctoral dissertation is to study machine learning and clinical text in order to support health information flow. First, by analyzing the content of authentic patient records, the aim is to specify clinical needs in order to guide the development of machine learning applications. The contributions are a model of the ideal information flow, a model of the problems and challenges in reality, and a road map for the technology development. Second, by developing applications for practical cases, the aim is to concretize ways to support health information flow. Altogether five machine learning applications for three practical cases are described: the first two applications are binary classification and regression for the practical case of topic labeling and relevance ranking. The third and fourth applications are supervised and unsupervised multi-class classification for the practical case of topic segmentation and labeling. These four applications are tested with Finnish intensive care patient records. The fifth application is multi-label classification for the practical task of diagnosis coding; it is tested with English radiology reports. The performance of all these applications is promising. Third, the aim is to study how the quality of machine learning applications can be reliably evaluated. The associations between performance evaluation measures and methods are addressed, and a new hold-out method is introduced. This method contributes not only to processing time but also to evaluation diversity and quality. The main conclusion is that developing machine learning applications for text requires interdisciplinary, international collaboration. Practical cases are very different, and hence development must begin from genuine user needs and domain expertise. The technological expertise must cover linguistics, machine learning, and information systems. Finally, the methods must be evaluated both statistically and through authentic user feedback.
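    As a point of reference for the evaluation theme, the sketch below shows a conventional hold-out split for a text classification corpus. It is only the standard baseline procedure, not the new hold-out method introduced in the dissertation.

```python
# Conventional hold-out split (baseline illustration only; this is not the
# dissertation's new hold-out method).
import random

def holdout_split(documents, labels, test_fraction=0.2, seed=42):
    """Shuffle the corpus once and reserve a fixed fraction for testing."""
    indices = list(range(len(documents)))
    random.Random(seed).shuffle(indices)
    cut = int(len(indices) * (1 - test_fraction))
    train_idx, test_idx = indices[:cut], indices[cut:]
    train = ([documents[i] for i in train_idx], [labels[i] for i in train_idx])
    test = ([documents[i] for i in test_idx], [labels[i] for i in test_idx])
    return train, test
```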

    Evaluation of Natural Language Processing for the Identification of Crohn Disease-Related Variables in Spanish Electronic Health Records: A Validation Study for the PREMONITION-CD Project

    Background: The exploration of clinically relevant information in the free text of electronic health records (EHRs) holds the potential to positively impact clinical practice as well as knowledge regarding Crohn disease (CD), an inflammatory bowel disease that may affect any segment of the gastrointestinal tract. The EHRead technology, a clinical natural language processing (cNLP) system, was designed to detect and extract clinical information from narratives in the clinical notes contained in EHRs. Objective: The aim of this study is to validate the performance of the EHRead technology in identifying information of patients with CD. Methods: We used the EHRead technology to explore and extract CD-related clinical information from EHRs. To validate this tool, we compared the output of the EHRead technology with a manually curated gold standard to assess the quality of our cNLP system in detecting records containing any reference to CD and its related variables. Results: The validation metrics for the main variable (CD) were a precision of 0.88, a recall of 0.98, and an F1 score of 0.93. Regarding the secondary variables, we obtained a precision of 0.91, a recall of 0.71, and an F1 score of 0.80 for CD flare, while for the variable vedolizumab (treatment), a precision, recall, and F1 score of 0.86, 0.94, and 0.90 were obtained, respectively. Conclusions: This evaluation demonstrates the ability of the EHRead technology to identify patients with CD and their related variables from the free text of EHRs. To the best of our knowledge, this study is the first to use a cNLP system for the identification of CD in EHRs written in Spanish. © 2022 JMIR Medical Informatics. All rights reserved.
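    The reported F1 scores are consistent with the standard harmonic-mean definition, as the worked check below shows (assuming F1 = 2PR/(P + R), which the abstract does not restate explicitly).

```latex
% Worked check of the reported scores under the standard definition
% F1 = 2PR / (P + R); the values round to the figures quoted above.
\[
  F_1 = \frac{2PR}{P+R}, \qquad
  F_1^{\mathrm{CD}} = \frac{2(0.88)(0.98)}{0.88 + 0.98} \approx 0.93, \qquad
  F_1^{\mathrm{flare}} = \frac{2(0.91)(0.71)}{0.91 + 0.71} \approx 0.80 .
\]
```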