5,664 research outputs found
Advancing Italian Biomedical Information Extraction with Large Language Models: Methodological Insights and Multicenter Practical Application
The introduction of computerized medical records in hospitals has reduced
burdensome operations like manual writing and information fetching. However,
the data contained in medical records remain largely underutilized, primarily
because extracting them from unstructured textual medical records takes time
and effort. Information Extraction, a subfield of Natural Language Processing,
can help clinical practitioners overcome this limitation, using automated
text-mining pipelines. In this work, we created the first Italian
neuropsychiatric Named Entity Recognition dataset, PsyNIT, and used it to
develop a Large Language Model for this task. Moreover, we conducted several
experiments with three external independent datasets to implement an effective
multicenter model, achieving an overall F1-score of 84.77% (Precision 83.16%,
Recall 86.44%). The lessons learned are: (i) the crucial role of a consistent
annotation process and (ii) a fine-tuning strategy that combines classical
methods with a "few-shot" approach. This allowed us to establish methodological
guidelines that pave the way for future implementations in this field and allow
Italian hospitals to tap into important research opportunities.
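The reported scores are internally consistent: F1 is the harmonic mean of precision and recall. A minimal check in Python, using the values from the abstract:

```python
# F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R).
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

p, r = 83.16, 86.44  # reported multicenter Precision and Recall (%)
print(round(f1_score(p, r), 2))  # 84.77, matching the reported F1-score
```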
Natural Language Processing of Clinical Notes on Chronic Diseases: Systematic Review
Novel approaches that complement and go beyond evidence-based medicine are required in the domain of chronic diseases, given the growing incidence of such conditions in the worldwide population. A promising avenue is the secondary use of electronic health records (EHRs), where patient data are analyzed to conduct clinical and translational research. Methods based on machine learning to process EHRs are resulting in improved understanding of patient clinical trajectories and chronic disease risk prediction, creating a unique opportunity to derive previously unknown clinical insights. However, a wealth of clinical histories remains locked behind clinical narratives in free-form text. Consequently, unlocking the full potential of EHR data is contingent on the development of natural language processing (NLP) methods to automatically transform clinical text into structured clinical data that can guide clinical decisions and potentially delay or prevent disease onset.
The Revival of the Notes Field: Leveraging the Unstructured Content in Electronic Health Records
Problem: Clinical practice requires the production of a great amount of time- and resource-consuming notes. They contain relevant information, but their secondary use is almost impossible due to their unstructured nature. Researchers are trying to address these problems with traditional and promising novel techniques; however, application in real hospital settings does not seem possible yet, both because of relatively small and noisy datasets and because of the lack of language-specific pre-trained models.
Aim: Our aim is to demonstrate the potential of the above techniques, but also to raise awareness of the still open challenges that the scientific communities of IT and medical practitioners must jointly address to realize the full potential of the unstructured content that is daily produced and digitized in hospital settings, both to improve its data quality and to leverage the insights from data-driven predictive models.
Methods: To this end, we present a narrative literature review of the most recent and relevant contributions on applying Natural Language Processing techniques to the free-text content of electronic patient records. In particular, we focus on four selected application domains, namely: data quality, information extraction, sentiment analysis and predictive models, and automated patient cohort selection. We then present a few empirical studies that we undertook at a major teaching hospital specializing in musculoskeletal diseases.
Results: We provide the reader with some simple and affordable pipelines that demonstrate the feasibility of reaching literature-level performance with a single-institution, non-English dataset. In this way, we bridge literature and real-world needs, taking a step further toward the revival of the notes field.
Automated Coding of Under-Studied Medical Concept Domains: Linking Physical Activity Reports to the International Classification of Functioning, Disability, and Health
Linking clinical narratives to standardized vocabularies and coding systems
is a key component of unlocking the information in medical text for analysis.
However, many domains of medical concepts lack well-developed terminologies
that can support effective coding of medical text. We present a framework for
developing natural language processing (NLP) technologies for automated coding
of under-studied types of medical information, and demonstrate its
applicability via a case study on physical mobility function. Mobility is a
component of many health measures, from post-acute care and surgical outcomes
to chronic frailty and disability, and is coded in the International
Classification of Functioning, Disability, and Health (ICF). However, mobility
and other types of functional activity remain under-studied in medical
informatics, and neither the ICF nor commonly-used medical terminologies
capture functional status terminology in practice. We investigated two
data-driven paradigms, classification and candidate selection, to link
narrative observations of mobility to standardized ICF codes, using a dataset
of clinical narratives from physical therapy encounters. Recent advances in
language modeling and word embedding were used as features for established
machine learning models and a novel deep learning approach, achieving a macro
F-1 score of 84% on linking mobility activity reports to ICF codes. Both
classification and candidate selection approaches present distinct strengths
for automated coding in under-studied domains, and we highlight that the
combination of (i) a small annotated data set; (ii) expert definitions of codes
of interest; and (iii) a representative text corpus is sufficient to produce
high-performing automated coding systems. This study has implications for the
ongoing growth of NLP tools for a variety of specialized applications in
clinical care and research. Comment: Updated final version, published in
Frontiers in Digital Health, https://doi.org/10.3389/fdgth.2021.620828.
34 pages (23 text + 11 references); 9 figures, 2 tables.
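The macro F-1 reported above averages per-code F1 scores without frequency weighting, so rare ICF codes count as much as common ones. A small illustrative sketch; the code labels below are hypothetical stand-ins, not the paper's actual data:

```python
from collections import defaultdict

# Macro F-1: compute F1 per code, then average unweighted, so rare codes
# count as much as frequent ones. Labels below are hypothetical examples.
def macro_f1(gold: list[str], pred: list[str]) -> float:
    labels = set(gold) | set(pred)
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    f1s = []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

gold = ["d450", "d450", "d455", "d410"]  # hypothetical ICF mobility codes
pred = ["d450", "d455", "d455", "d410"]
print(macro_f1(gold, pred))
```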
Unsupervised extraction, labelling and clustering of segments from clinical notes
This work is motivated by the scarcity of tools for accurate, unsupervised
information extraction from unstructured clinical notes in computationally
underrepresented languages, such as Czech. We introduce a stepping stone to a
broad array of downstream tasks such as summarisation or integration of
individual patient records, extraction of structured information for national
cancer registry reporting or building of semi-structured semantic patient
representations for computing patient embeddings. More specifically, we present
a method for unsupervised extraction of semantically-labelled textual segments
from clinical notes and test it out on a dataset of Czech breast cancer
patients, provided by Masaryk Memorial Cancer Institute (the largest Czech
hospital specialising in oncology). Our goal was to extract, classify (i.e.
label) and cluster segments of the free-text notes that correspond to specific
clinical features (e.g., family background, comorbidities or toxicities). The
presented results demonstrate the practical relevance of the proposed approach
for building more sophisticated extraction and analytical pipelines deployed on
Czech clinical notes. Comment: To be published at the IEEE BIBM 2022 conference.
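The extract-then-label step described above can be pictured with a toy keyword-trigger sketch. This is not the authors' unsupervised method; the trigger terms and feature names below are hypothetical illustrations:

```python
import re

# Toy sketch (not the authors' method): split a free-text note into
# segments at line breaks, then label each segment by matching
# hypothetical trigger keywords for clinical features of interest.
TRIGGERS = {  # hypothetical trigger terms per clinical feature
    "family_background": ["family history", "mother", "father"],
    "comorbidities": ["diabetes", "hypertension"],
    "toxicities": ["nausea", "neutropenia"],
}

def label_segments(note: str) -> list[tuple[str, str]]:
    segments = [s.strip() for s in re.split(r"\n+", note) if s.strip()]
    labelled = []
    for seg in segments:
        low = seg.lower()
        label = next(
            (feat for feat, terms in TRIGGERS.items()
             if any(t in low for t in terms)),
            "unlabelled",
        )
        labelled.append((label, seg))
    return labelled

note = "Family history of breast cancer.\nPatient reports nausea after cycle 2."
for label, seg in label_segments(note):
    print(label, "|", seg)
```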
Mapping of electronic health records in Spanish to the unified medical language system metathesaurus
This work presents a preliminary approach to annotate Spanish electronic health records with concepts of the Unified Medical Language System (UMLS) Metathesaurus. The prototype uses Apache Lucene to index the Metathesaurus and generate mapping candidates from input text. In addition, it relies on UKB to resolve ambiguities. The tool has been evaluated by measuring its agreement with MetaMap on two English-Spanish parallel corpora, one consisting of titles and abstracts of papers in the clinical domain, and the other of real electronic health record excerpts.
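The candidate-generation step can be pictured as lookup over an inverted index of concept strings, standing in for the Apache Lucene index the prototype uses. A minimal sketch; the concept entries below are illustrative, not drawn from the actual Metathesaurus index:

```python
from collections import defaultdict

# Illustrative sketch: a tiny inverted index over concept strings, playing
# the role of the Lucene index. CUIs and terms below are example entries.
CONCEPTS = {
    "C0011849": "diabetes mellitus",
    "C0020538": "hypertensive disease",
    "C0004096": "asthma",
}

index = defaultdict(set)
for cui, term in CONCEPTS.items():
    for tok in term.split():
        index[tok].add(cui)

def candidates(mention: str) -> list[str]:
    # Rank candidate concepts by number of tokens shared with the mention.
    scores = defaultdict(int)
    for tok in mention.lower().split():
        for cui in index.get(tok, ()):
            scores[cui] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(candidates("diabetes"))  # ['C0011849']
```

In the real prototype, ambiguity among the surviving candidates is then resolved by UKB, a graph-based word sense disambiguation tool.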
tieval: An Evaluation Framework for Temporal Information Extraction Systems
Temporal information extraction (TIE) has attracted a great deal of interest
over the last two decades, leading to the development of a significant number
of datasets. Despite its benefits, having access to a large volume of corpora
makes it difficult to benchmark TIE systems. On the one hand,
different datasets have different annotation schemes, thus hindering the
comparison between competitors across different corpora. On the other hand, the
fact that each corpus is commonly disseminated in a different format requires a
considerable engineering effort for a researcher/practitioner to develop
parsers for all of them. This constraint forces researchers to select a limited
number of datasets to evaluate their systems, which consequently limits the
comparability of the systems. Yet another obstacle that hinders the
comparability of the TIE systems is the evaluation metric employed. While most
research works adopt traditional metrics such as precision, recall, and F1,
a few others prefer temporal awareness -- a metric tailored to evaluate
temporal systems more comprehensively. Although the reason for the absence of
temporal awareness in the evaluation of most systems is not clear, one factor
that certainly weighs on this decision is the need to implement the temporal
closure algorithm in order to compute temporal awareness, which is neither
straightforward to implement nor readily available. All in all, these problems
have limited the fair comparison
between approaches and consequently, the development of temporal extraction
systems. To mitigate these problems, we have developed tieval, a Python library
that provides a concise interface for importing different corpora and
facilitates system evaluation. In this paper, we present the first public
release of tieval and highlight its most relevant features. Comment: 10 pages.
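The temporal closure computation mentioned above amounts to saturating the asserted relations with their implied consequences. A minimal sketch for the BEFORE relation only; full closure algorithms handle many more relation types and their composition rules:

```python
# Minimal sketch of the transitive-closure step behind temporal awareness:
# from asserted BEFORE relations, infer every implied BEFORE pair.
# Real temporal closure also composes relations like AFTER and INCLUDES.
def before_closure(pairs: set[tuple[str, str]]) -> set[tuple[str, str]]:
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

asserted = {("e1", "e2"), ("e2", "t1")}  # e1 BEFORE e2, e2 BEFORE t1
print(before_closure(asserted))  # also contains the inferred ('e1', 't1')
```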
Clinical Natural Language Processing in languages other than English: opportunities and challenges
Background: Natural language processing applied to clinical text or aimed at a clinical outcome has been thriving in recent years. This paper offers the first broad overview of clinical Natural Language Processing (NLP) for languages other than English. Recent studies are summarized to offer insights and outline opportunities in this area.
Main Body: We envision three groups of intended readers: (1) NLP researchers leveraging experience gained in other languages, (2) NLP researchers faced with establishing clinical text processing in a language other than English, and (3) clinical informatics researchers and practitioners looking for resources in their languages in order to apply NLP techniques and tools to clinical practice and/or investigation. We review work in clinical NLP in languages other than English and classify these studies into three groups: (i) studies describing the development of new NLP systems or components de novo, (ii) studies describing the adaptation of NLP architectures developed for English to another language, and (iii) studies focusing on a particular clinical application.
Conclusion: We show the advantages and drawbacks of each method and highlight the appropriate application context. Finally, we identify major challenges and opportunities that will affect the impact of NLP on clinical practice and public health studies in a context that encompasses English as well as other languages.