Extreme multi-label deep neural classification of Spanish health records according to the International Classification of Diseases
111 p.

This thesis deals with clinical text mining, a field of Natural Language Processing applied to the biomedical domain. The objective is to automate the task of medical coding. Electronic health records (EHRs) are documents containing clinical information about a patient's health. The diagnoses and medical procedures recorded in the EHR are coded according to the International Classification of Diseases (ICD). Indeed, the ICD is the basis for compiling international health statistics and the standard for reporting diseases and health conditions. From a machine learning perspective, the objective is to solve an extreme multi-label text classification problem, since each health record is assigned multiple ICD codes from a set of more than 70,000 diagnostic terms. Substantial resources are devoted to medical coding, a laborious task currently performed manually: EHRs are lengthy narratives, and medical coders review the records written by physicians and assign the corresponding ICD codes. The texts are technical, as physicians use specialised medical jargon that is nevertheless rich in abbreviations, acronyms and spelling errors, since physicians document the records in the course of actual clinical practice. To address the automatic classification of health records, we investigate and develop a set of deep learning text classification techniques.
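The extreme multi-label setting described above can be sketched in a few lines: each health record maps to a binary target vector over the ICD label set, and a model emits one independent sigmoid score per code. The codes, logits, and threshold below are toy placeholders, not data or parameters from the thesis.

```python
# Minimal sketch of multi-label ICD prediction: one independent sigmoid per
# label, thresholded at 0.5. A real system scores >70,000 labels at once.
import math

def binarize(assigned_codes, label_vocab):
    """Turn the set of ICD codes of one record into a 0/1 target vector."""
    return [1 if code in assigned_codes else 0 for code in label_vocab]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(logits, label_vocab, threshold=0.5):
    """Keep every code whose sigmoid score reaches the threshold."""
    return {code for code, z in zip(label_vocab, logits) if sigmoid(z) >= threshold}

label_vocab = ["I10", "E11.9", "J45.909"]          # tiny stand-in label set
target = binarize({"I10", "J45.909"}, label_vocab)  # -> [1, 0, 1]
decoded = predict([2.3, -1.7, 0.4], label_vocab)    # sigmoid(0.4) ≈ 0.60
```

Unlike multi-class softmax, the per-label sigmoids are not forced to compete, which is what lets a single record carry several codes at once.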
Robust input representations for low-resource information extraction
Recent advances in the field of natural language processing were achieved with deep learning models. This led to a wide range of new research questions concerning the stability of such large-scale systems and their applicability beyond well-studied tasks and datasets, such as information extraction in non-standard domains and languages, in particular in low-resource environments. In this work, we address these challenges and make important contributions across fields such as representation learning and transfer learning by proposing novel model architectures and training strategies to overcome existing limitations, including a lack of training resources, domain mismatches and language barriers. In particular, we propose solutions to close the domain gap between representation models, e.g. through domain-adaptive pre-training or our novel meta-embedding architecture for creating a joint representation of multiple embedding methods. Our broad set of experiments demonstrates state-of-the-art performance of our methods on various sequence tagging and classification tasks and highlights their robustness in challenging low-resource settings across languages and domains.
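The meta-embedding idea mentioned above, joining the representations a word receives from several embedding methods, can be illustrated with its simplest baseline: concatenation. The thesis proposes a more elaborate architecture; this sketch only shows the underlying idea, and the vocabularies and vectors are invented toy values.

```python
# Concatenation meta-embedding: look the word up in each embedding model and
# join the vectors into one representation. Out-of-vocabulary words fall back
# to a zero vector of that model's dimensionality.

def concat_meta_embedding(word, models, dims):
    """models: list of dicts word -> vector; dims: dimensionality per model."""
    joint = []
    for model, dim in zip(models, dims):
        joint.extend(model.get(word, [0.0] * dim))
    return joint

general = {"fever": [0.1, 0.2]}             # e.g. general-domain embeddings
domain = {"fever": [0.9], "icd": [0.5]}     # e.g. clinical-domain embeddings
vec = concat_meta_embedding("fever", [general, domain], [2, 1])  # [0.1, 0.2, 0.9]
```

The zero-vector fallback is one reason meta-embeddings help in low-resource settings: a word missing from one vocabulary can still receive signal from the others.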
Using machine learning for automated de-identification and clinical coding of free text data in electronic medical records
The widespread adoption of Electronic Medical Records (EMRs) in hospitals continues to increase the amount of patient data that are digitally stored. Although the primary use of the EMR is to support patient care by making all relevant information accessible, governments and health organisations are looking for ways to unleash the potential of these data for secondary purposes, including clinical research, disease surveillance and automation of healthcare processes and workflows.
EMRs include large quantities of free text documents that contain valuable information. The greatest challenges in using the free text data in EMRs include the removal of personally identifiable information and the extraction of relevant information for specific tasks such as clinical coding. Machine learning-based automated approaches can potentially address these challenges.
This thesis aims to explore and improve the performance of machine learning models for automated de-identification and clinical coding of free text data in EMRs, as captured in hospital discharge summaries, and facilitate the applications of these approaches in real-world use cases. It does so by 1) implementing an end-to-end de-identification framework using an ensemble of deep learning models; 2) developing a web-based system for de-identification of free text (DEFT) with an interactive learning loop; 3) proposing and implementing a hierarchical label-wise attention transformer model (HiLAT) for explainable International Classification of Diseases (ICD) coding; and 4) investigating the use of extreme multi-label long text transformer-based models for automated ICD coding.
The key findings include: 1) An end-to-end framework using an ensemble of deep learning base models achieved excellent performance on the de-identification task. 2) A new web-based de-identification software system (DEFT) can be readily and easily adopted by data custodians and researchers to perform de-identification of free text in EMRs. 3) A novel domain-specific transformer-based model (HiLAT) achieved state-of-the-art (SOTA) results for predicting ICD codes on a Medical Information Mart for Intensive Care (MIMIC-III) dataset comprising the discharge summaries (n=12,808) coded with at least one of the 50 most frequent diagnosis and procedure codes. In addition, the label-wise attention scores for the tokens in the discharge summary provide a potential explainability tool for checking the face validity of ICD code predictions. 4) An optimised transformer-based model, PLM-ICD, achieved the latest SOTA results for ICD coding on all the discharge summaries of the MIMIC-III dataset (n=59,652). The segmentation method, which splits the long text consecutively into multiple small chunks, addresses the problem of applying transformer-based models to long text datasets. However, using transformer-based models on extremely large label sets needs further research.
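The segmentation step described in finding 4 amounts to splitting a long token sequence into consecutive fixed-size chunks that each fit a transformer's input limit. A minimal sketch, with a chunk size of 4 standing in for the usual limit of, e.g., 512 tokens:

```python
# Consecutive chunking of a long token sequence; the last chunk may be shorter.

def split_into_chunks(tokens, chunk_size):
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]

tokens = ["patient", "admitted", "with", "acute", "chest", "pain", "."]
chunks = split_into_chunks(tokens, 4)
# -> [['patient', 'admitted', 'with', 'acute'], ['chest', 'pain', '.']]
```

Each chunk is then encoded separately and the chunk representations are aggregated before the label layer, which is what makes discharge-summary-length inputs tractable.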
These findings demonstrate that the de-identification and clinical coding tasks can benefit from the application of machine learning approaches, present practical tools for implementing these approaches, and highlight priorities for further research.
Biomedical entities recognition in Spanish combining word embeddings
Named Entity Recognition (NER) is an important task in the field of Natural Language Processing that is used to extract meaningful knowledge from textual documents. The goal of NER is to identify text fragments that refer to specific entities.

In this thesis we aim to address the task of NER in the Spanish biomedical domain. In this domain, entities can refer to drug, symptom and disease names and offer valuable knowledge to health experts. For this purpose, we propose a model based on neural networks and employ a combination of word embeddings. In addition, we generate new domain- and language-specific embeddings to test their effectiveness. Finally, we show that the combination of different word embeddings as input to the neural network improves the state-of-the-art results in the applied scenarios.

Thesis, Univ. Jaén, Departamento de Informática. Defended on 22 April 2021.
Contributions to information extraction for Spanish written biomedical text
285 p.

Healthcare practice and clinical research produce vast amounts of digitised, unstructured data in multiple languages that are currently underexploited, despite their potential applications in improving healthcare experiences, supporting trainee education, or enabling biomedical research, for example. To automatically transform those contents into relevant, structured information, advanced Natural Language Processing (NLP) mechanisms are required. In NLP, this task is known as Information Extraction. Our work takes place within this growing field of clinical NLP for the Spanish language, as we tackle three distinct problems. First, we compare several supervised machine learning approaches to the problem of sensitive data detection and classification. Specifically, we study the different approaches and their transferability in two corpora, one synthetic and the other authentic. Second, we present and evaluate UMLSmapper, a knowledge-intensive system for biomedical term identification based on the UMLS Metathesaurus. This system recognises and codifies terms without relying on annotated data or external Named Entity Recognition tools. Although technically naive, it performs on par with more evolved systems and does not deviate considerably from other approaches that rely on oracle terms. Finally, we present and exploit a new corpus of real health records manually annotated with negation and uncertainty information: NUBes. This corpus is the basis for two sets of experiments, one on cue and scope detection, and the other on assertion classification. Throughout the thesis, we apply and compare techniques of varying levels of sophistication and novelty, which reflects the rapid advancement of the field.
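The abstract describes UMLSmapper as matching text spans against a terminology rather than relying on a trained NER model. A greedy longest-match dictionary lookup is a minimal sketch of that idea; the two-entry lexicon below is an invented toy stand-in for the UMLS Metathesaurus.

```python
# Greedy longest-match term identification: scan left to right, at each
# position preferring the longest token span found in the lexicon.

def match_terms(tokens, lexicon, max_len=5):
    found, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            span = " ".join(tokens[i:i + n]).lower()
            if span in lexicon:          # longest match wins
                found.append((span, lexicon[span]))
                i += n
                break
        else:                            # no match at this position
            i += 1
    return found

lexicon = {"dolor de cabeza": "C0018681", "fiebre": "C0015967"}  # toy concept IDs
hits = match_terms("paciente con dolor de cabeza y fiebre".split(), lexicon)
# -> [('dolor de cabeza', 'C0018681'), ('fiebre', 'C0015967')]
```

Because the matcher needs only a term-to-concept dictionary, it requires no annotated training data, which is the trade-off the abstract highlights.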
Towards a system of concepts for Family Medicine. Multilingual indexing in General Practice/ Family Medicine in the era of Semantic Web
UNIVERSITY OF LIÈGE, BELGIUM
Executive Summary
Faculty of Medicine
Département Universitaire de Médecine Générale.
Unité de recherche Soins Primaires et Santé
Doctor in biomedical sciences
by Dr. Marc JAMOULLE
Introduction

This thesis is about giving visibility to the often overlooked work of family physicians and, consequently, about grey literature in General Practice and Family Medicine (GP/FM). It often seems that conference organizers do not think of GP/FM as a knowledge-producing discipline that deserves active dissemination. A conference is organized, but not much is done with the knowledge shared at these meetings. In turn, the knowledge cannot be reused or reapplied. This thesis is also about indexing: to retrieve knowledge, indexing is mandatory. We must prepare tools that will automatically index the thousands of abstracts that family doctors produce each year in various languages. And finally, this work is about semantics¹. It is an introduction to health terminologies, ontologies, semantic data, and linked open data, all expressions of the next step: the Semantic Web for health care data. Concepts, units of thought expressed by terms, will be our target and must have the ability to be expressed in multiple languages. In turn, three areas of knowledge are at stake in this study: (i) Family Medicine as a pillar of primary health care, (ii) computational linguistics, and (iii) health information systems.
Aim

• To identify knowledge produced by General Practitioners (GPs) by improving annotation of grey literature in Primary Health Care
• To propose an experimental indexing system, acting as a draft for a standardized table of contents of GP/FM
• To improve the searchability of repositories for grey literature in GP/FM.
¹ For specific terms, see the Glossary, page 257.
Methods

The first step aimed to design the taxonomy by identifying relevant concepts in a compiled corpus of GP/FM texts. We studied the concepts identified in nearly two thousand communications of GPs during conferences. The relevant concepts belong to fields focusing on GP/FM activities (e.g. teaching, ethics, management, or environmental hazard issues).

The second step was the development of an on-line, multilingual terminological resource for each category of the resulting taxonomy; these categories are named Q-Codes. We designed this terminology in the form of a lightweight ontology, accessible on-line for readers and ready for use by computers of the Semantic Web. It is also fit for the Linked Open Data universe.
Results

We propose 182 Q-Codes in an on-line multilingual database covering 10 languages (www.hetop.eu/Q), each acting as a filter for Medline. Q-Codes are also available in the form of Uniform Resource Identifiers (URIs) and are exportable in the Web Ontology Language (OWL). The International Classification of Primary Care (ICPC) is linked to the Q-Codes to form the Core Content Classification in General Practice/Family Medicine (3CGP). So far, 3CGP is used by humans in pedagogy, in bibliographic studies, and in indexing congress abstracts, master's theses, and other forms of grey literature in GP/FM. Use by computers is being explored in automatic classifiers, annotators, and natural language processing.
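The Results state that each Q-Code is published as a URI and exportable in OWL. As an illustration of what one such linked-data record could look like, the sketch below emits RDF triples in Turtle syntax; the code "QC1", its labels, and the ICPC link "A10" are hypothetical examples, not entries from the actual terminology.

```python
# Serialise one hypothetical Q-Code as Turtle: a URI under the database's
# namespace, multilingual SKOS preferred labels, and a link to an ICPC code.

def qcode_to_turtle(code, labels, icpc_link):
    """labels: dict language-tag -> preferred label; icpc_link: ICPC rubric."""
    uri = f"<http://www.hetop.eu/Q/{code}>"
    lines = [f"{uri} a owl:Class ;"]
    for lang, label in sorted(labels.items()):
        lines.append(f'    skos:prefLabel "{label}"@{lang} ;')
    lines.append(f"    skos:relatedMatch icpc:{icpc_link} .")
    return "\n".join(lines)

ttl = qcode_to_turtle("QC1",
                      {"en": "Doctor-patient relation",
                       "fr": "Relation médecin-patient"},
                      "A10")
```

Publishing each concept as a dereferenceable URI with language-tagged labels is what makes the terminology usable both by human readers and by Semantic Web software.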
Discussion

To the best of our knowledge, this is the first attempt to expand the ICPC coding system with an extension for family physicians' contextual issues, thus covering the non-clinical content of practice. It remains to be proven that our proposed terminology will help in dealing with more complex systems, such as MeSH, to support information storage and retrieval activities. However, this exercise is proposed as a first step in the creation of an ontology of GP/FM and as an opening to the complex world of Semantic Web technologies.
Conclusion

We expect that the creation of this terminological resource for indexing abstracts and for facilitating Medline searches by general practitioners, researchers, and students in medicine will reduce the loss of knowledge in the domain of GP/FM. In addition, through better indexing of the grey literature (congress abstracts, master's and doctoral theses), we hope to enhance the accessibility of research results and give visibility to the invisible work of family physicians.
Proceedings of the Seventh Italian Conference on Computational Linguistics CLiC-it 2020
On behalf of the Program Committee, a very warm welcome to the Seventh Italian Conference on Computational Linguistics (CLiC-it 2020). This edition of the conference is held in Bologna and organised by the University of Bologna. The CLiC-it conference series is an initiative of the Italian Association for Computational Linguistics (AILC) which, after six years of activity, has clearly established itself as the premier national forum for research and development in the fields of Computational Linguistics and Natural Language Processing, where leading researchers and practitioners from academia and industry meet to share their research results, experiences, and challenges.