A method for encoding clinical datasets with SNOMED CT
Background
Over the past decade there has been a growing body of literature on how the Systematised Nomenclature of Medicine Clinical Terms (SNOMED CT) can be implemented and used in different clinical settings. Yet, for those charged with incorporating SNOMED CT into their organisation's clinical applications and vocabulary systems, there are few detailed encoding instructions and examples available to show how this can be done and the issues involved. This paper describes a heuristic method that can be used to encode clinical terms in SNOMED CT and illustrates how it was applied to encode an existing palliative care dataset.
Methods
The encoding process involves identifying input data items, cleaning them, encoding the cleaned data items, and exporting the encoded terms as output term sets. Four outputs are produced: the SNOMED CT reference set, the interface terminology set, the SNOMED CT extension set, and the unencodeable term set.
Results
The original palliative care database contained 211 data elements, 145 coded values and 37,248 free-text values. We were able to encode ~84% of the terms; another ~8% require further encoding and verification, while terms with a frequency of fewer than five were not encoded (~7%).
Conclusions
From the pilot, our SNOMED CT encoding method appears to have the potential to become a general-purpose terminology encoding approach usable in different clinical systems.
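The four-stage process above (identify, clean, encode, export) and the routing of terms into output sets could be sketched roughly as follows. The cleaning rules are illustrative, and the two-entry lookup table is a hypothetical stand-in for a real SNOMED CT release; it is not the paper's actual encoding logic.

```python
import re

# Hypothetical miniature SNOMED CT lookup: cleaned term -> concept ID.
SNOMED_LOOKUP = {
    "pain": "22253000",
    "nausea": "422587007",
}

def clean(term: str) -> str:
    """Normalize a raw data item: trim, lowercase, collapse whitespace."""
    return re.sub(r"\s+", " ", term.strip().lower())

def encode_dataset(raw_terms):
    """Route each cleaned term into an output term set:
    encodeable terms go to the reference set, the rest are flagged."""
    reference_set, unencodeable = {}, []
    for raw in raw_terms:
        term = clean(raw)
        concept_id = SNOMED_LOOKUP.get(term)
        if concept_id:
            reference_set[term] = concept_id
        else:
            unencodeable.append(term)
    return reference_set, unencodeable

ref, missing = encode_dataset(["  Pain ", "NAUSEA", "itching"])
# ref == {"pain": "22253000", "nausea": "422587007"}; missing == ["itching"]
```

A real pipeline would also populate the interface terminology and extension sets described in the paper; this sketch only shows the reference/unencodeable split.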
Ontology-Based Clinical Information Extraction Using SNOMED CT
Extracting and encoding clinical information captured in unstructured clinical documents with standard medical terminologies is vital to enable secondary use of clinical data from practice. SNOMED CT is the most comprehensive medical ontology, covering a broad range of concept types and detailed relationships, and it has been widely used in many clinical applications. However, few studies have investigated the use of SNOMED CT in clinical information extraction.
In this dissertation research, we developed a fine-grained information model based on SNOMED CT and built novel information extraction systems to recognize clinical entities, identify their relations, and encode them to SNOMED CT concepts. Our evaluation shows that such ontology-based information extraction systems using SNOMED CT can achieve state-of-the-art performance, indicating their potential in clinical natural language processing.
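As a rough illustration of the simplest form of entity recognition with SNOMED CT encoding (a dictionary match, far simpler than the models developed in the dissertation), one might write:

```python
import re

# Toy concept dictionary; the two entries are illustrative,
# not drawn from a full SNOMED CT release.
CONCEPTS = {
    "chest pain": "29857009",
    "fever": "386661006",
}

def extract_entities(text: str):
    """Find dictionary mentions in free text and attach SNOMED CT codes."""
    found = []
    lowered = text.lower()
    for term, code in CONCEPTS.items():
        for m in re.finditer(re.escape(term), lowered):
            found.append({"mention": text[m.start():m.end()],
                          "span": (m.start(), m.end()),
                          "concept_id": code})
    return sorted(found, key=lambda e: e["span"])

ents = extract_entities("Patient reports chest pain and fever.")
# Two entities found, each with its span and SNOMED CT concept ID.
```

Modern systems replace the dictionary with learned entity recognizers and add relation identification, but the output shape (mention, span, concept ID) is essentially the same.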
Knowledge Graph Embeddings for Multi-Lingual Structured Representations of Radiology Reports
The way we analyse clinical texts has undergone major changes over the last few years. The introduction of language models such as BERT led to adaptations for the (bio)medical domain like PubMedBERT and ClinicalBERT. These models rely on large databases of archived medical documents. While performing well in terms of accuracy, both their lack of interpretability and their limited transferability across languages restrict their use in clinical settings. We introduce a novel lightweight graph-based embedding method specifically catering to radiology reports. It takes into account the structure and composition of the report, while also connecting medical terms in the report through the multi-lingual SNOMED Clinical Terms knowledge base. The resulting graph embedding uncovers the underlying relationships among clinical terms, achieving a representation that is more interpretable for clinicians and clinically more accurate, without reliance on large pre-training datasets. We show the use of this embedding on two tasks, namely disease classification of X-ray reports and image classification. For disease classification our model is competitive with its BERT-based counterparts, while being orders of magnitude smaller in size and training-data requirements. For image classification, we show the effectiveness of the graph embedding in leveraging cross-modal knowledge transfer, and show how this method is usable across different languages.
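A report-as-graph representation of the kind described above could be sketched as follows. The section layout and the SNOMED "is-a" links are hand-made illustrations, not the paper's actual construction, which draws on the full knowledge base.

```python
from collections import defaultdict

def build_report_graph(sections, snomed_isa):
    """Nodes are report sections and medical terms; edges follow the
    report's structure, plus links between terms that share a SNOMED
    parent concept."""
    graph = defaultdict(set)
    terms = set()
    for section, section_terms in sections.items():
        for t in section_terms:
            graph[section].add(t)
            graph[t].add(section)
            terms.add(t)
    # Connect terms with a common SNOMED parent concept.
    for a in terms:
        for b in terms:
            parent = snomed_isa.get(a)
            if a != b and parent is not None and parent == snomed_isa.get(b):
                graph[a].add(b)
    return graph

sections = {"findings": ["pleural effusion", "consolidation"],
            "impression": ["pneumonia"]}
isa = {"pleural effusion": "lung finding", "consolidation": "lung finding"}
g = build_report_graph(sections, isa)
# "pleural effusion" and "consolidation" are linked via their shared parent.
```

Because the terminology layer is multi-lingual, the same graph structure can be built from a report written in another language, which is what makes the embedding transferable.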
Sampled in Pairs and Driven by Text: A New Graph Embedding Framework
In graphs with rich texts, incorporating textual information with structural information benefits the construction of expressive graph embeddings. Among various graph embedding models, random walk (RW)-based models form one of the most popular and successful groups. However, they are challenged by two issues when applied to graphs with rich texts: (i) sampling efficiency: deriving from the training objective of RW-based models (e.g., DeepWalk and node2vec), we show that RW-based models are likely to generate large amounts of redundant training samples due to three main drawbacks; (ii) text utilization: these models have difficulty dealing with zero-shot scenarios where graph embedding models have to infer graph structures directly from texts. To solve these problems, we propose a novel framework, namely Text-driven Graph Embedding with Pairs Sampling (TGE-PS). TGE-PS uses Pairs Sampling (PS) to improve the sampling strategy of RW, reducing training samples by ~99% while preserving competitive performance. TGE-PS uses Text-driven Graph Embedding (TGE), an inductive graph embedding approach, to generate node embeddings from texts. Since each node contains rich texts, TGE is able to generate high-quality embeddings and provide reasonable predictions on the existence of links to unseen nodes. We evaluate TGE-PS on several real-world datasets, and experimental results demonstrate that TGE-PS produces state-of-the-art results on both traditional and zero-shot link prediction tasks. (Accepted by WWW 2019, The World Wide Web Conference, ACM, 2019.)
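The core idea of pairs sampling, drawing (node, context) training pairs directly rather than materializing full random walks, can be illustrated with a toy sketch. This is not the paper's PS algorithm, which uses a more refined strategy; it only shows the contrast with walk-based sample generation.

```python
import random

def pairs_sampling(graph, num_pairs, seed=0):
    """Draw (node, neighbor) training pairs directly from the graph,
    instead of generating long random walks and sliding a context
    window over them (which yields many redundant pairs)."""
    rng = random.Random(seed)
    nodes = [n for n in graph if graph[n]]  # skip isolated nodes
    pairs = []
    for _ in range(num_pairs):
        u = rng.choice(nodes)
        v = rng.choice(sorted(graph[u]))
        pairs.append((u, v))
    return pairs

graph = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}, "d": set()}
samples = pairs_sampling(graph, 5)
# Each sample is an edge-adjacent (node, context) pair; isolated "d" is skipped.
```

Each pair here is usable as one skip-gram-style training example; the walk-free formulation is what makes the drastic reduction in sample count possible.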
SNOMED CT standard ontology based on the ontology for general medical science
Background: Systematized Nomenclature of Medicine—Clinical Terms (SNOMED CT, hereafter abbreviated SCT) is a comprehensive medical terminology used for standardizing the storage, retrieval, and exchange of electronic health data. Some efforts have been made to capture the contents of SCT as Web Ontology Language (OWL), but these efforts have been hampered by the size and complexity of SCT.
Method: Our proposal here is to develop an upper-level ontology and to use it as the basis for defining the terms in SCT in a way that will support quality assurance of SCT, for example, by allowing consistency checks of definitions and the identification and elimination of redundancies in the SCT vocabulary. Our proposed upper-level SCT ontology (SCTO) is based on the Ontology for General Medical Science (OGMS).
Results: The SCTO is implemented in OWL 2, to support automatic inference and consistency checking. The approach will allow integration of SCT data with data annotated using Open Biomedical Ontologies (OBO) Foundry ontologies, since the use of OGMS will ensure consistency with the Basic Formal Ontology, which is the top-level ontology of the OBO Foundry. Currently, the SCTO contains 304 classes, 28 properties, 2400 axioms, and 1555 annotations. It is publicly available through BioPortal at http://bioportal.bioontology.org/ontologies/SCTO/.
Conclusion: The resulting ontology can enhance the semantics of clinical decision support systems and semantic interoperability among distributed electronic health records. In addition, the populated ontology can be used for the automation of mobile health applications.
Analyzing transfer learning impact in biomedical cross-lingual named entity recognition and normalization
Background
The volume of biomedical literature and clinical data is growing at an exponential rate. Therefore, efficient access to data described in unstructured biomedical texts is a crucial task for the biomedical industry and research. Named Entity Recognition (NER) is the first step for information and knowledge acquisition when we deal with unstructured texts. Recent NER approaches use contextualized word representations as input for a downstream classification task. However, distributed word vectors (embeddings) are very limited in Spanish and even more for the biomedical domain.
Methods
In this work, we develop several biomedical Spanish word representations, and we introduce two Deep Learning approaches for the recognition of pharmaceutical, chemical, and other biomedical entities in Spanish clinical case texts and biomedical texts, one based on a Bi-LSTM-CRF model and the other on a BERT-based architecture.
Results
Several Spanish biomedical embeddings together with the two deep learning models were evaluated on the PharmaCoNER and CORD-19 datasets. The PharmaCoNER dataset is composed of a set of Spanish clinical cases annotated with drugs, chemical compounds and pharmacological substances; our extended Bi-LSTM-CRF model obtains an F-score of 85.24% on entity identification and classification, and the BERT model obtains an F-score of 88.80%. For the entity normalization task, the extended Bi-LSTM-CRF model achieves an F-score of 72.85% and the BERT model achieves 79.97%. The CORD-19 dataset consists of scholarly articles written in English annotated with biomedical concepts such as disorder, species, chemical or drug, gene and protein, enzyme and anatomy. On CORD-19, the Bi-LSTM-CRF and BERT models obtain F-measures of 78.23% and 78.86%, respectively, on entity identification and classification.
Conclusion
These results show that deep learning models with in-domain knowledge learned from large-scale datasets substantially improve named entity recognition performance. Moreover, contextualized representations help to handle the complexity and ambiguity inherent in biomedical texts. Word-, concept-, and sense-based embeddings for languages other than English are required to improve NER in those languages. This work was partially supported by the Research Program of the Ministry of Economy and Competitiveness, Government of Spain (DeepEMR project TIN2017-87548-C2-1-R).
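The F-scores reported above combine precision and recall; for entity identification they are conventionally computed over exact-match spans, along these lines:

```python
def f_score(gold, predicted):
    """Micro F1 over exact-match entity spans.
    Entities are (start, end, label) tuples."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two gold entities, two predictions, one of them correct:
score = f_score({(0, 4, "DRUG"), (10, 15, "CHEM")},
                {(0, 4, "DRUG"), (20, 25, "CHEM")})
# precision = recall = 0.5, so F1 = 0.5
```

Shared tasks such as PharmaCoNER typically use exactly this exact-match convention, so partial overlaps count as errors.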
FRASIMED: a Clinical French Annotated Resource Produced through Crosslingual BERT-Based Annotation Projection
Natural language processing (NLP) applications such as named entity recognition (NER) for low-resource corpora do not fully benefit from recent advances in large language models (LLMs), as there is still a need for larger annotated datasets. This research article introduces a methodology for generating translated versions of annotated datasets through crosslingual annotation projection. Leveraging a language-agnostic BERT-based approach, it offers an efficient way to enlarge low-resource corpora with little human effort, using only already available open data resources. Quantitative and qualitative evaluations are often lacking when it comes to assessing the quality and effectiveness of semi-automatic data generation strategies. The evaluation of our crosslingual annotation projection approach showed both effectiveness and high accuracy in the resulting dataset. As a practical application of this methodology, we present the creation of the French Annotated Resource with Semantic Information for Medical Entities Detection (FRASIMED), an annotated corpus comprising 2,051 synthetic clinical cases in French. The corpus is now available for researchers and practitioners to develop and refine French NLP applications in the clinical field (https://zenodo.org/record/8355629), making it the largest open annotated corpus with linked medical concepts in French.
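Annotation projection as described above can be illustrated with a minimal sketch: a labeled source span is mapped into the target sentence through a word-alignment table. The alignment here is hand-made for illustration; the paper derives it from a language-agnostic BERT-based model.

```python
def project_annotation(src_tokens, tgt_tokens, alignment, src_span, label):
    """Project a labeled source-token span (start, end) onto the target
    sentence via (src_index, tgt_index) alignment pairs. Returns the
    contiguous target span covering all aligned tokens."""
    tgt_indices = sorted(
        j for i, j in alignment if src_span[0] <= i < src_span[1])
    if not tgt_indices:
        return None  # nothing aligned: annotation cannot be projected
    start, end = tgt_indices[0], tgt_indices[-1] + 1
    return {"tokens": tgt_tokens[start:end], "span": (start, end),
            "label": label}

src = ["The", "patient", "has", "lung", "cancer"]
tgt = ["Le", "patient", "a", "un", "cancer", "du", "poumon"]
align = [(0, 0), (1, 1), (2, 2), (3, 6), (4, 4)]
proj = project_annotation(src, tgt, align, (3, 5), "DISORDER")
# The English span "lung cancer" projects to "cancer du poumon".
```

Note that the projected span is taken as the contiguous range from the first to the last aligned token, which handles the word-order change between "lung cancer" and "cancer du poumon".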
Integration of Neuroimaging and Microarray Datasets through Mapping and Model-Theoretic Semantic Decomposition of Unstructured Phenotypes
An approach to heterogeneous neuroscience dataset integration is proposed that uses Natural Language Processing (NLP) and a knowledge-based phenotype organizer system (PhenOS) to link ontology-anchored terms to the underlying data from each database, and then maps these terms based on a computable model of disease (SNOMED CT®). The approach was implemented using sample datasets from fMRIDC, GEO, The Whole Brain Atlas and Neuronames, and allowed for complex queries such as “List all disorders with a finding site of brain region X, and then find the semantically related references in all participating databases based on the ontological model of the disease or its anatomical and morphological attributes”. Precision of the NLP-derived coding of the unstructured phenotypes in each dataset was 88% (n = 50), and precision of the semantic mapping between these terms across datasets was 98% (n = 100). To our knowledge, this is the first example of the use of both semantic decomposition of disease relationships and hierarchical information found in ontologies to integrate heterogeneous phenotypes across clinical and molecular datasets.