
    Aligning an interface terminology to the Logical Observation Identifiers Names and Codes (LOINC®)

    OBJECTIVE: Our study aligns the interface terminology of the Bordeaux university hospital (TLAB) to the Logical Observation Identifiers Names and Codes (LOINC). The objective was to facilitate the shared and integrated use of laboratory results with other health information systems. MATERIALS AND METHODS: We used an approach based on a decomposition and re-composition of LOINC concepts according to the transversal relations that may be described between LOINC concepts and their definitional attributes. TLAB entities were first anchored to LOINC attributes and then aligned to LOINC concepts through the appropriate combination of definitional attributes. Finally, an instance-based filtering process was applied using laboratory results from the Bordeaux data warehouse. RESULTS: We found only a small overlap between the tokens constituting TLAB and LOINC labels. Nevertheless, TLAB entities were easily anchored to LOINC attributes: 99.8% of TLAB entities were related to a LOINC analyte and 61.0% to a LOINC system. In total, 55.4% of the TLAB entities used in the hospital data warehouse were mapped to LOINC concepts. A manual evaluation of all 1-1 mappings between TLAB entities and LOINC concepts yielded a precision of 0.59. CONCLUSION: We aligned TLAB and LOINC with reasonable performance given the poor quality of TLAB labels. In terms of interoperability, the alignment of interface terminologies with LOINC could be improved through a more formal LOINC structure, which would allow queries on LOINC attributes rather than on LOINC concepts only.
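The anchor-then-recompose step can be pictured with a short sketch. The code below is a minimal illustration, not the authors' implementation: the attribute names (`analyte`, `system`), the toy lexicons, and the example LOINC rows are assumptions made for the example.

```python
# Minimal sketch of attribute-based alignment: anchor a local label to
# LOINC definitional attributes, then look up LOINC concepts that
# combine exactly those attributes. Toy data, hypothetical names.

# Hypothetical LOINC excerpt: concept code -> definitional attributes.
LOINC = {
    "2951-2": {"analyte": "sodium", "system": "serum"},
    "2947-0": {"analyte": "sodium", "system": "blood"},
    "2823-3": {"analyte": "potassium", "system": "serum"},
}

# Token-level anchors from a local (TLAB-like) vocabulary to LOINC attributes.
ANALYTES = {"na": "sodium", "sodium": "sodium", "k": "potassium"}
SYSTEMS = {"ser": "serum", "serum": "serum", "bld": "blood"}

def align(label):
    """Return LOINC codes whose attribute combination matches the label."""
    tokens = label.lower().replace("-", " ").split()
    analyte = next((ANALYTES[t] for t in tokens if t in ANALYTES), None)
    system = next((SYSTEMS[t] for t in tokens if t in SYSTEMS), None)
    return [
        code
        for code, attrs in LOINC.items()
        if attrs["analyte"] == analyte
        and (system is None or attrs["system"] == system)
    ]

print(align("NA ser"))   # ['2951-2']  (analyte and system fully anchored)
print(align("Sodium"))   # ['2951-2', '2947-0']  (ambiguous: needs instance-based filtering)
```

Labels that anchor to an analyte but not to a system stay ambiguous, which is where the instance-based filtering on warehouse results comes in.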

    Health systems data interoperability and implementation

    Objective The objective of this study was to use machine learning and health standards to address the problem of clinical data interoperability across healthcare institutions. Addressing this problem has the potential to make clinical data comparable, searchable and exchangeable between healthcare providers. Data sources Structured and unstructured data were used to conduct the experiments in this study. The data were collected from two disparate sources, MIMIC-III and NHANES. The MIMIC-III database stores data from two electronic health record systems, CareVue and MetaVision. The data stored in these systems were not recorded with the same standards and were therefore not comparable: some values conflicted, one system would store an abbreviation of a clinical concept while the other stored the full concept name, and some attributes contained missing information. These issues make this data a good candidate for this study. From the identified sources, laboratory, physical examination, vital signs and behavioural data were used. Methods This research employed the CRISP-DM framework as a guideline for all stages of data mining. Two sets of classification experiments were conducted, one for the classification of structured data and the other for unstructured data. In the first experiment, edit distance, TF-IDF and Jaro-Winkler were used to calculate similarity weights between two datasets, one coded with the LOINC terminology standard and one uncoded. Similar record pairs were classified as matches while dissimilar pairs were classified as non-matches, and the Soundex indexing method was used to reduce the number of potential comparisons. Three classification algorithms were then trained and tested, and the performance of each was evaluated through the ROC curve. The second experiment aimed at extracting patients' smoking status from a clinical corpus. A sequence-oriented classification algorithm, CRF, was used for learning the related concepts from the given corpus, with word embedding, random indexing and word shape features used to capture meaning in the text. Results Having optimized all model parameters through v-fold cross-validation on a sampled training set of structured data, only 8 of the 24 features were selected for the classification task. RapidMiner was used to train and test all the classification algorithms. On the final run of the classification process, the last contenders were SVM and the decision tree classifier; SVM yielded an accuracy of 92.5% once its parameters were tuned. These results were obtained after more relevant features were identified, having observed that the classifiers were biased on the initial data. The unstructured data were annotated via the UIMA Ruta scripting language, then trained through CRFsuite, which comes with the CLAMP toolkit. The CRF classifier obtained an F-measure of 94.8% for the "nonsmoker" class, 83.0% for "currentsmoker", and 65.7% for "pastsmoker". It was observed that as more relevant data was added, the performance of the classifier improved. The results point to the need for FHIR resources for exchanging clinical data between healthcare institutions: FHIR is free, and it uses profiles to extend coding standards, a RESTful API to exchange messages, and JSON, XML and Turtle for representing messages.
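As a concrete illustration of such a message, a FHIR Observation carrying a LOINC-coded laboratory result might look like the following. This is a minimal, hand-written sketch expressed as a Python dict that serializes to the JSON representation; the code and value shown are illustrative, not taken from the study.

```python
# Minimal FHIR Observation with a LOINC coding; illustrative values only.
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "2951-2",  # illustrative LOINC code
            "display": "Sodium [Moles/volume] in Serum or Plasma",
        }]
    },
    "valueQuantity": {"value": 140, "unit": "mmol/L"},
}

print(json.dumps(observation, indent=2))  # JSON body ready for a FHIR REST exchange
```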
Data could be stored in JSON format on a NoSQL database such as CouchDB, which makes it available for further post-extraction exploration. Conclusion This study has provided a method for learning a clinical coding standard with a computer algorithm and then applying the learned standard to unstandardized data, so that such data becomes easily exchangeable, comparable and searchable, ultimately achieving data interoperability. Even though this study was applied on a limited scale, future work would explore the standardization of patients' long-lived data from multiple sources using the SHARPn open-source tools and data-scaling platforms.
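The structured-data experiment described above pairs string-similarity features with Soundex blocking; the sketch below shows the general pattern. It is an assumption-laden illustration: the `jellyfish` library, the scikit-learn TF-IDF/cosine combination, and the toy labels stand in for the study's actual tooling and data.

```python
# Sketch of similarity-based record linkage with Soundex blocking.
# Assumes the third-party `jellyfish` library for string metrics.
from collections import defaultdict

import jellyfish
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

loinc_coded = ["sodium serum", "potassium serum", "glucose blood"]
uncoded = ["sodiun serum", "potasium serum", "glucose bld"]

# 1. Blocking: only compare labels whose first token shares a Soundex code,
#    which drastically reduces the number of candidate pairs.
blocks = defaultdict(list)
for label in loinc_coded:
    blocks[jellyfish.soundex(label.split()[0])].append(label)

# 2. Similarity features for each candidate pair inside a block.
tfidf = TfidfVectorizer().fit(loinc_coded + uncoded)

def features(a, b):
    return {
        "edit": jellyfish.levenshtein_distance(a, b),
        "jaro_winkler": jellyfish.jaro_winkler_similarity(a, b),
        "tfidf_cosine": cosine_similarity(
            tfidf.transform([a]), tfidf.transform([b])
        )[0, 0],
    }

for query in uncoded:
    for candidate in blocks.get(jellyfish.soundex(query.split()[0]), []):
        print(query, "->", candidate, features(query, candidate))
# A classifier (SVM, decision tree, ...) would then label each feature
# vector as match / non-match, as in the study.
```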

    Linking patient data to scientific knowledge to support contextualized mining

    Master's thesis, Bioinformatics and Computational Biology, Universidade de Lisboa, Faculdade de Ciências, 2022. ICU readmissions are a critical problem associated with serious conditions, illnesses, or complications, representing a 4-fold increase in mortality risk and a financial burden to health institutions. In developed countries, 1 in every 10 patients discharged comes back to the ICU. As hospitals become more and more data-oriented with the adoption of Electronic Health Records (EHR), there has been a rise in the development of computational approaches to support clinical decisions. In recent years, new efforts have emerged that use machine learning approaches to make ICU readmission predictions directly over EHR data. Despite these growing efforts, machine learning approaches still explore EHR data directly, without taking into account its meaning or context. Medical knowledge is not accessible to these methods, which work blindly over the data without considering the meaning of, and relationships between, the data objects. Ontologies and knowledge graphs can help bridge this gap between data and scientific context, since they are computational artefacts that represent the entities in a domain and how they relate to each other in a formalized fashion. This opportunity motivated the aim of this work: to investigate how enriching EHR data with ontology-based semantic annotations, and applying machine learning techniques that explore them, can impact the prediction of 30-day ICU readmission risk. To achieve this, a number of contributions were developed, including: (1) an enrichment of the MIMIC-III data set with annotations to several biomedical ontologies; (2) a novel approach to predict ICU readmission risk that explores knowledge graph embeddings to represent patient data, taking into account the semantic annotations; (3) a variant of the predictive approach that targets different moments throughout the ICU stay to support risk prediction. The predictive approaches outperformed both the state of the art and a baseline, achieving a ROC-AUC of 0.815 (an increase of 0.2 over the state of the art). The positive results achieved motivated the development of an entrepreneurial project, which placed in the top 5 of the H-INNOVA 2021 entrepreneurship award.
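A knowledge-graph-embedding pipeline of this kind can be sketched in a few lines. The sketch below is not the thesis' implementation: the toy graph edges, walk parameters, and the use of `networkx`, `gensim` and `scikit-learn` are assumptions made for illustration.

```python
# Sketch: embed a patient/ontology graph with random walks + skip-gram,
# then predict 30-day ICU readmission from averaged node vectors.
import random

import networkx as nx
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Toy graph: patients linked to ontology annotations, annotations linked
# by subsumption (stand-ins for real biomedical ontology classes).
G = nx.Graph()
G.add_edges_from([
    ("patient1", "Sepsis"), ("patient2", "Sepsis"),
    ("patient2", "RenalFailure"), ("patient3", "Fracture"),
    ("Sepsis", "Infection"), ("RenalFailure", "KidneyDisease"),
])

def walks(graph, per_node=10, length=8):
    """Uniform random walks; each walk becomes one 'sentence' for skip-gram."""
    out = []
    for node in graph.nodes:
        for _ in range(per_node):
            walk = [node]
            for _ in range(length - 1):
                walk.append(random.choice(list(graph.neighbors(walk[-1]))))
            out.append(walk)
    return out

model = Word2Vec(walks(G), vector_size=32, window=4, min_count=1, sg=1)

def patient_vector(pid):
    """Average the embeddings of a patient's semantic annotations."""
    return np.mean([model.wv[n] for n in G.neighbors(pid)], axis=0)

X = np.stack([patient_vector(p) for p in ["patient1", "patient2", "patient3"]])
y = np.array([1, 1, 0])  # toy readmission labels
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X)[:, 1])  # per-patient readmission risk
```

The design point the thesis exploits is that the embedding step sees the ontology's structure, so patients annotated with semantically related (not just identical) concepts end up with similar vectors.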

    Machine Learning Methods To Identify Hidden Phenotypes In The Electronic Health Record

    The widespread adoption of Electronic Health Records (EHRs) means an unprecedented amount of patient treatment and outcome data is available to researchers. Research is a tertiary priority in the EHR, where the primary priorities are patient care and billing. Because of this, the data is not standardized or formatted in a manner easily adapted to machine learning approaches. Data may be missing for a large variety of reasons, ranging from individual input styles to differences in clinical decision making, for example, which lab tests to issue. Few patients are annotated at research quality, limiting sample size and presenting a moving gold standard. Patient progression over time is key to understanding many diseases, but many machine learning algorithms require a snapshot at a single time point to create a usable vector form. In this dissertation, we develop new machine learning methods and computational workflows to extract hidden phenotypes from the Electronic Health Record (EHR). In Part 1, we use a semi-supervised deep learning approach to compensate for the low number of research-quality labels present in the EHR. In Part 2, we examine and provide recommendations for characterizing and managing the large amount of missing data inherent to EHR data. In Part 3, we present an adversarial approach to generate synthetic data that closely resembles the original data while protecting subject privacy. We also introduce a workflow to enable reproducible research even when data cannot be shared. In Part 4, we introduce a novel strategy to first extract sequential data from the EHR and then demonstrate the ability to model these sequences with deep learning.
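For the missing-data theme in Part 2, a common pattern is to pair simple imputation with explicit missingness indicators, so a model can learn from the absence of a lab test as well as from its value. The sketch below is a generic illustration of that pattern, not necessarily the dissertation's recommendation; the column names are invented.

```python
# Sketch: keep missingness visible to the model via indicator columns,
# then impute the measured values. Toy lab-test data.
import numpy as np
import pandas as pd

labs = pd.DataFrame({
    "creatinine": [1.1, np.nan, 2.3, np.nan],
    "lactate": [np.nan, 2.0, np.nan, 4.5],
})

# 1. Indicator features: 1 where the test was never issued/recorded.
indicators = labs.isna().astype(int).add_suffix("_missing")

# 2. Simple median imputation for the measured values.
imputed = labs.fillna(labs.median())

X = pd.concat([imputed, indicators], axis=1)
print(X)
# A downstream classifier sees both the (imputed) value and the fact that
# the value was absent, which may itself encode a clinical decision.
```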

    COHORT IDENTIFICATION FROM FREE-TEXT CLINICAL NOTES USING SNOMED CT’S SEMANTIC RELATIONS

    In this paper, a new cohort identification framework that exploits the semantic hierarchy of SNOMED CT is proposed to overcome the limitations of supervised machine learning-based approaches. Eligibility criteria descriptions and free-text clinical notes from the 2018 National NLP Clinical Challenge (n2c2) were processed to map them to relevant SNOMED CT concepts and to measure the semantic similarity between the eligibility criteria and patients. A patient was deemed eligible if their similarity score exceeded a threshold cut-off value, established where the best F1 score could be achieved. The performance of the proposed system was evaluated on three eligibility criteria. The framework's macro-average F1 score across the three criteria was higher than the previously reported results of the 2018 n2c2 (0.933 vs. 0.889). This study demonstrated that SNOMED CT alone can be leveraged for cohort identification tasks without referring to external textual sources for training.
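The similarity-and-threshold idea can be sketched as follows. This is a generic Wu-Palmer-style similarity over a toy is-a hierarchy, offered only as an illustration of the approach; the real system works over SNOMED CT's full relation graph, and the concepts and threshold here are invented.

```python
# Sketch: path-based concept similarity over a small is-a hierarchy,
# then threshold an aggregate criterion-vs-patient score. Toy concepts.
import networkx as nx

# Directed is-a edges: child -> parent (a tiny SNOMED-CT-like fragment).
H = nx.DiGraph([
    ("MyocardialInfarction", "HeartDisease"),
    ("Angina", "HeartDisease"),
    ("HeartDisease", "Disease"),
    ("Diabetes", "Disease"),
])

def depth(c):
    return nx.shortest_path_length(H, c, "Disease") + 1

def wu_palmer(a, b):
    """2*depth(LCS) / (depth(a) + depth(b)), LCS = deepest common ancestor."""
    ancestors_a = nx.descendants(H, a) | {a}   # reachable parents = ancestors
    ancestors_b = nx.descendants(H, b) | {b}
    lcs = max(ancestors_a & ancestors_b, key=depth)
    return 2 * depth(lcs) / (depth(a) + depth(b))

criterion = ["MyocardialInfarction"]   # concepts mapped from the criteria text
patient = ["Angina", "Diabetes"]       # concepts mapped from the clinical notes

# Aggregate: best match per criterion concept, averaged across the criterion.
score = sum(max(wu_palmer(c, p) for p in patient) for c in criterion) / len(criterion)
THRESHOLD = 0.6                        # in the study, set where F1 peaks
print(score, "eligible" if score >= THRESHOLD else "not eligible")
```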

    Preface


    Learning Clinical Data Representations for Machine Learning
