
    Ontology-driven and weakly supervised rare disease identification from clinical notes

    BACKGROUND: Computational text phenotyping is the practice of identifying patients with certain disorders and traits from clinical notes. Rare diseases are challenging to identify because few cases are available for machine learning and data annotation requires domain experts. METHODS: We propose a method using ontologies and weak supervision, with recent pre-trained contextual representations from Bidirectional Transformers (e.g. BERT). The ontology-driven framework includes two steps: (i) Text-to-UMLS, extracting phenotypes by contextually linking mentions to concepts in the Unified Medical Language System (UMLS) with a Named Entity Recognition and Linking (NER+L) tool, SemEHR, and weak supervision based on customised rules and contextual mention representation; (ii) UMLS-to-ORDO, matching UMLS concepts to rare diseases in the Orphanet Rare Disease Ontology (ORDO). The weakly supervised approach learns a phenotype confirmation model that improves Text-to-UMLS linking without annotated data from domain experts. We evaluated the approach on three annotated clinical datasets from two institutions in the US and the UK: MIMIC-III discharge summaries, MIMIC-III radiology reports, and NHS Tayside brain imaging reports. RESULTS: Precision improved markedly (by over 30 to 50 absolute percentage points for Text-to-UMLS linking) with almost no loss of recall compared to the existing NER+L tool, SemEHR. Results on radiology reports from MIMIC-III and NHS Tayside were consistent with those on the discharge summaries. The overall pipeline can extract rare disease cases from clinical notes that are mostly uncaptured in structured data (manually assigned ICD codes). CONCLUSION: The study provides empirical evidence for the task by applying a weakly supervised NLP pipeline to clinical notes. The proposed weakly supervised deep learning approach requires no human annotation except for validation and testing, by leveraging ontologies, NER+L tools, and contextual representations. The study also demonstrates that Natural Language Processing (NLP) can complement traditional ICD-based approaches to better estimate rare diseases in clinical notes. We discuss the usefulness and limitations of the weak supervision approach and propose directions for future studies.
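    The two-step pipeline lends itself to a compact illustration. Below is a minimal, hypothetical Python sketch of the rule side of step (i) and the ontology hop of step (ii): a candidate mention from an NER+L tool is kept or discarded by a simple negation-and-length rule, and confirmed UMLS CUIs are mapped to ORDO. The rule, the candidate format, and the CUI-to-ORPHA pair are illustrative stand-ins; in the paper, such rule decisions serve as weak labels for training a contextual (BERT-based) phenotype confirmation model, which this sketch does not include.

```python
# Step (i), rule side: keep a candidate UMLS link only if the mention is
# long enough and not negated in its surrounding context (toy rule).
def confirm_mention(mention, context_window):
    negations = ("no evidence of", "ruled out", "negative for")
    if len(mention["text"]) < 4:          # very short strings are noisy
        return False
    return not any(neg in context_window.lower() for neg in negations)

# Step (ii): map confirmed UMLS CUIs to ORDO rare-disease concepts via a
# precomputed cross-ontology mapping (one illustrative, assumed entry).
UMLS_TO_ORDO = {"C0023467": "ORPHA:519"}  # acute myeloid leukaemia (assumed pair)

def link_rare_diseases(candidates, note_text):
    results = []
    for m in candidates:
        window = note_text[max(0, m["start"] - 60): m["end"] + 60]
        if confirm_mention(m, window) and m["cui"] in UMLS_TO_ORDO:
            results.append((m["text"], m["cui"], UMLS_TO_ORDO[m["cui"]]))
    return results

note = "Bone marrow biopsy confirmed acute myeloid leukaemia."
cands = [{"text": "acute myeloid leukaemia", "cui": "C0023467", "start": 29, "end": 52}]
print(link_rare_diseases(cands, note))  # [('acute myeloid leukaemia', 'C0023467', 'ORPHA:519')]
```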

    Entity Linking in Low-Annotation Data Settings

    Recent advances in natural language processing have focused on applying and adapting large pretrained language models to specific tasks. These models, such as BERT (Devlin et al., 2019) and BART (Lewis et al., 2020a), are pretrained on massive amounts of unlabeled text across a variety of domains. The impact of these pretrained models is visible in the task of entity linking, where a mention of an entity in unstructured text is matched to the relevant entry in a knowledge base. State-of-the-art linkers, such as Wu et al. (2020) and De Cao et al. (2021), leverage pretrained models as a foundation for their systems. However, these systems are also trained on large amounts of annotated data, which is crucial to their performance. Often these large datasets come from domains that are easy to annotate, such as Wikipedia or newswire text. Yet tailoring NLP tools to a narrow range of textual domains severely restricts their use in the real world. Many other domains, such as medicine or law, do not have large amounts of entity linking annotations available. Entity linking, which bridges the gap between massive amounts of unstructured text and structured repositories of knowledge, is equally crucial in these domains, yet tools trained on newswire or Wikipedia annotations are unlikely to be well suited for identifying medical conditions mentioned in clinical notes. As most annotation efforts focus on English, similar challenges arise in building systems for non-English text. These domains often have relatively little annotated data, so other types of domain-specific data, such as unannotated text or highly curated structured knowledge bases, must be leveraged. In these settings, it is crucial to translate lessons from tools tailored to high-annotation domains into algorithms suited for low-annotation domains. This requires both leveraging broader types of data and understanding the unique challenges present in each domain.
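    The bi-encoder architecture behind linkers such as Wu et al. (2020) can be summarised in a few lines: encode the mention in context and every knowledge-base entry into the same vector space, then link to the nearest entry. The sketch below is a hypothetical, self-contained approximation; `encode` is a trivial hashed bag-of-words stand-in for a pretrained encoder, and the two-entry knowledge base is invented for illustration.

```python
import re
import numpy as np

def encode(text: str) -> np.ndarray:
    """Stand-in for a pretrained encoder (e.g. a BERT pooled output):
    a hashed bag-of-words vector, just enough to make the sketch run."""
    vec = np.zeros(256)
    for tok in re.findall(r"[a-z]+", text.lower()):
        vec[hash(tok) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Toy knowledge base: entity id -> short description.
KB = {
    "Q84": "London capital city of England and the United Kingdom",
    "Q92561": "London city in southwestern Ontario Canada",
}
kb_vecs = {eid: encode(desc) for eid, desc in KB.items()}

def link(mention: str, context: str) -> str:
    """Link a mention to the KB entry with the highest dot-product score."""
    query = encode(f"{context} {mention}")
    return max(kb_vecs, key=lambda eid: float(query @ kb_vecs[eid]))

print(link("London", "He flew from Toronto to London, Ontario."))  # Q92561
```

In a real linker both sides are transformer encoders trained on annotated mention-entity pairs, which is precisely the annotation bottleneck discussed above.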

    Annotated dataset creation through large language models for non-English medical NLP

    Obtaining text datasets with semantic annotations is a laborious process, yet crucial for supervised training in natural language processing (NLP). In general, developing and applying new NLP pipelines in domain-specific contexts requires custom-designed datasets to address the task at hand in a supervised machine learning fashion. For medical data processing in non-English languages, this exposes several interconnected problems, minor and major, such as the lack of task-matching datasets and of task-specific pre-trained models. In our work, we suggest leveraging pre-trained large language models for training data acquisition, in order to retrieve sufficiently large datasets for training smaller and more efficient models for use-case-specific tasks. To demonstrate the effectiveness of our approach, we create a custom dataset and use it to train GPTNERMED, a medical NER model for German texts, though our method remains language-independent in principle. Our dataset as well as our pre-trained models are publicly available at https://github.com/frankkramer-lab/GPTNERMED
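    The data-generation recipe sketched in this abstract, prompting an LLM for inline-annotated sentences and parsing the tags into span-labelled training examples for a smaller NER model, can be outlined in a few lines. In the hypothetical Python sketch below, `call_llm` is a placeholder for whatever completion API is available (here it returns a canned string so the example runs), and the tag scheme and prompt are illustrative rather than the exact GPTNERMED setup.

```python
import re

PROMPT = (
    "Schreibe einen deutschen medizinischen Beispielsatz und markiere "
    "Medikamente als <Medikation>...</Medikation> und Dosierungen als "
    "<Dosis>...</Dosis>."
)  # "Write a German medical example sentence and tag drugs and dosages."

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns a canned completion.
    return ("Der Patient erhält <Medikation>Ibuprofen</Medikation> "
            "<Dosis>400 mg</Dosis> täglich.")

def parse_example(tagged: str):
    """Convert inline tags into (plain_text, [(start, end, label), ...])."""
    spans, plain, cursor = [], "", 0
    for m in re.finditer(r"<(\w+)>(.*?)</\1>", tagged):
        plain += tagged[cursor:m.start()]
        start = len(plain)
        plain += m.group(2)
        spans.append((start, len(plain), m.group(1)))
        cursor = m.end()
    plain += tagged[cursor:]
    return plain, spans

text, spans = parse_example(call_llm(PROMPT))
print(text)   # Der Patient erhält Ibuprofen 400 mg täglich.
print(spans)  # [(19, 28, 'Medikation'), (29, 35, 'Dosis')]
```

Pairs like these can then be fed to any standard token-classification trainer; the generation step, not the trainer, is what removes the manual annotation bottleneck.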

    Managing healthcare transformation towards P5 medicine (Published in Frontiers in Medicine)

    Health and social care systems around the world are facing radical organizational, methodological and technological paradigm changes to meet the requirements for improving quality and safety of care as well as the efficiency and efficacy of care processes. In doing so, they are trying to manage the challenges of ongoing demographic change towards aging, multi-diseased societies; the development of human resources; health and social services consumerism; medical and biomedical progress; and exploding costs for health-related R&D as well as health services delivery. Furthermore, they intend to achieve sustainability of global health systems by transforming them towards intelligent, adaptive and proactive systems focusing on health and wellness with optimized quality and safety outcomes. The outcome is a transformed health and wellness ecosystem combining the approaches of translational medicine, 5P medicine (personalized, preventive, predictive, participative precision medicine) and digital health towards ubiquitous personalized health services realized independently of time and location. It considers individual health status, conditions, and genetic and genomic dispositions in personal social, occupational, environmental and behavioural context, thus turning health and social care from reactive to proactive. This requires the advancement of communication and cooperation among the business actors from different domains (disciplines), with different methodologies, terminologies/ontologies, education, skills and experiences, from the data level (data sharing) to the concept/knowledge level (knowledge sharing). The challenge here is understanding, and formally as well as consistently representing, the world of sciences and practices, i.e. multidisciplinary and dynamic systems in variable contexts, to enable mapping between the different disciplines, methodologies, perspectives, intentions, languages, etc. Given a framework that dynamically and use-case-specifically represents multi-domain ecosystems in a context-aware way, those ecosystems, including their development processes, systems, models and artefacts, can be consistently represented, harmonized and integrated. The response to that problem is the formal representation of health and social care ecosystems through a system-oriented, architecture-centric, ontology-based and policy-driven model and framework addressing all domain and development-process views contributing to the system and context in question. Accordingly, this Research Topic addresses this change towards 5P medicine. Areas of interest include, but are not limited to:
    • A multidisciplinary approach to the transformation of health and social systems
    • Success factors for sustainable P5 ecosystems
    • AI and robotics in transformed health ecosystems
    • Transformed health ecosystems: challenges for security, privacy and trust
    • Modelling digital health systems
    • Ethical challenges of personalized digital health
    • Knowledge representation and management of transformed health ecosystems
    Table of Contents:
    • Editorial: Managing healthcare transformation towards P5 medicine (Bernd Blobel and Dipak Kalra)
    • Transformation of Health and Social Care Systems—An Interdisciplinary Approach Toward a Foundational Architecture (Bernd Blobel, Frank Oemig, Pekka Ruotsalainen and Diego M. Lopez)
    • Transformed Health Ecosystems—Challenges for Security, Privacy, and Trust (Pekka Ruotsalainen and Bernd Blobel)
    • Success Factors for Scaling Up the Adoption of Digital Therapeutics Towards the Realization of P5 Medicine (Alexandra Prodan, Lucas Deimel, Johannes Ahlqvist, Strahil Birov, Rainer Thiel, Meeri Toivanen, Zoi Kolitsi and Dipak Kalra)
    • EU-Funded Telemedicine Projects – Assessment of, and Lessons Learned From, in the Light of the SARS-CoV-2 Pandemic (Laura Paleari, Virginia Malini, Gabriella Paoli, Stefano Scillieri, Claudia Bighin, Bernd Blobel and Mauro Giacomini)
    • A Review of Artificial Intelligence and Robotics in Transformed Health Ecosystems (Kerstin Denecke and Claude R. Baudoin)
    • Modeling digital health systems to foster interoperability (Frank Oemig and Bernd Blobel)
    • Challenges and solutions for transforming health ecosystems in low- and middle-income countries through artificial intelligence (Diego M. López, Carolina Rico-Olarte, Bernd Blobel and Carol Hullin)
    • Linguistic and ontological challenges of multiple domains contributing to transformed health ecosystems (Markus Kreuzthaler, Mathias Brochhausen, Cilia Zayas, Bernd Blobel and Stefan Schulz)
    • The ethical challenges of personalized digital health (Els Maeckelberghe, Kinga Zdunek, Sara Marceglia, Bobbie Farsides and Michael Rigby)

    Automated retrieval and analysis of published biomedical literature through natural language processing for clinical applications

    The size of the existing academic literature corpus and the rapid rate of new publication create both a need and an opportunity to harness computational approaches to data and knowledge extraction across all research fields. Elements of this challenge can be met by developments in automation for the retrieval of electronic documents, document classification, and knowledge extraction. In this thesis, I detail studies of these processes in three related chapters. Although the focus of each chapter is distinct, they all contribute to my aim of developing a generalisable pipeline for clinical applications of Natural Language Processing to the academic literature. In chapter one, I describe the development of “Cadmus”, an open-source system written in Python to generate corpora of biomedical text from the published literature. Cadmus comprises three main steps: search query and metadata collection, document retrieval, and parsing of the retrieved text. I present an example of full-text retrieval for a corpus of over two hundred thousand articles using a gene-based search query, with quality-control metrics for the retrieval process and a high-level illustration of the utility of full text over metadata for each article. For a corpus of 204,043 articles, the retrieval rate was 85.2% with institutional subscription access and 54.4% without. Chapter two details the development of a custom-built Naïve Bayes supervised machine learning document classifier. This binary classifier is based on calculating the relative enrichment of biomedical terms between two classes of documents in a training set. The classifier is trained and tested on a manually classified set of over 8,000 abstract and full-text articles to identify articles containing human phenotype descriptions. Ten-fold cross-validation of the model showed a recall of 85%, specificity of 99%, precision of 76%, F1 score of 0.82, and accuracy of 90%. Chapter three illustrates the clinical applications of automated retrieval, processing, and classification by considering the published literature on paediatric COVID-19. Case reports and similar articles were classified into “severe” and “non-severe” classes, and term enrichment was evaluated to find biomarkers associated with, or predictive of, severe paediatric COVID-19. Time series analysis was employed to illustrate emerging disease entities such as Multisystem Inflammatory Syndrome in Children (MIS-C) and to uncover unrecognised trends through literature-based discovery.
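    The term-enrichment Naïve Bayes classifier described for chapter two admits a very small illustration. The sketch below is a hypothetical reconstruction rather than the thesis code: it scores a document by summing per-term log-likelihood ratios estimated with add-one smoothing from a labelled training set, and the two-document corpus is invented for the example.

```python
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def train(docs):
    """docs: list of (text, label) with label 1 = phenotype paper, 0 = not.
    Returns per-term log-likelihood ratios with add-one smoothing."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in docs:
        counts[label].update(tokens(text))
    vocab = set(counts[0]) | set(counts[1])
    total = {c: sum(counts[c].values()) for c in (0, 1)}
    return {
        t: math.log((counts[1][t] + 1) / (total[1] + len(vocab)))
         - math.log((counts[0][t] + 1) / (total[0] + len(vocab)))
        for t in vocab
    }

def classify(text, llr, threshold=0.0):
    # Positive summed log-ratio = terms are enriched in the positive class.
    return sum(llr.get(t, 0.0) for t in tokens(text)) > threshold

llr = train([
    ("patient presents with microcephaly and seizures", 1),
    ("novel variant identified by exome sequencing", 0),
])
print(classify("seizures and developmental delay", llr))  # True
```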

    Knowledge extraction from unstructured data

    Data availability is becoming ever more essential given the current growth of web-based data. The data available on the web are represented as unstructured, semi-structured, or structured data. To make web-based data available for Natural Language Processing or Data Mining tasks, the data need to be presented as machine-readable data in a structured format. Thus, techniques for capturing knowledge from unstructured data sources are needed. Research communities address this problem with knowledge extraction methods: methods that capture the knowledge in a natural language text and map it to existing knowledge in knowledge graphs (KGs). These methods include named-entity recognition, named-entity disambiguation, relation recognition, and relation linking. This thesis addresses the problem of extracting knowledge from unstructured data and discovering patterns in the extracted knowledge. We devise a rule-based approach for entity and relation recognition and linking. The approach effectively maps entities and relations within a text to their resources in a target KG. It overcomes the challenges of recognizing and linking entities and relations to a specific KG by employing devised catalogs of linguistic and domain-specific rules, which state the criteria for recognizing entities in a sentence of a particular language, and a deductive database that encodes knowledge in community-maintained KGs. Moreover, we define a neuro-symbolic approach for knowledge extraction in encyclopedic and domain-specific settings; it combines symbolic and sub-symbolic components to overcome the challenges of entity recognition and linking and the limited availability of training data, while maintaining the accuracy of recognizing and linking entities. Additionally, we present a context-aware, knowledge-driven framework for unveiling semantically related posts in a corpus; it retrieves associated posts effectively by casting the problem into the vertex coloring problem. We evaluate the performance of our techniques on several benchmarks across various domains for knowledge extraction tasks, and we apply these methods in real-world scenarios from national and international projects. The outcomes show that our techniques effectively extract the knowledge encoded in unstructured data and discover patterns over the extracted knowledge presented as machine-readable data. More importantly, the evaluation results provide evidence for the effectiveness of combining the reasoning capacity of symbolic frameworks with the pattern-recognition and classification power of sub-symbolic models.
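    The rule-based recognize-and-link step can be pictured with a toy example: a catalog of linguistic rules proposes candidate surface forms, and a small lookup, standing in for the deductive database over a community KG, maps them to resources. The rules, types, and mini-KG below are invented for illustration; the thesis uses far richer rule catalogs and community-maintained KGs such as DBpedia or Wikidata.

```python
import re

# "Catalog" of recognition rules: pattern -> candidate entity type.
RULES = [
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "Person"),  # two capitalised words
    (re.compile(r"\b[A-Z]{2,5}\b"), "Organization"),         # short all-caps acronym
]

# Mini knowledge graph: surface form -> (type, resource IRI).
KG = {
    "Ada Lovelace": ("Person", "http://dbpedia.org/resource/Ada_Lovelace"),
    "IBM": ("Organization", "http://dbpedia.org/resource/IBM"),
}

def recognize_and_link(sentence):
    linked = []
    for pattern, etype in RULES:
        for m in pattern.finditer(sentence):
            entry = KG.get(m.group())
            # Link only when the rule's proposed type agrees with the KG type,
            # a much simplified stand-in for deductive consistency checking.
            if entry and entry[0] == etype:
                linked.append((m.group(), entry[1]))
    return linked

print(recognize_and_link("Ada Lovelace corresponded with IBM researchers."))
```

A neuro-symbolic variant would let a learned recognizer propose the spans while the symbolic side keeps the type and KG consistency checks.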

    24th Nordic Conference on Computational Linguistics (NoDaLiDa)


    Geographic information extraction from texts

    A large volume of unstructured text containing valuable geographic information is available online. This information – provided implicitly or explicitly – is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although great progress has been achieved in geographic information extraction from texts, unsolved challenges and issues remain, ranging from methods, systems, and data to applications and privacy. This workshop therefore provides a timely opportunity to discuss recent advances, new ideas, and concepts, and to identify research gaps in geographic information extraction.