5,458 research outputs found

    Infectious Disease Ontology

    Technological developments have resulted in tremendous increases in the volume and diversity of the data and information that must be processed in the course of biomedical and clinical research and practice. Researchers are at the same time under ever greater pressure to share data and to take steps to ensure that data resources are interoperable. The use of ontologies to annotate data has proven successful in supporting these goals and in providing new possibilities for the automated processing of data and information. In this chapter, we describe different types of vocabulary resources and emphasize those features of formal ontologies that make them most useful for computational applications. We describe current uses of ontologies and discuss future goals for ontology-based computing, focusing on its use in the field of infectious diseases. We review the largest and most widely used vocabulary resources relevant to the study of infectious diseases and conclude with a description of the Infectious Disease Ontology (IDO) suite of interoperable ontology modules that together cover the entire infectious disease domain
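
    As a rough illustration of the ontology-based annotation described above (a minimal sketch, not taken from the chapter; the IDO term identifier and namespaces below are placeholders), a data record can be typed with an ontology class IRI so that any RDF-aware tool can query it uniformly:

    # Minimal sketch of ontology-based annotation with rdflib.
    # The IDO identifier below is a placeholder, not a verified IDO term.
    from rdflib import Graph, Namespace, Literal, RDF, RDFS

    OBO = Namespace("http://purl.obolibrary.org/obo/")
    EX = Namespace("http://example.org/lab/")      # hypothetical local namespace

    g = Graph()
    record = EX["specimen/123"]
    infection_class = OBO["IDO_0000001"]           # placeholder IDO class IRI

    g.add((record, RDF.type, infection_class))
    g.add((record, RDFS.label, Literal("blood culture, patient 123")))

    # Any SPARQL-aware tool can now retrieve records annotated with that class.
    query = "SELECT ?r WHERE { ?r a <http://purl.obolibrary.org/obo/IDO_0000001> }"
    for row in g.query(query):
        print(row.r)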

    An Automated System for the Assessment and Ranking of Domain Ontologies

    As the number of intelligent software applications and the number of semantic websites continue to expand, ontologies are needed to formalize shared terms. Often it is necessary to either find a previously used ontology for a particular purpose, or to develop a new one to meet a specific need. Because of the challenge involved in creating a new ontology from scratch, the former option is often preferable. The ability of a user to select an appropriate, high-quality domain ontology from a set of available options would be most useful in knowledge engineering and in developing intelligent applications. Being able to assess an ontology's quality and suitability is also important when an ontology is developed from the beginning. These capabilities, however, require good quality assessment mechanisms as well as automated support when there are a large number of ontologies from which to make a selection. This thesis provides an in-depth analysis of the current research in domain ontology evaluation, including the development of a taxonomy to categorize the numerous directions the research has taken. Based on the lessons learned from the literature review, an approach to the automatic assessment of domain ontologies is selected and a suite of ontology quality assessment metrics grounded in semiotic theory is presented. The metrics are implemented in a Domain Ontology Rating System (DoORS), which is made available as an open source web application. An additional framework is developed that would incorporate this rating system as part of a larger system to find ontology libraries on the web, retrieve ontologies from them, and assess them to select the best ontology for a particular task. An empirical evaluation in four phases shows the usefulness of the work, including a more stringent evaluation of the metrics that assess how well an ontology fits its domain and how well an ontology is regarded within its community of users
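
    The abstract does not spell out the semiotic metrics themselves; purely as an illustration of the kind of automated structural checks an assessment pipeline might start from (not the DoORS metrics), simple counts can be computed directly from an OWL file:

    # Illustrative structural metrics only; DoORS' semiotic metrics are richer.
    from rdflib import Graph, RDF, RDFS, OWL

    def structural_metrics(owl_path: str) -> dict:
        g = Graph()
        g.parse(owl_path)                  # format inferred from file extension
        classes = set(g.subjects(RDF.type, OWL.Class))
        subclass_links = list(g.triples((None, RDFS.subClassOf, None)))
        object_props = set(g.subjects(RDF.type, OWL.ObjectProperty))
        labeled = {c for c in classes if (c, RDFS.label, None) in g}
        return {
            "classes": len(classes),
            "subclass_links": len(subclass_links),
            "object_properties": len(object_props),
            # crude clarity proxy: share of classes with a human-readable label
            "label_coverage": len(labeled) / len(classes) if classes else 0.0,
            # crude richness proxy: non-taxonomic relations per class
            "relationship_richness": len(object_props) / len(classes) if classes else 0.0,
        }

    # print(structural_metrics("my_ontology.owl"))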

    Design and Architecture of an Ontology-driven Dialogue System for HPV Vaccine Counseling

    Speech and conversational technologies are increasingly being used by consumers, and it is inevitable that they will one day be integrated into health care. One area where this technology could be of service is patient-provider communication, specifically communicating the risks and benefits of vaccines. The human papillomavirus (HPV) vaccine, in particular, inoculates individuals against certain HPV viruses responsible for cancers that emerge in adulthood (cervical, head and neck cancers, etc.). My research focuses on the architecture and development of a speech-enabled conversational agent that relies on a series of consumer-centric health ontologies and the technology that utilizes these ontologies. Ontologies are computable artifacts that encode and structure domain knowledge and can be utilized by machines to provide high-level capabilities, such as reasoning and sharing information. I focus the agent on the HPV vaccine domain to observe whether users respond favorably towards conversational agents and what impact the agent may have on their beliefs about the HPV vaccine. The approach of this study involves a multi-tier structure: the first tier is the domain knowledge base, the second is the application interaction design tier, and the third is the feasibility assessment of the participants. The research in this study proposes the following questions: Can ontologies support the system architecture for a spoken conversational agent for HPV vaccine counseling? How would prospective users' perceptions of such an agent and of the HPV vaccine be impacted after using a conversational agent for HPV vaccine education? The outcome of this study is a comprehensive assessment of a system architecture for a conversational agent for patient-centric HPV vaccine counseling. Each layer of the agent architecture is regulated through domain and application ontologies and supported by the various ontology-driven software components that I developed to compose the agent architecture. I also present preliminary evidence of high usability of the agent and improvement in users' health beliefs toward the HPV vaccine. All in all, I introduce a comprehensive and feasible model for the design and development of an open-source, ontology-driven conversational agent for any health consumer domain, and corroborate the viability of a conversational agent as a health intervention tool
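
    A minimal sketch of the ontology-driven pattern this architecture implies, with hypothetical namespaces and intent names (the actual ontologies and dialogue components are not reproduced here): the dialogue manager only decides which query to run for a recognized intent, while the answer content comes from the knowledge graph rather than hard-coded responses.

    # Hypothetical mini knowledge base: vaccine facts live in the graph.
    from rdflib import Graph, Namespace, Literal, RDF, RDFS

    EX = Namespace("http://example.org/hpv/")       # hypothetical namespace
    g = Graph()
    g.add((EX.HPVVaccine, RDF.type, EX.Vaccine))
    g.add((EX.HPVVaccine, EX.protectsAgainst, EX.CervicalCancer))
    g.add((EX.CervicalCancer, RDFS.label, Literal("cervical cancer")))

    INTENT_QUERIES = {
        # intent name -> SPARQL that pulls the answer content from the graph
        "ask_benefits": """
            SELECT ?label WHERE {
                <http://example.org/hpv/HPVVaccine>
                    <http://example.org/hpv/protectsAgainst> ?c .
                ?c <http://www.w3.org/2000/01/rdf-schema#label> ?label .
            }""",
    }

    def answer(intent: str) -> str:
        rows = list(g.query(INTENT_QUERIES[intent]))
        if not rows:
            return "I don't have that information."
        conditions = ", ".join(str(r.label) for r in rows)
        return f"The HPV vaccine helps protect against {conditions}."

    print(answer("ask_benefits"))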

    Enhancing Phenotype Recognition in Clinical Notes Using Large Language Models: PhenoBCBERT and PhenoGPT

    We hypothesize that large language models (LLMs) based on the transformer architecture can enable automated detection of clinical phenotype terms, including terms not documented in the Human Phenotype Ontology (HPO). In this study, we developed two types of models: PhenoBCBERT, a BERT-based model that uses Bio+Clinical BERT as its pre-trained model, and PhenoGPT, a GPT-based model that can be initialized from diverse GPT models, including open-source versions such as GPT-J, Falcon, and LLaMA, as well as closed-source versions such as GPT-3 and GPT-3.5. We compared our methods with PhenoTagger, a recently developed HPO recognition tool that combines rule-based and deep learning methods. We found that our methods can extract more phenotype concepts, including novel ones not characterized by the HPO. We also performed case studies on biomedical literature to illustrate how new phenotype information can be recognized and extracted. We compared the BERT-based and GPT-based models for phenotype tagging across multiple aspects, including model architecture, memory usage, speed, accuracy, and privacy protection. We also discussed the addition of a negation step and an HPO normalization layer to the transformer models for improved HPO term tagging. In conclusion, PhenoBCBERT and PhenoGPT enable the automated discovery of phenotype terms from clinical notes and biomedical literature, facilitating automated downstream tasks to derive new biological insights on human diseases
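
    The released PhenoBCBERT artifacts are not specified in the abstract; the sketch below only shows the general shape of a Bio+Clinical BERT token-classification tagger built with the Hugging Face transformers library, with an assumed BIO label set and no fine-tuned weights.

    # Sketch of a BERT-style phenotype tagger; the label names are assumptions,
    # and this is not the released PhenoBCBERT model.
    from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

    BASE = "emilyalsentzer/Bio_ClinicalBERT"        # Bio+Clinical BERT base model
    LABELS = ["O", "B-PHENOTYPE", "I-PHENOTYPE"]    # assumed BIO tag set

    tokenizer = AutoTokenizer.from_pretrained(BASE)
    model = AutoModelForTokenClassification.from_pretrained(
        BASE,
        num_labels=len(LABELS),
        id2label=dict(enumerate(LABELS)),
        label2id={l: i for i, l in enumerate(LABELS)},
    )
    # In practice the model would first be fine-tuned on phenotype-annotated
    # notes; with only the base weights the predictions below are random.
    tagger = pipeline("token-classification", model=model, tokenizer=tokenizer,
                      aggregation_strategy="simple")

    for span in tagger("The patient has microcephaly and recurrent seizures."):
        print(span["word"], span["entity_group"], round(span["score"], 3))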

    Enhanced Neurologic Concept Recognition using a Named Entity Recognition Model based on Transformers

    Although deep learning has been applied to the recognition of diseases and drugs in electronic health records and the biomedical literature, relatively little study has been devoted to the utility of deep learning for the recognition of signs and symptoms. The recognition of signs and symptoms is critical to the success of deep phenotyping and precision medicine. We have developed a named entity recognition model that uses deep learning to identify text spans containing neurological signs and symptoms and then maps these text spans to the clinical concepts of a neuro-ontology. We compared a model based on convolutional neural networks to one based on bidirectional encoder representations from transformers. Models were evaluated for accuracy of text span identification on three text corpora: physician notes from an electronic health record, case histories from neurologic textbooks, and clinical synopses from an online database of genetic diseases. Both models performed best on the professionally written clinical synopses and worst on the physician-written clinical notes. Both models performed better when signs and symptoms were represented as shorter text spans. Consistent with prior studies that examined the recognition of diseases and drugs, the model based on bidirectional encoder representations from transformers outperformed the model based on convolutional neural networks for recognizing signs and symptoms. Recall for signs and symptoms ranged from 59.5% to 82.0% and precision ranged from 61.7% to 80.4%. With further advances in NLP, fully automated recognition of signs and symptoms in electronic health records and the medical literature should be feasible
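
    The reported recall and precision are span-level figures; as a reference point (the study's own matching criteria, e.g. whether partial overlaps receive credit, are not stated here), exact-match span precision and recall can be computed as follows.

    # Span-level precision/recall under exact-match criteria.
    def precision_recall(predicted_spans, gold_spans):
        predicted, gold = set(predicted_spans), set(gold_spans)
        tp = len(predicted & gold)                 # exact span matches
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        return precision, recall

    gold = {(10, 21, "hemiparesis"), (35, 42, "aphasia"), (60, 68, "diplopia")}
    pred = {(10, 21, "hemiparesis"), (35, 42, "aphasia"), (80, 86, "tremor")}

    p, r = precision_recall(pred, gold)
    print(f"precision={p:.2f} recall={r:.2f}")     # precision=0.67 recall=0.67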

    Cognition-based approaches for high-precision text mining

    This research improves the precision of information extraction from free-form text via the use of cognitive-based approaches to natural language processing (NLP). Cognitive-based approaches are an important, and relatively new, area of research in NLP and search, as well as linguistics. Cognitive approaches enable significant improvements in both the breadth and depth of knowledge extracted from text. This research has made contributions in the area of a cognitive approach to automated concept recognition. Cognitive approaches to search, also called concept-based search, have been shown to improve search precision. Given the tremendous amount of electronic text generated in our digital and connected world, cognitive approaches enable substantial opportunities in knowledge discovery. The generation and storage of electronic text is ubiquitous, hence opportunities for improved knowledge discovery span virtually all knowledge domains. While cognition-based search offers superior approaches, challenges exist due to the need to mimic, even in the most rudimentary way, the extraordinary powers of human cognition. This research addresses these challenges in the key area of a cognition-based approach to automated concept recognition. In addition, it resulted in a semantic processing system framework for use in applications in any knowledge domain. Confabulation theory was applied to the problem of automated concept recognition. This is a relatively new theory of cognition that uses a non-Bayesian measure, called cogency, to predict the results of human cognition. An innovative distance measure derived from cogent confabulation, called inverse cogency, was developed to rank-order candidate concepts during the recognition process. When used with a multilayer perceptron, it improved the precision of concept recognition by 5% over published benchmarks. Additional precision improvements are anticipated. These research steps build a foundation for cognition-based, high-precision text mining. In the long term, it is anticipated that this foundation will enable a cognitive-based approach to automated ontology learning. Such automated ontology learning will mimic human language cognition and will, in turn, enable the practical use of cognitive-based approaches in virtually any knowledge domain
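
    The thesis's exact inverse-cogency formula is not given in the abstract; the sketch below is one assumed reading of a cogency-style ranking, scoring each candidate concept by how strongly it predicts the observed evidence words from co-occurrence counts and treating the negative log of that product as a distance.

    # Assumed illustration of cogency-style ranking; smaller distance = more cogent.
    import math
    from collections import defaultdict

    # toy co-occurrence counts: concept -> {evidence word -> count}
    cooccur = {
        "myocardial_infarction": {"chest": 40, "pain": 55, "troponin": 30},
        "migraine":              {"headache": 60, "aura": 25, "pain": 35},
    }

    def inverse_cogency(concept: str, evidence: list, alpha: float = 1.0) -> float:
        counts = defaultdict(int, cooccur.get(concept, {}))
        total = sum(counts.values()) + alpha * len(evidence)
        distance = 0.0
        for word in evidence:
            p = (counts[word] + alpha) / total     # smoothed p(word | concept)
            distance += -math.log(p)
        return distance

    evidence = ["chest", "pain"]
    ranked = sorted(cooccur, key=lambda c: inverse_cogency(c, evidence))
    print(ranked)   # 'myocardial_infarction' ranks first for this evidence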

    NOBLE - Flexible concept recognition for large-scale biomedical natural language processing

    Background: Natural language processing (NLP) applications are increasingly important in biomedical data analysis, knowledge engineering, and decision support. Concept recognition is an important component task for NLP pipelines, and can be either general-purpose or domain-specific. We describe a novel, flexible, and general-purpose concept recognition component for NLP pipelines, and compare its speed and accuracy against five commonly used alternatives on both a biological and clinical corpus. NOBLE Coder implements a general algorithm for matching terms to concepts from an arbitrary vocabulary set. The system's matching options can be configured individually or in combination to yield specific system behavior for a variety of NLP tasks. The software is open source, freely available, and easily integrated into UIMA or GATE. We benchmarked speed and accuracy of the system against the CRAFT and ShARe corpora as reference standards and compared it to MMTx, MGrep, Concept Mapper, cTAKES Dictionary Lookup Annotator, and cTAKES Fast Dictionary Lookup Annotator. Results: We describe key advantages of the NOBLE Coder system and associated tools, including its greedy algorithm, configurable matching strategies, and multiple terminology input formats. These features provide unique functionality when compared with existing alternatives, including state-of-the-art systems. On two benchmarking tasks, NOBLE's performance exceeded commonly used alternatives, performing almost as well as the most advanced systems. Error analysis revealed differences in error profiles among systems. Conclusion: NOBLE Coder is comparable to other widely used concept recognition systems in terms of accuracy and speed. Advantages of NOBLE Coder include its interactive terminology builder tool, ease of configuration, and adaptability to various domains and tasks. NOBLE provides a term-to-concept matching system suitable for general concept recognition in biomedical NLP pipelines
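
    As a simplified illustration of greedy term-to-concept matching (not NOBLE Coder's actual implementation; the vocabulary and concept IDs are hypothetical), a longest-match dictionary lookup over tokens looks like this:

    # Greedy longest-match lookup: at each position, take the longest vocabulary
    # term that matches and emit its concept.
    VOCAB = {  # hypothetical term -> concept ID mapping
        ("lung", "cancer"): "C0242379",
        ("cancer",): "C0006826",
        ("shortness", "of", "breath"): "C0013404",
    }
    MAX_TERM_LEN = max(len(t) for t in VOCAB)

    def greedy_match(tokens):
        i, matches = 0, []
        while i < len(tokens):
            for span in range(min(MAX_TERM_LEN, len(tokens) - i), 0, -1):
                candidate = tuple(w.lower() for w in tokens[i:i + span])
                if candidate in VOCAB:
                    matches.append((i, i + span, VOCAB[candidate]))
                    i += span          # greedy: skip past the matched term
                    break
            else:
                i += 1                 # no match starting here; advance one token
        return matches

    text = "Patient with lung cancer reports shortness of breath".split()
    print(greedy_match(text))
    # [(2, 4, 'C0242379'), (5, 8, 'C0013404')]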