    Definition of a SNOMED CT pathology subset and microglossary, based on 1.17 million biological samples from the Catalan Pathology Registry

    SNOMED CT terminology is not backed by standard encoding norms among pathologists. The vast number of concepts organized in hierarchies and axes, together with the lack of rules of use, complicates the use of SNOMED CT for coding, extracting, and analyzing data. Defining discipline-specific subsets of SNOMED CT could increase its usefulness. The challenge lies in how to choose the concepts to be included in a subset from a total of over 300,000. Moreover, SNOMED CT does not cover every daily need, as clinical reality is dynamic and changing. To adapt SNOMED CT flexibly to these needs, extensions can be created. In Catalonia, most pathology departments have been migrating from SNOMED II to SNOMED CT in a bid to advance the development of the Catalan Pathology Registry, which was created in 2014 as a repository for all pathological diagnoses. This article explains the methodology used to: (a) identify the clinico-pathological entities and the molecular diagnostic procedures not included in SNOMED CT; (b) define the theoretical subset and microglossary of pathology; (c) describe the SNOMED CT concepts used by pathologists to code 1.17 million samples of the Catalan Pathology Registry; and (d) adapt the theoretical subset and the microglossary according to the actual use of SNOMED CT. Of the 328,365 concepts available for coding the diagnoses (326,732 in SNOMED CT and 1,576 in the Catalan extension), only 2% have been used. Combining two axes of SNOMED CT, body structure and clinical findings, has enabled coding of most morphologies.
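
    As a hedged illustration of the dual-axis coding described above, the sketch below represents a diagnosis as a pair of concepts drawn from the body structure and clinical finding axes and checks it against a discipline subset; the concept IDs and subset contents are hypothetical placeholders, not codes from the Catalan subset.

```python
# Minimal sketch of dual-axis coding: a diagnosis is a pair of SNOMED CT
# concepts, one from the body structure axis and one from the clinical
# finding axis. All IDs and subset contents are hypothetical placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedDiagnosis:
    body_structure: str    # concept ID from the "body structure" axis
    clinical_finding: str  # concept ID from the "clinical finding" axis

# Hypothetical discipline subset (in practice, derived from actual usage data).
pathology_subset = {
    "body_structure": {"BS-0001", "BS-0002"},
    "clinical_finding": {"CF-0001", "CF-0002"},
}

def in_subset(dx: CodedDiagnosis) -> bool:
    """Return True when both axis concepts belong to the defined subset."""
    return (dx.body_structure in pathology_subset["body_structure"]
            and dx.clinical_finding in pathology_subset["clinical_finding"])

print(in_subset(CodedDiagnosis("BS-0001", "CF-0002")))  # True
```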

    The LOINC RSNA radiology playbook - a unified terminology for radiology procedures

    Objective: This paper describes the unified LOINC/RSNA Radiology Playbook and the process by which it was produced. Methods: The Regenstrief Institute and the Radiological Society of North America (RSNA) developed a unification plan consisting of six objectives: (1) develop a unified model for radiology procedure names that represents the attributes with an extensible set of values, (2) transform existing LOINC procedure codes into the unified model representation, (3) create a mapping between all the attribute values used in the unified model as coded in LOINC (i.e., LOINC Parts) and their equivalent concepts in RadLex, (4) create a mapping between the existing procedure codes in the RadLex Core Playbook and the corresponding codes in LOINC, (5) develop a single integrated governance process for managing the unified terminology, and (6) publicly distribute the terminology artifacts. Results: We developed a unified model and instantiated it in a new LOINC release artifact that contains the LOINC codes and display name (i.e., LONG_COMMON_NAME) for each procedure, mappings between LOINC and the RSNA Playbook at the procedure code level, and connections between procedure terms and their attribute values, which are expressed as LOINC Parts and RadLex IDs. We transformed all the existing LOINC content into the new model and publicly distributed it in standard releases. The organizations have also developed a joint governance process for ongoing maintenance of the terminology. Conclusions: The LOINC/RSNA Radiology Playbook provides a universal terminology standard for radiology orders and results.
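
    To illustrate the shape of the unified model described above, the sketch below represents one procedure with its LOINC code, display name, and attribute values carried as LOINC Part / RadLex ID pairs; every code and attribute name here is a placeholder chosen for illustration, not an entry from the released Playbook artifact.

```python
# Hypothetical sketch of the unified model's shape: a procedure carries a
# LOINC code and display name, plus attribute values expressed both as
# LOINC Parts and as mapped RadLex IDs. All codes below are placeholders.
procedure = {
    "loinc_code": "XXXXX-X",                        # placeholder, not a real LOINC code
    "long_common_name": "CT Chest W contrast IV",   # illustrative display name
    "attributes": {
        "Modality": {"loinc_part": "LP-PLACEHOLDER-1", "radlex_id": "RID-PLACEHOLDER-1"},
        "Region":   {"loinc_part": "LP-PLACEHOLDER-2", "radlex_id": "RID-PLACEHOLDER-2"},
        "Contrast": {"loinc_part": "LP-PLACEHOLDER-3", "radlex_id": "RID-PLACEHOLDER-3"},
    },
}

def radlex_ids(proc: dict) -> list[str]:
    """Collect the RadLex IDs mapped to a procedure's attribute values."""
    return [v["radlex_id"] for v in proc["attributes"].values()]

print(radlex_ids(procedure))
```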

    Embedding nursing interventions into the World Health Organization’s International Classification of Health Interventions (ICHI)

    Objective: The International Classification of Health Interventions (ICHI) is currently being developed. ICHI seeks to span all sectors of the health system. Our objective was to test the draft classification’s coverage of interventions commonly delivered by nurses, and propose changes to improve the utility and reliability of the classification for aggregating and analyzing data on nursing interventions. Materials and methods: A two-phase content mapping method was used: (1) three coders independently applied the classification to a data set comprising 100 high-frequency nursing interventions; (2) the coders reached consensus for each intervention and identified reasons for initial discrepancies. Results: A consensus code was found for 80 of the 100 source terms: for 34% of these the code was semantically equivalent to the source term, and for 64% it was broader. Issues that contributed to discrepancies in Phase 1 coding results included concepts in source terms not captured by the classification, ambiguities in source terms, and uncertainty of semantic matching between ‘action’ concepts in source terms and classification codes. Discussion: While the classification generally provides good coverage of nursing interventions, there remain a number of content gaps and granularity issues. Further development of definitions and coding guidance is needed to ensure consistency of application. Conclusion: This study has produced a set of proposals concerning changes needed to improve the classification. The novel method described here will inform future health terminology and classification content coverage studies.
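
    The coverage figures above imply a simple tally over consensus mapping outcomes; the snippet below sketches that tally with invented intervention terms and outcome labels (it is not the study's data or its coding tool).

```python
# Small sketch of coverage tallying: each source intervention gets a consensus
# mapping outcome, and coverage is summarized by outcome category.
# The terms and outcomes below are invented examples.
from collections import Counter

# Outcome per source term: "equivalent", "broader", or "no_match" (illustrative).
consensus = {
    "wound dressing change": "equivalent",
    "pain assessment": "broader",
    "fall-risk education": "no_match",
}

counts = Counter(consensus.values())
matched = counts["equivalent"] + counts["broader"]
print(f"coverage: {matched}/{len(consensus)} terms mapped")
for outcome, n in counts.items():
    print(f"  {outcome}: {n / len(consensus):.0%}")
```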

    ACLRO: An Ontology for the Best Practice in ACLR Rehabilitation

    With the rise of big data and the demands for leveraging artificial intelligence (AI), healthcare requires more knowledge sharing that offers machine-readable semantic formalization. Even though some applications allow shared data interoperability, they still lack formal machine-readable semantics, as in ICD-9/10 and LOINC. With an ontology, shared conceptualizations can be represented formally, as in SNOMED CT. Nevertheless, SNOMED CT mainly focuses on electronic health record (EHR) documentation and evidence-based practice. Moreover, because it is independent of data quality, an ontology can enhance advanced AI technologies, such as machine learning (ML), by providing a reusable knowledge framework. Developing a machine-readable and sharable semantic knowledge model that incorporates external evidence and an individual practice’s values would create a new revolution for best-practice medicine. The purpose of this research is to implement a sharable ontology for best practice in healthcare, with anterior cruciate ligament reconstruction (ACLR) as a case study. The ontology represents knowledge derived from both evidence-based practice (EBP) and practice-based evidence (PBE). First, the study presents how the domain-specific knowledge model is built using a combination of the Toronto Virtual Enterprise (TOVE) methodology and a bottom-up approach. Then, I propose a top-down approach using Open Biological and Biomedical Ontology (OBO) Foundry ontologies that adhere to the Basic Formal Ontology (BFO) framework. In this step, the EBP, PBE, and statistics ontologies are developed independently. Next, the study integrates these individual ontologies into the final ACLR Ontology (ACLRO), a more meaningful model that supports reusability and ease of model expansion, since the classes can grow independently of one another. Finally, the study employs a use case and DL queries for model validation. The study's innovation is to present an ontology implementation for best-practice medicine and demonstrate how it can be applied to a real-world setup with semantic information. The ACLRO simultaneously emphasizes knowledge representation in health intervention, statistics, research design, and external research evidence, while constructing classes for data-driven and patient-focused processes that allow knowledge sharing independent of any particular technology. Additionally, the model synthesizes multiple related ontologies, which leads to the successful application of best-practice medicine.
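
    As a rough sketch of the DL-query-based validation step mentioned above, the snippet below loads an OWL file with owlready2, runs a reasoner, and inspects instances of an intervention class; the file path, class name, and property name are assumptions for illustration, not identifiers from the published ACLRO.

```python
# Hypothetical validation sketch: load an ontology, classify it, and check
# that individuals under an intervention class link to supporting evidence.
# The file, class, and property names are illustrative assumptions only.
from owlready2 import get_ontology, sync_reasoner

onto = get_ontology("file://aclro.owl").load()   # assumed local OWL file

with onto:
    sync_reasoner()  # run the bundled HermiT reasoner (requires Java)

# Hypothetical class and property names.
Intervention = onto.search_one(iri="*RehabilitationIntervention")
for ind in (Intervention.instances() if Intervention else []):
    evidence = getattr(ind, "hasSupportingEvidence", [])
    print(ind.name, "evidence:", [e.name for e in evidence])
```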

    Cohort Identification Using Semantic Web Technologies: Ontologies and Triplestores as Engines for Complex Computable Phenotyping

    Electronic health record (EHR)-based computable phenotypes are algorithms used to identify individuals or populations with clinical conditions or events of interest within a clinical data repository. Due to a lack of EHR data standardization, computable phenotypes can be semantically ambiguous and difficult to share across institutions. In this research, I propose a new computable phenotyping methodological framework based on semantic web technologies, specifically ontologies, the Resource Description Framework (RDF) data format, triplestores, and Web Ontology Language (OWL) reasoning. My hypothesis is that storing and analyzing clinical data using these technologies can begin to address the critical issues of semantic ambiguity and lack of interoperability in the context of computable phenotyping. To test this hypothesis, I compared the performance of two variants of two computable phenotypes (for depression and rheumatoid arthritis, respectively). The first variant of each phenotype used a list of ICD-10-CM codes to define the condition; the second variant used ontology concepts from SNOMED and the Human Phenotype Ontology (HPO). After executing each variant of each phenotype against a clinical data repository, I compared the patients matched in each case to see where the different variants overlapped and diverged. Both the ontologies and the clinical data were stored in an RDF triplestore to allow me to assess the interoperability advantages of the RDF format for clinical data. All tested methods successfully identified cohorts in the data store, with differing rates of overlap and divergence between variants. Depending on the phenotyping use case, SNOMED and HPO’s ability to more broadly define many conditions due to complex relationships between their concepts may be seen as an advantage or a disadvantage. I also found that RDF triplestores do indeed provide interoperability advantages, despite being far less commonly used in clinical data applications than relational databases. Despite the fact that these methods and technologies are not “one-size-fits-all,” the experimental results are encouraging enough for them to (1) be put into practice in combination with existing phenotyping methods or (2) be used on their own for particularly well-suited use cases.
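
    A hedged sketch of the two phenotype variants is shown below using rdflib over an RDF graph: one SPARQL query selects patients by ICD-10-CM codes, the other by ontology concepts; the graph file, example namespace, and predicates are illustrative assumptions rather than the dissertation's actual schema.

```python
# Sketch of the two phenotype variants over an RDF store: one matches patients
# by ICD-10-CM codes, the other by ontology concepts (e.g., SNOMED/HPO).
# The graph file, namespace, and predicates are illustrative assumptions.
from rdflib import Graph

g = Graph()
g.parse("clinical_data.ttl", format="turtle")  # assumed local RDF dump

ICD_VARIANT = """
PREFIX ex: <http://example.org/clinical#>
SELECT DISTINCT ?patient WHERE {
    ?patient ex:hasDiagnosisCode ?code .
    FILTER (?code IN ("F32.9", "F33.1"))   # illustrative ICD-10-CM depression codes
}"""

CONCEPT_VARIANT = """
PREFIX ex: <http://example.org/clinical#>
SELECT DISTINCT ?patient WHERE {
    ?patient ex:hasCondition ?cond .
    ?cond ex:mappedToConcept ?concept .
    ?concept ex:subClassOfTransitive ex:DepressiveDisorder .  # assumes inferred hierarchy
}"""

icd_cohort = {str(r.patient) for r in g.query(ICD_VARIANT)}
concept_cohort = {str(r.patient) for r in g.query(CONCEPT_VARIANT)}
print("overlap:", len(icd_cohort & concept_cohort),
      "icd-only:", len(icd_cohort - concept_cohort),
      "concept-only:", len(concept_cohort - icd_cohort))
```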

    Data Analytics of Codified Patient Data: Identifying Factors Influencing Coding Trends, Productivity, and Quality

    Cost containment and quality of care have always been major challenges to the health care delivery system in the United States. Health care organizations utilize coded clinical data for health care monitoring and reporting, covering a wide range of diseases and clinical conditions along with adverse events that could occur to patients during hospitalization. Furthermore, coded clinical data is utilized for patient safety and quality of care assessment in addition to research, education, resource allocation, and health service planning. Thus, it is critical to maintain high quality standards for clinical data and to promote funding of health care research that addresses clinical data quality, due to its direct impact on individual health outcomes as well as population health. This dissertation research is aimed at identifying current coding trends and other factors that could influence coding quality and productivity through two major emphases: (1) quality of coded clinical data; and (2) productivity of clinical coding. It adopted a mixed-methods approach utilizing varied quantitative and qualitative data analysis techniques. Data analysis included a wide range of univariate, bivariate, and multivariate analyses. Results of this study have shown that length of stay (LOS), case mix index (CMI), and DRG relative weight were not significant predictors of coding quality. Based on the qualitative analysis, the history and physical (H&P), discharge summary, and progress notes were identified as the three most common resources cited by Ciox auditors for coding changes. Results have also shown that coding productivity in ICD-10 is improving over time. Length of stay, case mix index, DRG weight, and bed size were found to have a significant impact on coding productivity. Data related to coders' demographics could not be secured for this analysis. However, factors related to coders, such as education, credentials, and years of experience, are believed to have a significant impact on coding quality as well as productivity. Linking coders' demographics to coding quality and productivity data represents a promising area for future research.
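
    The multivariate analysis described above could look roughly like the sketch below, which regresses a coding productivity measure on LOS, CMI, DRG weight, and bed size with statsmodels; the file and column names are assumptions, not the study's actual variables.

```python
# Illustrative sketch of a multivariate analysis of coding productivity:
# ordinary least squares regression on LOS, CMI, DRG weight, and bed size.
# The CSV file and column names are assumptions, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("coding_productivity.csv")  # hypothetical extract

model = smf.ols(
    "records_coded_per_hour ~ length_of_stay + case_mix_index + drg_weight + bed_size",
    data=df,
).fit()
print(model.summary())  # coefficients, p-values, and fit statistics
```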

    An Evaluation of the ICD-10-CM System: Documentation Specificity, Reimbursement, and Methods for Improvement (International Classification of Diseases; 10th Revision; Clinical Modification)

    The research project consists of three studies that examine documentation specificity, reimbursement, and documentation improvement for the upcoming International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) coding system. A descriptive research study using quantitative methods was conducted for the first study, which focused on coding electronic documents across each major diagnostic chapter for ICD-10-CM. The coding was ranked according to the Watzlaf et al. (2007) study, in which a ranking score was assigned if the diagnosis was fully captured by the ICD-10-CM code sets. The ICD-10-CM codes were then compared to the current ICD-9-CM codes to evaluate the level of detail in the code descriptions. The rankings were determined by comparing the coding systems on the number of codes, the level of specificity, and the ability of the code description to fully capture the diagnostic term based on the resources available at the time of coding. A descriptive research study using quantitative methods was conducted for the second study, which focused on evaluating the reimbursement differences in coding with ICD-10-CM with and without the supporting documentation. Reimbursement amounts, or MS-DRG (Medicare Severity Diagnosis Related Groups) weight differences, were examined to demonstrate the dollars lost due to incomplete documentation. Reimbursement amounts were calculated by running the code sets through the CMS ICD-10 grouper. An exploratory descriptive research study using qualitative methods was conducted for the third study, which focused on developing a documentation improvement toolkit for providers and technology experts to guide them toward an accurate selection of codes. Furthermore, a quick reference checklist geared toward physicians, coders, and the information technology development team was developed based on their feedback and documentation needs. The results of the studies highlighted the clinical areas that needed the most documentation attention in order to code accurately in ICD-10-CM and the associated potential loss of revenue due to absent documentation. Further, the results from the educational toolkit could be used in the development of a better inpatient Computer Assisted Coding (CAC) product.
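
    The reimbursement comparison in the second study can be illustrated with the simple calculation below, which converts MS-DRG weight differences (with versus without supporting documentation) into dollars; the base rate, case list, and weights here are hypothetical, whereas the study derived its figures from the CMS ICD-10 grouper.

```python
# Hedged sketch of quantifying reimbursement lost to incomplete documentation:
# compare MS-DRG relative weights assigned with vs. without the supporting
# documentation and convert the gap to dollars. All values are hypothetical.
BASE_RATE = 6000.00  # hypothetical hospital base payment rate, in dollars

cases = [
    # (case_id, drg_weight_with_docs, drg_weight_without_docs) - illustrative
    ("case-01", 1.85, 1.42),
    ("case-02", 0.97, 0.97),
    ("case-03", 2.31, 1.88),
]

total_loss = sum((w_with - w_without) * BASE_RATE
                 for _, w_with, w_without in cases)
print(f"Estimated revenue at risk from missing documentation: ${total_loss:,.2f}")
```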

    A standards-based ICT framework to enable a service-oriented approach to clinical decision support

    This research provides evidence that standards-based Clinical Decision Support (CDS) at the point of care is an essential ingredient of electronic healthcare service delivery. A Service-Oriented Architecture (SOA)-based solution is explored that serves as a task management system to coordinate complex, distributed, and disparate IT systems, processes, and resources (human and computer) to provide standards-based CDS. This research offers a solution to the challenges of implementing computerised CDS, such as integration with heterogeneous legacy systems and reuse of components and services to reduce costs and save time. The benefits of a sharable CDS service that can be reused by different healthcare practitioners to provide collaborative patient care are demonstrated. The solution provides orchestration among different services by extracting data from sources such as patient databases, clinical knowledge bases, and evidence-based clinical guidelines (CGs) in order to facilitate multiple CDS requests coming from different healthcare settings. The architecture aims to help users at different levels of Healthcare Delivery Organizations (HCOs) maintain a CDS repository, along with monitoring and managing services, thus enabling transparency. The research employs the Design Science Research Methodology (DSRM) combined with The Open Group Architecture Framework (TOGAF), an Enterprise Architecture Framework (EAF) maintained by The Open Group. DSRM’s iterative capability addresses the rapidly evolving nature of workflows in healthcare. This SOA-based solution uses standards-based open-source technologies and platforms and the latest healthcare standards from HL7 and OMG, including the Decision Support Service (DSS) and Retrieve, Locate, and Update Service (RLUS) standards. Combining business process management (BPM) technologies and business rules with SOA ensures the HCO’s capability to manage its processes. The architectural solution is evaluated by successfully implementing evidence-based CGs at the point of care in areas such as: (a) Diagnostics (Chronic Obstructive Disease), (b) Urgent Referral (Lung Cancer), and (c) Genome testing and integration with CDS in screening (Lynch’s syndrome). In addition to medical care, the CDS solution can benefit organizational processes for collaborative care delivery by connecting patients, physicians, and other associated members. This framework facilitates integration of the different types of CDS suited to different healthcare processes, enabling sharable CDS capabilities within and across organizations.
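
    To make the orchestration idea concrete, the sketch below shows a simplified evaluate-style CDS call that pulls patient data from a retrieval stand-in and applies a guideline rule; it is a schematic stand-in, not the HL7 DSS/RLUS interfaces or the thesis's implementation, and all names and thresholds are illustrative.

```python
# Schematic sketch of service orchestration for CDS: an "evaluate"-style call
# retrieves patient data and applies a guideline rule to produce a
# recommendation. Simplified stand-ins only; names and logic are illustrative.
from dataclasses import dataclass

@dataclass
class CDSRequest:
    patient_id: str
    guideline_id: str  # e.g., an urgent-referral guideline identifier

def fetch_patient_data(patient_id: str) -> dict:
    """Stand-in for a data retrieval service (an RLUS-like role)."""
    return {"age": 67, "smoker": True, "symptom_weeks": 4}  # mock record

def evaluate(request: CDSRequest) -> str:
    """Stand-in for a decision support service (a DSS-like role)."""
    data = fetch_patient_data(request.patient_id)
    if data["smoker"] and data["symptom_weeks"] >= 3:
        return "Recommend urgent referral per guideline " + request.guideline_id
    return "No action triggered"

print(evaluate(CDSRequest("patient-123", "urgent-referral-v1")))
```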

    Semantic concept extraction from electronic medical records for enhancing information retrieval performance

    With the healthcare industry increasingly using EMRs, an opportunity emerges for knowledge discovery within the healthcare domain that was not possible with paper-based medical records. One such opportunity is to discover Unified Medical Language System (UMLS) concepts from EMRs. However, with opportunities come challenges that need to be addressed. Medical verbiage is very different from common English verbiage, and it is reasonable to assume that extracting information from medical text requires different protocols than those currently used for common English text. This thesis proposes two new semantic matching models: Term-Based Matching and CUI-Based Matching. These two models use specialized biomedical text mining tools that extract medical concepts from EMRs. Extensive experiments to rank the extracted concepts are conducted on the University of Pittsburgh BLULab NLP Repository for the TREC 2011 Medical Records track, a dataset that consists of 101,711 EMRs containing concepts in 34 predefined topics. This thesis compares the proposed semantic matching models against the traditional weighting equations and information retrieval tools used in the academic world today.
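
    A minimal sketch of the CUI-based matching idea follows: query and document terms are mapped to UMLS concept unique identifiers (CUIs) and documents are scored by concept overlap; the term-to-CUI table is a hypothetical stand-in for a biomedical concept extraction tool, and the CUIs shown are placeholders, not verified UMLS entries.

```python
# Minimal sketch of CUI-based matching: map query and document terms to UMLS
# CUIs, then rank documents by the overlap of their CUI sets. The term-to-CUI
# table is a hypothetical stand-in for a concept extraction tool; the CUIs
# are placeholders, not verified UMLS entries.
term_to_cui = {
    "heart attack": "C-PLACEHOLDER-1",
    "myocardial infarction": "C-PLACEHOLDER-1",  # synonyms share one CUI
    "hypertension": "C-PLACEHOLDER-2",
}

def to_cuis(text: str) -> set[str]:
    """Map any known terms appearing in the text to their CUIs."""
    return {cui for term, cui in term_to_cui.items() if term in text.lower()}

def score(query: str, document: str) -> int:
    """Rank by the number of shared concepts rather than shared words."""
    return len(to_cuis(query) & to_cuis(document))

print(score("patient with heart attack",
            "history of myocardial infarction and hypertension"))  # 1
```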