
    Improving Health Information Exchange by Combining Interoperable Resources and Digital Collection Integration Tools

    Health information exchange plays a key role in any healthcare system. A major barrier to this exchange is the lack of mechanisms and tools able to fully integrate the information models that underlie existing healthcare systems. This paper presents an approach to gathering multiple sources of clinical data by taking advantage of the HL7 FHIR (Fast Healthcare Interoperability Resources) format. Using this standard format together with the Clavy tool enables a powerful approach to managing digital health collections that can easily be exchanged across different healthcare domains. Several content items for healthcare training, based on e-learning standards, have been generated from a clinical dataset that combines FHIR resources and DICOM images. This generation process demonstrates the approach's capability to cope with the exchange of health information based on multiple multimedia formats and controlled medical vocabularies.
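    The pairing of FHIR resources with DICOM imaging that the abstract describes can be illustrated with a minimal sketch. All identifiers, names, and UID values below are hypothetical, and a real pipeline would use a FHIR server or a validation library rather than hand-built dictionaries:

```python
import json

# Hypothetical data: a FHIR Patient plus an ImagingStudy resource that links
# the patient record to a DICOM series, collected into a FHIR Bundle.
patient = {
    "resourceType": "Patient",
    "id": "example-patient",
    "name": [{"family": "Doe", "given": ["Jane"]}],
}

imaging_study = {
    "resourceType": "ImagingStudy",
    "id": "example-study",
    "status": "available",
    # Cross-resource link: the study points back at the patient.
    "subject": {"reference": f"Patient/{patient['id']}"},
    "series": [{
        # DICOM Series Instance UID (invented value for illustration).
        "uid": "1.2.840.113619.2.5.1762583153.215519.978957063.78",
        "modality": {"system": "http://dicom.nema.org/resources/ontology/DCM",
                     "code": "CT"},
    }],
}

bundle = {
    "resourceType": "Bundle",
    "type": "collection",
    "entry": [{"resource": patient}, {"resource": imaging_study}],
}

# A Bundle like this is plain JSON, so it can be exchanged or archived as-is.
print(json.dumps(bundle, indent=2)[:80])
```

Because every resource carries its own `resourceType` and references other resources by relative URL, heterogeneous sources (structured records, imaging metadata) can be merged into one exchangeable collection.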

    A Consensus-based Data Quality Assessment Model for Patient Reported Outcome Information in Digital Quality Measurement Programs

    Quality measurement has been evolving to become more patient-focused and more meaningful in supporting quality improvement. Recent advancements in digital data and measurement standards have made this evolution possible, but the move to digital measurement presents several challenges despite its many benefits. Digital quality measures (dQMs) substantially reduce the computational burden of generating “quality” knowledge and improve the reliability of the measure scores they generate; however, they rely on a very specific presentation of the electronic data to achieve these benefits. Newer dQMs based on patient-reported outcomes (PROs) measured using patient-reported outcome measures (PROMs) have been gaining attention because they generate valuable insight into a person’s perception of their own health status. Reliably capturing these insights is challenging, however, as the information often does not exist in a format that can be processed by a dQM. This lack of standardization has led to the formation of clinical data repositories (CDRs) for the explicit purpose of extracting, transforming, and loading (ETL) PROM data from patients’ medical records into a format that can support digital quality measurement. These ETL processes are subject to rigorous evaluation to ensure that, as the information is transformed, the integrity of the original information is preserved. These evaluations inform decisions about data fitness for the specific purpose of using the data to measure quality of care. Such “fit for purpose” decisions are not guided by a uniform set of expectations or requirements to ensure consistency in decision-making; rather, they frequently rely on a variety of statistical and operational test results that can present seemingly inconsistent information and require substantial expertise to interpret and reconcile.
A uniform, well-defined list of data quality concepts pertinent to using patient-reported outcome measures for quality measurement would provide much-needed guidance and enhance the consistency and reliability of data fitness decision-making. This research confirmed the scarcity of access to effective guidance for assessing the fitness of PROM data and that there is a desire for a standard PROM-based data quality assessment (DQA) model to support decision-making.
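    As a hedged illustration of the kind of ETL-time data quality assessment described above, the sketch below applies simple completeness and plausibility rules to PROM records. The field names, instrument name, and score range are invented for the example and are not drawn from any specific DQA model:

```python
# Rule-based data quality checks an ETL pipeline might run on
# patient-reported outcome (PROM) records before loading them into a CDR.
def assess_prom_record(record):
    """Return a list of data-quality issues found in one PROM record."""
    issues = []
    # Completeness: required fields must be present and non-empty.
    for field in ("patient_id", "instrument", "score", "collected_at"):
        if not record.get(field):
            issues.append(f"missing:{field}")
    # Plausibility: assume scores for this instrument fall in 0-100.
    score = record.get("score")
    if score is not None and not (0 <= score <= 100):
        issues.append("out_of_range:score")
    return issues

records = [
    {"patient_id": "p1", "instrument": "PROMIS-GH", "score": 48.2,
     "collected_at": "2023-04-01"},
    {"patient_id": "p2", "instrument": "PROMIS-GH", "score": 412,
     "collected_at": "2023-04-02"},
]
report = {r["patient_id"]: assess_prom_record(r) for r in records}
print(report)  # p1 passes; p2 is flagged for an implausible score
```

A standard DQA model would, in effect, agree on which checks like these are required and how their results translate into a "fit for purpose" decision.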

    Beacon v2 and Beacon networks: A "lingua franca" for federated data discovery in biomedical genomics, and beyond

    Beacon is a basic data discovery protocol issued by the Global Alliance for Genomics and Health (GA4GH). The main goal of version 1 of the Beacon protocol was to test the feasibility of broadly sharing human genomic data by providing simple "yes" or "no" responses to queries about the presence of a given variant in datasets hosted by Beacon providers. The popularity of this concept has fostered the design of version 2, which better serves real-world requirements and addresses the needs of clinical genomics research and healthcare, as assessed by several contributing projects and organizations. In particular, rare disease genetics and cancer research will benefit from new case-level and genomic variant-level requests, richer phenotype and clinical queries, and support for fuzzy searches. Beacon is designed as a "lingua franca" to bridge data collections hosted in software solutions with different and rich interfaces. Beacon version 2 works alongside popular standards such as Phenopackets, OMOP, and FHIR, allowing implementing consortia to return matches in Beacon responses and provide a handover to their preferred data exchange format. The protocol is being explored by other research domains and is being tested in several international projects.
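    The "yes"/"no" variant query at the heart of the Beacon protocol can be sketched as follows. The request and response shapes loosely follow Beacon v2's request parameters and boolean response summary, but the exact fields and values here are illustrative, not the normative schema:

```python
# A Beacon-style boolean query: given the variants a (hypothetical) node
# holds, answer whether a requested variant is present, without exposing
# any record-level data.
request = {
    "meta": {"apiVersion": "v2.0"},
    "query": {
        "requestParameters": {
            "referenceName": "17",
            "start": 43057062,
            "referenceBases": "G",
            "alternateBases": "A",
        },
        "requestedGranularity": "boolean",
    },
}

# Toy dataset of variants held by this node: (chrom, pos, ref, alt) tuples.
local_variants = {
    ("17", 43057062, "G", "A"),
    ("13", 32315474, "T", "C"),
}

p = request["query"]["requestParameters"]
key = (p["referenceName"], p["start"], p["referenceBases"], p["alternateBases"])

response = {
    "meta": {"apiVersion": "v2.0"},
    "responseSummary": {"exists": key in local_variants},
}
print(response["responseSummary"])  # {'exists': True}
```

Because the boolean response leaks only presence or absence, a federated network of such nodes can be queried broadly while each provider keeps control of its data.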

    Scalable and accurate deep learning for electronic health records

    Predictive modeling with electronic health record (EHR) data is anticipated to drive personalized medicine and improve healthcare quality. Constructing predictive statistical models typically requires extraction of curated predictor variables from normalized EHR data, a labor-intensive process that discards the vast majority of information in each patient's record. We propose a representation of patients' entire, raw EHR records based on the Fast Healthcare Interoperability Resources (FHIR) format. We demonstrate that deep learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization. We validated our approach using de-identified EHR data from two U.S. academic medical centers with 216,221 adult patients hospitalized for at least 24 hours. In the sequential format we propose, this volume of EHR data unrolled into a total of 46,864,534,945 data points, including clinical notes. Deep learning models achieved high accuracy for tasks such as predicting in-hospital mortality (AUROC across sites 0.93-0.94), 30-day unplanned readmission (AUROC 0.75-0.76), prolonged length of stay (AUROC 0.85-0.86), and all of a patient's final discharge diagnoses (frequency-weighted AUROC 0.90). These models outperformed state-of-the-art traditional predictive models in all cases. We also present a case study of a neural-network attribution system, which illustrates how clinicians can gain some transparency into the predictions. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios, complete with explanations that directly highlight evidence in the patient's chart.
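    The sequential representation described above, in which raw FHIR resources are unrolled into a time-ordered event stream rather than hand-curated predictor variables, can be sketched as follows. The resource contents are invented for illustration; this is not the paper's actual pipeline:

```python
# Toy excerpt of one patient's raw FHIR resources (fields invented).
fhir_resources = [
    {"resourceType": "Observation", "code": "heart-rate", "value": 88,
     "time": "2018-01-01T08:00:00Z"},
    {"resourceType": "MedicationRequest", "code": "insulin",
     "time": "2018-01-01T07:30:00Z"},
    {"resourceType": "Condition", "code": "type-2-diabetes",
     "time": "2017-12-30T12:00:00Z"},
]

# "Unroll" the record: sort every event by timestamp and emit
# (time, resourceType, code) tokens. No per-site feature engineering or
# data harmonization is required; a sequence model consumes the stream.
sequence = [
    (r["time"], r["resourceType"], r["code"])
    for r in sorted(fhir_resources, key=lambda r: r["time"])
]
for event in sequence:
    print(event)
```

Keeping the entire record as an ordered sequence is what lets the same model architecture transfer across sites: each site only has to export FHIR, not map its data to a shared predictor schema.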

    Teaching and Collecting Technical Standards: A Handbook for Librarians and Educators

    Technical standards are a vital source of information, providing guidelines for the design, manufacture, testing, and use of whole products, materials, and components. To prepare students, especially engineering students, for the workforce, universities are increasing the use of standards within the curriculum, and employers believe it is important for recent university graduates to be familiar with standards. Despite the critical role standards play within academia and the workforce, little information is available on the development of standards information literacy, which includes the ability to understand the standardization process; identify types of standards; and locate, evaluate, and use standards effectively. Libraries and librarians are a critical part of standards education, and much of the discussion has focused on the curation of standards within libraries. However, librarians also have substantial experience in developing and teaching standards information literacy curricula. With the need for universities to develop a workforce that is well educated on the use of standards, librarians and course instructors can apply their experience in information literacy toward teaching students the knowledge and skills regarding standards that they will need to be successful in their field. This title provides background information for librarians on technical standards as well as collection development best practices. It also creates a model for librarians and course instructors to use when building a standards information literacy curriculum.

    Preface
