
    AutoDiscern: Rating the Quality of Online Health Information with Hierarchical Encoder Attention-based Neural Networks

    Patients increasingly turn to search engines and online content before, or in place of, talking with a health professional. Low-quality health information, which is common on the internet, presents risks to the patient in the form of misinformation and a possibly poorer relationship with their physician. To address this, the DISCERN criteria (developed at the University of Oxford) are used to evaluate the quality of online health information. However, patients are unlikely to take the time to apply these criteria to the health websites they visit. We built an automated implementation of the DISCERN instrument (Brief version) using machine learning models. We compared the performance of a traditional model (Random Forest) with that of a hierarchical encoder attention-based neural network (HEA) model using two language embeddings, BERT and BioBERT. The HEA BERT and BioBERT models achieved average F1-macro scores across all criteria of 0.75 and 0.74, respectively, outperforming the Random Forest model (average F1-macro = 0.69). Overall, the neural network-based models achieved 81% and 86% average accuracy at 100% and 80% coverage, respectively, compared to 94% manual rating accuracy. The attention mechanism implemented in the HEA architectures not only provided 'model explainability' by identifying reasonable supporting sentences for the documents fulfilling the Brief DISCERN criteria, but also boosted F1 performance by 0.05 compared to the same architecture without an attention mechanism. Our research suggests that it is feasible to automate online health information quality assessment, which is an important step towards empowering patients to become informed partners in the healthcare process.
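
    The hierarchical encoder attention idea behind this model can be sketched compactly: each sentence is embedded (e.g. by BERT or BioBERT), an attention layer scores the sentence vectors, and the softmax-normalized scores both pool the sentences into a document vector and expose which sentences support a criterion. A minimal PyTorch sketch follows; all names and sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of attention pooling over sentence encodings, in the spirit of
# a hierarchical encoder attention (HEA) model. Sizes and names are assumptions.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Scores each sentence vector and returns an attention-weighted document vector."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, sent_vecs):               # sent_vecs: (num_sents, dim)
        scores = self.scorer(sent_vecs)         # (num_sents, 1)
        weights = torch.softmax(scores, dim=0)  # one weight per sentence
        doc_vec = (weights * sent_vecs).sum(dim=0)
        return doc_vec, weights.squeeze(-1)     # weights identify supporting sentences

# Example: 12 sentences, each embedded to 768 dims (typical BERT hidden size).
sent_vecs = torch.randn(12, 768)
pool = AttentionPooling(768)
doc_vec, attn = pool(sent_vecs)
criterion_head = nn.Linear(768, 2)              # met / not met for one Brief DISCERN criterion
logits = criterion_head(doc_vec)
```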

    Neural networks versus Logistic regression for 30 days all-cause readmission prediction

    Heart failure (HF) is one of the leading causes of hospital admissions in the US. Readmission within 30 days after an HF hospitalization is both a recognized indicator of disease progression and a source of considerable financial burden to the healthcare system. Consequently, the identification of patients at risk for readmission is a key step in improving disease management and patient outcomes. In this work, we used a large administrative claims dataset to (1) explore the systematic application of neural network-based models versus logistic regression for predicting 30-day all-cause readmission after discharge from an HF admission, and (2) examine the additive value of patients' hospitalization timelines on prediction performance. Based on data from 272,778 (49% female) patients with a mean (SD) age of 73 (14) years and 343,328 HF admissions (67% of total admissions), we trained and tested our predictive readmission models following a stratified 5-fold cross-validation scheme. Among the deep learning approaches, a recurrent neural network (RNN) combined with conditional random fields (CRF) model (RNNCRF) achieved the best performance in readmission prediction with 0.642 AUC (95% CI, 0.640-0.645). Other models, such as those based on RNN, convolutional neural networks, and CRF alone, had lower performance, with a non-timeline-based model (MLP) performing worst. A competitive model based on logistic regression with LASSO achieved a performance of 0.643 AUC (95% CI, 0.640-0.646). We conclude that data from patient timelines improve 30-day readmission prediction for neural network-based models, that logistic regression with LASSO performs as well as the best neural network model, and that the use of administrative data results in competitive performance compared to published approaches based on richer clinical datasets.
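
    The strong classical baseline here, logistic regression with LASSO (L1) regularization under stratified 5-fold cross-validation, is easy to reproduce in outline. The sketch below uses synthetic placeholder features; the study's administrative claims data and exact regularization strength are not reproduced.

```python
# Hedged sketch of an L1-regularized (LASSO) logistic regression baseline for
# 30-day readmission prediction, evaluated with stratified 5-fold CV as in the
# paper. Features and labels are random stand-ins for claims-derived data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))       # placeholder claims-derived features
y = rng.integers(0, 2, size=1000)     # 1 = readmitted within 30 days (synthetic)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)  # C is an assumed value
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC across folds: {aucs.mean():.3f}")
```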

    Comparison of Open-Source Electronic Health Record Systems Based on Functional and User Performance Criteria

    Objectives: Open-source Electronic Health Record (EHR) systems have gained importance. The main aim of our research is to guide organizational choice by comparing the features, functionality, and user-facing system performance of the five most popular open-source EHR systems. Methods: We performed qualitative content analysis with a directed approach on recently published literature (2012-2017) to develop an integrated set of criteria for comparing the EHR systems. The functional criteria are an integration of the literature, meaningful use criteria, and the Institute of Medicine's functional requirements for EHRs, whereas user-facing system performance is based on the time required to perform basic tasks within the EHR system. Results: Based on the Alexa web ranking and Google Trends, the five most popular EHR systems at the time of our study were OSEHRA VistA, GNU Health, the Open Medical Record System (OpenMRS), Open Electronic Medical Record (OpenEMR), and OpenEHR. We also identified trends in the popularity of the EHR systems and the locations where each was more popular than the others. OpenEMR met all 32 functional criteria, OSEHRA VistA met 28, OpenMRS met 12 fully and 11 partially, OpenEHR-based EHR met 10 fully and 3 partially, and GNU Health met the fewest, with only 10 criteria fully and 2 partially. Conclusions: Based on our functional criteria, OpenEMR is the most promising EHR system, closely followed by VistA. With regard to user-facing system performance, OpenMRS outperforms OpenEMR.
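
    One way to make the fully/partially-met counts directly comparable is a simple composite score (full = 1, partial = 0.5). The weighting below is our assumption for illustration; the paper reports raw counts rather than a composite score.

```python
# Hedged sketch: rank the five EHR systems by a composite criteria score,
# counting a fully met criterion as 1 and a partially met one as 0.5.
counts = {  # system: (fully met, partially met) out of 32 functional criteria
    "OpenEMR":      (32, 0),
    "OSEHRA VistA": (28, 0),
    "OpenMRS":      (12, 11),
    "OpenEHR":      (10, 3),
    "GNU Health":   (10, 2),
}
ranked = sorted(counts.items(), key=lambda kv: kv[1][0] + 0.5 * kv[1][1], reverse=True)
for system, (full, partial) in ranked:
    print(f"{system:13s} score = {full + 0.5 * partial:4.1f} / 32")
```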

    Automatic Clustering with Single Optimal Solution

    Determining the optimal number of clusters in a dataset is a challenging task. Though some methods are available, no existing algorithm produces a unique clustering solution. This paper proposes Automatic Merging for Single Optimal Solution (AMSOS), which aims to generate a unique and nearly optimal clustering for a given dataset automatically. AMSOS iteratively merges the closest clusters, validating each merge with a cluster validity measure, to find a single, nearly optimal clustering for the given dataset. Experiments on both synthetic and real data show that the proposed algorithm finds a single, nearly optimal clustering structure in terms of number of clusters, compactness, and separation. Comment: 13 pages, 4 tables, 3 figures.
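
    The merge-and-validate idea is straightforward to sketch: generate candidate partitions by agglomeratively merging the closest clusters, score each partition with a cluster validity measure, and keep the single best one. The sketch below uses Ward linkage and the silhouette coefficient as stand-ins; the paper's exact merge rule and validity measure may differ.

```python
# Hedged sketch of merging closest clusters and selecting one near-optimal
# partition via a validity measure (silhouette used here as an assumption).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)  # synthetic data
Z = linkage(X, method="ward")        # agglomerative merging of closest clusters

best_k, best_score = None, -1.0
for k in range(10, 1, -1):           # from over-segmented down to 2 clusters
    labels = fcluster(Z, t=k, criterion="maxclust")
    score = silhouette_score(X, labels)   # balances compactness and separation
    if score > best_score:
        best_k, best_score = k, score
print(f"single near-optimal solution: k = {best_k} (silhouette = {best_score:.2f})")
```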

    Transaction Costs: A Conceptual Framework

    Transaction costs (TC) are a very important topic, especially in a changing work environment with a large number of operating firms and increasing business growth. The aim of this paper is to shed light on the transaction costs concept and provide a conceptual framework for understanding the meaning of transaction costs. Publications, including articles and research papers, have explained the notion of transaction costs and the theoretical issues related to them. The literature review reveals that transaction costs are costs which arise because of a company's activities in the market, including fees, commissions, and taxes, which are paid by the firm to provide a service or produce a good, either to external parties or as internal costs. Therefore, according to the literature review, it emerges that firms must compare internal and external transaction costs and choose the lower cost, which enables them to increase profits. This means companies have to reduce transaction costs to the minimum level to achieve more profit and competitive advantage.