
    Detection of primary Sjögren's syndrome in primary care: developing a classification model with the use of routine healthcare data and machine learning

    Background: Primary Sjögren's Syndrome (pSS) is a rare autoimmune disease that is difficult to diagnose due to its variety of clinical presentations, resulting in misdiagnosis and late referral to specialists. To improve early-stage disease recognition, this study aimed to develop an algorithm to identify possible pSS patients in primary care. We built a machine learning algorithm based on combined healthcare data as a first step towards a clinical decision support system. Method: Routine healthcare data, consisting of primary care electronic health record (EHR) data and hospital claims data (HCD), were linked at the patient level and covered 1411 pSS and 929,179 non-pSS patients. Logistic regression (LR) and random forest (RF) models were used to classify patients using age, gender, diseases and symptoms, prescriptions and GP visits. Results: The LR and RF models had an AUC of 0.82 and 0.84, respectively. Many actual pSS patients were found (sensitivity LR = 72.3%, RF = 70.1%), specificity was 74.0% (LR) and 77.9% (RF), and the negative predictive value was 99.9% for both models. However, most patients classified as pSS patients did not have a diagnosis of pSS in secondary care (positive predictive value LR = 0.4%, RF = 0.5%). Conclusion: This is the first study to use machine learning to classify patients with pSS in primary care using GP EHR data. Our algorithm has the potential to support the early recognition of pSS in primary care and should be validated and optimized in clinical practice. To further enhance its ability to detect pSS in primary care, we suggest refining the algorithm in collaboration with experienced clinicians.
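    The contrast between the high negative predictive value and the very low positive predictive value reported above follows directly from the rarity of pSS. A minimal sketch of that arithmetic via Bayes' rule, using the RF figures from the abstract (sensitivity 70.1%, specificity 77.9%, 1411 cases among 930,590 patients):

```python
# Why ~70% sensitivity and ~78% specificity still yield a PPV near 0.5%
# when the disease is rare: Bayes' rule with the RF figures from the abstract.
def ppv_npv(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence              # true positives per patient
    fp = (1 - specificity) * (1 - prevalence)  # false positives per patient
    tn = specificity * (1 - prevalence)        # true negatives per patient
    fn = (1 - sensitivity) * prevalence        # false negatives per patient
    return tp / (tp + fp), tn / (tn + fn)

prevalence = 1411 / (1411 + 929_179)           # pSS cases in the linked cohort
ppv, npv = ppv_npv(sensitivity=0.701, specificity=0.779, prevalence=prevalence)
print(f"PPV = {ppv:.2%}, NPV = {npv:.2%}")     # → PPV = 0.48%, NPV = 99.94%
```

    At a prevalence of roughly 0.15%, even a fairly specific model flags far more healthy patients than true cases, which is why the reported PPV stays near 0.5% despite good discrimination.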

    Improving Prediction of Favourable Outcome After 6 Months in Patients with Severe Traumatic Brain Injury Using Physiological Cerebral Parameters in a Multivariable Logistic Regression Model.

    BACKGROUND/OBJECTIVE: Current severe traumatic brain injury (TBI) outcome prediction models calculate the chance of unfavourable outcome after 6 months based on parameters measured at admission. We aimed to improve current models by adding continuously measured neuromonitoring data from the first 24 h after the start of intensive care unit neuromonitoring. METHODS: Forty-five severe TBI patients with intracranial pressure/cerebral perfusion pressure monitoring from two teaching hospitals covering the period May 2012 to January 2019 were analysed. Fourteen high-frequency physiological parameters were selected over multiple time periods after the start of neuromonitoring (0-6 h, 0-12 h, 0-18 h, 0-24 h). Besides systemic physiological parameters and the extended Corticosteroid Randomisation After Significant Head Injury (CRASH) score, we added estimates of (dynamic) cerebral volume, cerebral compliance and cerebrovascular pressure reactivity indices to the model. A logistic regression model was trained for each time period on selected parameters to predict outcome after 6 months. The parameters were selected using forward feature selection. Each model was validated by leave-one-out cross-validation. RESULTS: A logistic regression model using CRASH as the sole parameter resulted in an area under the curve (AUC) of 0.76. For each time period, an increased AUC was found using up to 5 additional parameters. The highest AUC (0.90) was found for the 0-6 h period using 5 parameters that describe mean arterial blood pressure and physiological cerebral indices. CONCLUSIONS: Current TBI outcome prediction models can be improved by the addition of neuromonitoring bedside parameters measured continuously within the first 24 h after the start of neuromonitoring. As these factors might be modifiable by treatment during admission, testing in a larger (multicentre) data set is warranted.
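    With only 45 patients, leave-one-out cross-validation as described above trains on all patients but one and tests on the held-out case. A minimal sketch of that scheme; `fit` and `predict` are hypothetical stand-ins for the study's logistic regression with forward feature selection, and the toy model below simply predicts the majority training label:

```python
# Hedged sketch of the validation scheme: leave-one-out cross-validation
# holds each patient out once and trains on the rest.
def loo_cross_validate(X, y, fit, predict):
    preds = []
    for i in range(len(X)):
        model = fit(X[:i] + X[i+1:], y[:i] + y[i+1:])  # train without patient i
        preds.append(predict(model, X[i]))             # predict the held-out case
    return preds

# Toy demonstration: a "model" that just predicts the majority training label.
fit = lambda X, y: round(sum(y) / len(y))
predict = lambda model, x: model
print(loo_cross_validate([[0]] * 4, [1, 1, 1, 0], fit, predict))  # → [1, 1, 1, 1]
```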

    Elite discourse and institutional innovation: making the hybrid happen in English public services

    This paper focuses on the strategic role of elites in managing institutional and organizational change within English public services, framed by the wider ideological and political context of neo-liberalism and its pervasive impact on the social and economic order over recent decades. It also highlights the unintended consequences of this elite-driven programme of institutional reform as realized in the emergence of hybridized regimes of ‘polyarchic governance’ and the innovative discursive and organizational technologies on which they depend. Within the latter, ‘leaderism’ is identified as a hegemonic ‘discursive imaginary’ that has the potential to connect selected marketization and market control elements of new public management (NPM), network governance, and visionary and shared leadership practices that ‘make the hybrid happen’ in public services reform.

    Peri-operative red blood cell transfusion in neonates and infants: NEonate and Children audiT of Anaesthesia pRactice IN Europe: A prospective European multicentre observational study

    BACKGROUND: Little is known about current clinical practice concerning peri-operative red blood cell transfusion in neonates and small infants. Guidelines suggest transfusions based on haemoglobin thresholds ranging from 8.5 to 12 g dl-1, distinguishing between children from birth to day 7 (week 1), from day 8 to day 14 (week 2) or from day 15 (≥week 3) onwards. OBJECTIVE: To observe peri-operative red blood cell transfusion practice according to guidelines in relation to patient outcome. DESIGN: A multicentre observational study. SETTING: The NEonate-Children sTudy of Anaesthesia pRactice IN Europe (NECTARINE) trial recruited patients up to 60 weeks' postmenstrual age undergoing anaesthesia for surgical or diagnostic procedures from 165 centres in 31 European countries between March 2016 and January 2017. PATIENTS: The data included 5609 patients undergoing 6542 procedures. The inclusion criterion was a peri-operative red blood cell transfusion. MAIN OUTCOME MEASURES: The primary endpoint was the haemoglobin level triggering a transfusion for neonates in week 1, week 2 and week 3. Secondary endpoints were transfusion volumes, 'delta haemoglobin' (preprocedure minus transfusion-triggering) and 30-day and 90-day morbidity and mortality. RESULTS: Peri-operative red blood cell transfusions were recorded during 447 procedures (6.9%). The median haemoglobin levels triggering a transfusion were 9.6 [IQR 8.7 to 10.9] g dl-1 for neonates in week 1, 9.6 [7.7 to 10.4] g dl-1 in week 2 and 8.0 [7.3 to 9.0] g dl-1 in week 3. The median transfusion volume was 17.1 [11.1 to 26.4] ml kg-1 with a median delta haemoglobin of 1.8 [0.0 to 3.6] g dl-1. Thirty-day morbidity was 47.8% with an overall mortality of 11.3%. CONCLUSIONS: Results indicate lower transfusion-triggering haemoglobin thresholds in clinical practice than suggested by current guidelines. The high morbidity and mortality of this NECTARINE sub-cohort call for investigative action and evidence-based guidelines addressing peri-operative red blood cell transfusion strategies. TRIAL REGISTRATION: ClinicalTrials.gov, identifier: NCT02350348.
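    The 'delta haemoglobin' endpoint and the median [IQR] summaries above reduce to simple arithmetic. A minimal sketch using invented haemoglobin values, not study data:

```python
import statistics

# Hedged sketch: delta haemoglobin = pre-procedure level minus the level that
# triggered the transfusion, summarised as median [IQR]; values are invented.
def median_iqr(values):
    """Median with interquartile range (inclusive quartiles)."""
    q = statistics.quantiles(values, n=4, method="inclusive")
    return statistics.median(values), (q[0], q[2])

pre_hb  = [11.4, 10.2, 9.6, 12.0]   # hypothetical pre-procedure Hb, g/dl
trig_hb = [ 9.6,  9.6, 9.6,  8.4]   # hypothetical transfusion triggers, g/dl
delta = [p - t for p, t in zip(pre_hb, trig_hb)]
med, (q1, q3) = median_iqr(delta)
print(f"delta Hb: {med:.1f} [{q1:.2f} to {q3:.2f}] g/dl")
```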

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but it cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses the instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence-per-cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
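    The recommended microsphere approach amounts to fitting a conversion factor between OD and known particle counts from the serial dilution. A minimal sketch assuming a perfectly linear instrument response; all numbers are invented, not from the study:

```python
# Hedged sketch, not the paper's protocol: fit a single conversion factor
# (particles per OD unit) from a two-fold serial dilution of microspheres
# with known counts, then use it to convert a culture's OD to a cell count.
def fit_calibration(od, counts):
    """Least-squares slope through the origin: counts ≈ slope * OD."""
    return sum(o * c for o, c in zip(od, counts)) / sum(o * o for o in od)

counts = [4e8, 2e8, 1e8, 5e7]        # known microsphere counts per ml
od     = [0.80, 0.40, 0.20, 0.10]    # measured OD within the linear range
slope = fit_calibration(od, counts)
cells_per_ml = slope * 0.25          # estimated count for a culture at OD 0.25
print(f"{slope:.3g} particles per OD unit")  # → 5e+08
```

    In practice, readings outside the instrument's effective linear range would be excluded before fitting, which is part of what the calibration protocol assesses.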

    Economics education and value change: The role of program-normative homogeneity and peer influence

    In the light of corporate scandals and the recent financial crisis, there has been an increased interest in the impact of business education on the value orientations of graduates. Yet our understanding of how students' values change during their time at business school is limited. In this study, we investigate the effects of variations in the normative orientations of economics programs. We argue that interaction among economics students constitutes a key mechanism of value socialization, the effects of which are likely to vary across more-or-less normatively homogeneous economics programs. In normatively homogeneous programs, students are particularly likely to adopt economics values as a result of peer interaction. We specifically explore changes in power, hedonism, and self-direction values in a 2-year longitudinal study of economics students (N = 197) in one normatively homogeneous and two normatively heterogeneous economics programs. As expected, for students in a normatively homogeneous economics program, interaction with peers was linked with an increase in power and hedonism values, and a decrease in self-direction values. Our findings highlight the interplay between program normative homogeneity and peer interaction as an important factor in value socialization during economics education and have important practical implications for business school leaders.

    A Generative and Causal Pharmacokinetic Model for Factor VIII in Hemophilia A: A Machine Learning Framework for Continuous Model Refinement

    In rare diseases, such as hemophilia A, the development of accurate population pharmacokinetic (PK) models is often hindered by the limited availability of data. Most PK models are specific to a single recombinant factor VIII (rFVIII) concentrate or measurement assay, and are generally unsuited for answering counterfactual (“what-if”) queries. Ideally, data from multiple hemophilia treatment centers would be combined, but this is generally difficult as patient data are kept private. In this work, we utilize causal inference techniques to produce a hybrid machine learning (ML) PK model that corrects for differences between rFVIII concentrates and measurement assays. Next, we augment this model with a generative model that can simulate realistic virtual patients as well as impute missing data. This model can be shared instead of actual patient data, resolving privacy issues. The hybrid ML-PK model was trained on chromogenic assay data of lonoctocog alfa, and predictive performance was then evaluated on an external data set of patients who received octocog alfa with FVIII levels measured using the one-stage assay. The model showed higher accuracy than three previous PK models developed on data similar to the external data set (root mean squared error = 14.6 IU/dL vs. a mean of 17.7 IU/dL). Finally, we show that the generative model can be used to accurately impute missing data (<18% error). In conclusion, the proposed approach introduces interesting new possibilities for model development. In the context of rare disease, the introduction of generative models facilitates sharing of synthetic data, enabling the iterative improvement of population PK models.
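    Population PK models of the kind described above build on compartmental kinetics. A minimal one-compartment IV-bolus sketch of an FVIII concentration curve; the clearance (CL) and volume (V) values are illustrative, not estimates from the paper's model:

```python
import math

# Hedged one-compartment IV-bolus sketch of FVIII kinetics; CL (dl/h/kg) and
# V (dl/kg) are illustrative only, giving a half-life of about 9 h.
def fviii_level(dose_iu_per_kg, cl, v, t_hours):
    """C(t) = (dose / V) * exp(-(CL / V) * t); in IU/dl when V is in dl/kg."""
    return (dose_iu_per_kg / v) * math.exp(-(cl / v) * t_hours)

c0  = fviii_level(dose_iu_per_kg=25, cl=0.03, v=0.4, t_hours=0)    # peak level
c24 = fviii_level(dose_iu_per_kg=25, cl=0.03, v=0.4, t_hours=24)   # after 24 h
print(f"{c0:.1f} IU/dl at t=0, {c24:.1f} IU/dl at 24 h")
```

    A hybrid ML-PK model as described in the abstract would replace fixed CL and V with patient-specific estimates corrected for concentrate and assay differences.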

    Prediction of heart failure 1 year before diagnosis in general practitioner patients using machine learning algorithms: a retrospective case-control study

    Objectives Heart failure (HF) is a commonly occurring health problem with high mortality and morbidity. If potential cases could be detected earlier, it may be possible to intervene earlier, which may slow progression in some patients. Ideally, data that are already routinely recorded, such as general practitioner (GP) data, would be reused to screen everyone in an age group. Furthermore, it is essential to evaluate, using true incidence rates, the number of people that need to be screened to find one patient, as this indicates generalisability to the true population. Therefore, we aim to create a machine learning model for the prediction of HF using GP data and to evaluate the number needed to screen under true incidence rates. Design, settings and participants GP data from 8543 patients (-2 to -1 year before diagnosis) and controls aged 70+ years were obtained retrospectively from 01 January 2012 to 31 December 2019 from the Nivel Primary Care Database. Codes for chronic illnesses, complaints, diagnostics and medication were obtained. Data were split into a training and a test set. Datasets describing demographics, the presence of codes (non-sequential) and sequences of consecutive codes (sequential) were created. Logistic regression, random forest and XGBoost models were trained. The predicted outcome was the presence of HF after 1 year. The case:control ratio in the test set matched true incidence rates (1:45). Results Demographics alone performed moderately (area under the curve (AUC) 0.692, CI 0.677 to 0.706). Adding non-sequential information combined with a logistic regression model performed best and significantly improved performance (AUC 0.772, CI 0.759 to 0.785, p<0.001). Further adding sequential information did not alter performance significantly (AUC 0.767, CI 0.754 to 0.780, p=0.07). The number needed to screen dropped from 14.11 to 5.99 false positives per true positive. Conclusion This study created a model able to identify patients with pending HF a year before diagnosis.
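    The 'false positives per true positive' figure above follows from sensitivity, the false-positive rate and the 1:45 case:control ratio of the test set. A minimal sketch; the 70%/10% rates below are illustrative, not the study's:

```python
# Hedged sketch of the screening arithmetic: false positives per true positive
# given sensitivity, false-positive rate and a 1:45 case:control ratio.
def false_pos_per_true_pos(sensitivity, fpr, cases=1, controls=45):
    true_pos  = sensitivity * cases     # cases correctly flagged
    false_pos = fpr * controls          # controls incorrectly flagged
    return false_pos / true_pos

# e.g. a model flagging 70% of cases while mis-flagging 10% of controls:
print(round(false_pos_per_true_pos(0.70, 0.10), 2))  # → 6.43
```

    Matching the test-set ratio to the true incidence is what makes this figure meaningful for deployment, since a case-enriched test set would understate the false positives per detected patient.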