
    Development and validation of machine learning models to predict gastrointestinal leak and venous thromboembolism after weight loss surgery: an analysis of the MBSAQIP database.

    Background: Postoperative gastrointestinal leak and venous thromboembolism (VTE) are devastating complications of bariatric surgery. The performance of currently available predictive models for these complications remains wanting, while machine learning has shown promise to improve on traditional modeling approaches. The purpose of this study was to compare the ability of two machine learning strategies, artificial neural networks (ANNs) and gradient boosting machines (XGBs), against conventional models using logistic regression (LR) in predicting leak and VTE after bariatric surgery. Methods: ANN, XGB, and LR prediction models for leak and VTE among adults undergoing initial elective weight loss surgery were trained and validated using preoperative data from 2015 to 2017 from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) database. Data were randomly split into training, validation, and testing populations. Model performance was measured by the area under the receiver operating characteristic curve (AUC) on the testing data for each model. Results: The study cohort contained 436,807 patients. The incidences of leak and VTE were 0.70% and 0.46%, respectively. ANN (AUC 0.75, 95% CI 0.73-0.78) was the best-performing model for predicting leak, followed by XGB (AUC 0.70, 95% CI 0.68-0.72) and then LR (AUC 0.63, 95% CI 0.61-0.65; p < 0.001 for all comparisons). In detecting VTE, ANN, XGB, and LR achieved similar AUCs of 0.65 (95% CI 0.63-0.68), 0.67 (95% CI 0.64-0.70), and 0.64 (95% CI 0.61-0.66), respectively; the performance difference between XGB and LR was statistically significant (p = 0.001). Conclusions: ANN and XGB outperformed traditional LR in predicting leak. These results suggest that machine learning has the potential to improve risk stratification for bariatric surgery, especially as techniques to extract more granular data from medical records improve. Further studies investigating the merits of machine learning to improve patient selection and risk management in bariatric surgery are warranted.
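The evaluation workflow described in this abstract (random split into training, validation, and testing populations, then AUC comparison across model families) can be sketched as follows. This is an illustrative sketch only, not the study's code: the data are synthetic stand-ins for MBSAQIP features, and scikit-learn's GradientBoostingClassifier stands in for XGBoost to keep the example self-contained.

```python
# Sketch of comparing LR and gradient boosting by test-set AUC on a rare outcome.
# Synthetic data with ~1% event rate approximates the leak/VTE incidence; this
# is an assumption for illustration, not the study's actual pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.99],
                           random_state=0)  # ~1% positive class

# 60/20/20 split into training, validation, and testing populations
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, random_state=0, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=0, stratify=y_tmp)

aucs = {}
for name, model in {"LR": LogisticRegression(max_iter=1000),
                    "GBM": GradientBoostingClassifier(random_state=0)}.items():
    model.fit(X_train, y_train)
    aucs[name] = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(aucs)
```

In practice the validation split would drive hyperparameter tuning before the single final read-out on the held-out test set, which is what keeps the reported AUCs honest.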

    Gender-based time discrepancy in diagnosis of coronary artery disease based on data analytics of electronic medical records.

    Background: Women continue to have worse coronary artery disease (CAD) outcomes than men. The causes of this discrepancy have yet to be fully elucidated. The main objective of this study is to detect gender discrepancies in the diagnosis and treatment of CAD. Methods: We used data analytics to risk stratify ~32,000 patients with CAD out of the 960,129 total patients treated at the UCSF Medical Center over an 8-year period. We implemented a multidimensional data analytics framework to trace patients from admission through treatment, creating a path of events; events are any medications or noninvasive and invasive procedures. The time between events for a similar set of paths was calculated, and the average waiting time for each step of the treatment was computed. Finally, we applied statistical analysis to determine differences in time between diagnosis and treatment steps for men and women. Results: There is a significant time difference from first admission to diagnostic cardiac catheterization between genders (p = 0.000119), while the time difference from diagnostic cardiac catheterization to CABG is not statistically significant. Conclusion: Women had a significantly longer interval between their first physician encounter indicative of CAD and their first diagnostic cardiac catheterization compared to men. Avoiding this delay in diagnosis may provide more timely treatment and a better outcome for patients at risk. Finally, we discuss the impact of the study on improving patient care through early detection and the management of individual patients at risk of rapid progression of CAD.
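The core comparison here, waiting times between two event steps compared across groups, can be sketched with a nonparametric test. The abstract does not name the specific test used, so the choice of Mann-Whitney U below, and all of the data, are assumptions for illustration only.

```python
# Illustrative sketch: compare event-to-event waiting times between two groups.
# Exponentially distributed waits are a common toy model for time-to-event data;
# the scales (30 vs 45 days) are invented, not the study's findings.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# Hypothetical days from first CAD-indicative encounter to diagnostic catheterization
days_men = rng.exponential(scale=30, size=200)
days_women = rng.exponential(scale=45, size=200)

stat, p = mannwhitneyu(days_men, days_women, alternative="two-sided")
print(f"median men={np.median(days_men):.1f}d, "
      f"women={np.median(days_women):.1f}d, p={p:.4g}")
```

A rank-based test is a reasonable default for waiting-time data because such intervals are typically right-skewed, which violates the normality assumption of a t-test.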

    Postoperative delirium prediction using machine learning models and preoperative electronic health record data.

    Background: Accurate, pragmatic risk stratification for postoperative delirium (POD) is necessary to target preventative resources toward high-risk patients. Machine learning (ML) offers a novel approach to leveraging electronic health record (EHR) data for POD prediction. We sought to develop and internally validate an ML-derived POD risk prediction model using preoperative risk features, and to compare its performance to models developed with traditional logistic regression. Methods: This was a retrospective analysis of preoperative EHR data from 24,885 adults undergoing a procedure requiring anesthesia care, recovering in the main post-anesthesia care unit, and staying in the hospital at least overnight between December 2016 and December 2019 at either of two hospitals in a tertiary care health system. One hundred fifteen preoperative risk features, including demographics, comorbidities, nursing assessments, surgery type, and other preoperative EHR data, were used to predict POD, defined as any instance of Nursing Delirium Screening Scale ≥2 or a positive Confusion Assessment Method for the Intensive Care Unit within the first 7 postoperative days. Two ML models (neural network and XGBoost), two traditional logistic regression models ("clinician-guided" and "ML hybrid"), and a previously described delirium risk stratification tool (AWOL-S) were evaluated using the area under the receiver operating characteristic curve (AUC-ROC), sensitivity, specificity, positive likelihood ratio, and positive predictive value. Model calibration was assessed with a calibration curve. Patients with no POD assessments charted or with at least 20% of input variables missing were excluded. Results: POD incidence was 5.3%. The AUC-ROC was 0.841 [95% CI 0.816-0.863] for the neural network and 0.851 [95% CI 0.827-0.874] for XGBoost, which was significantly better than the clinician-guided (AUC-ROC 0.763 [0.734-0.793], p < 0.001) and ML hybrid (AUC-ROC 0.824 [0.800-0.849], p < 0.001) regression models and AWOL-S (AUC-ROC 0.762 [95% CI 0.713-0.812], p < 0.001). The neural network, XGBoost, and ML hybrid models demonstrated excellent calibration, while calibration of the clinician-guided and AWOL-S models was moderate; the latter tended to overestimate delirium risk in those already at highest risk. Conclusion: Using pragmatically collected EHR data, two ML models predicted POD in a broad perioperative population with high discrimination. Optimal application of the models would provide automated, real-time delirium risk stratification to improve perioperative management of surgical patients at risk for POD.
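Beyond discrimination (AUC-ROC), this abstract assesses calibration with a calibration curve: binning predicted risks and comparing each bin's mean prediction to the observed event fraction. A minimal sketch of that check, using invented predictions at roughly the study's 5.3% POD incidence (an assumption for illustration, not the study's data):

```python
# Sketch of a reliability (calibration) curve: well-calibrated predictions have
# observed event fractions close to mean predicted risk in each bin.
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.053, size=5000)  # ~5.3% event rate, like POD incidence
# Toy predicted risks, clipped to valid probabilities (illustrative only)
y_prob = np.clip(0.053 + rng.normal(0, 0.03, 5000), 0.0, 1.0)

frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=5,
                                        strategy="quantile")
for fp, mp in zip(frac_pos, mean_pred):
    print(f"mean predicted {mp:.3f} -> observed {fp:.3f}")
```

The "overestimation at highest risk" the abstract describes would appear here as the top bins' mean predicted risk sitting well above the observed fraction.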

    Machine Learning Prediction of Liver Allograft Utilization From Deceased Organ Donors Using the National Donor Management Goals Registry.

    Early prediction of whether a liver allograft will be utilized for transplantation may allow better resource deployment during donor management and improve organ allocation. The national Donor Management Goals (DMG) registry contains critical care data collected during donor management. We developed a machine learning model to predict transplantation of a liver graft based on data from the DMG registry. Methods: Several machine learning classifiers were trained to predict transplantation of a liver graft using 127 variables available in the DMG dataset. We included data from potential deceased organ donors between April 2012 and January 2019. The outcome was defined as liver recovery for transplantation in the operating room. The prediction was made based on data available 12-18 h after the time of authorization for transplantation. The data were randomly separated into training (60%), validation (20%), and test (20%) sets. We compared the performance of our models to the Liver Discard Risk Index. Results: Of 13,629 donors in the dataset, 9,255 (68%) livers were recovered and transplanted, 1,519 were recovered but used for research or discarded, and 2,855 were not recovered. The optimized gradient boosting machine classifier achieved an area under the receiver operating characteristic curve of 0.84 on the test set, outperforming all other classifiers. Conclusions: This model predicts successful liver recovery for transplantation in the operating room using data available early during donor management. It performs favorably when compared to existing models and may provide real-time decision support during organ donor management and transplant logistics.
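The abstract's approach, a gradient boosting machine optimized on a 60/20/20 split, can be sketched by tuning one hyperparameter on the validation set and reporting AUC once on the test set. Everything below is a hedged illustration: synthetic features stand in for the 127 DMG variables, the class balance mimics the ~68% transplant rate, and the tuning grid is invented.

```python
# Sketch: select a gradient boosting model on the validation set, then report
# test-set AUC. Not the study's code; data and grid are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=30, weights=[0.32],
                           random_state=1)  # ~68% positive class
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4,
                                                    random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5,
                                                random_state=1)

best_auc, best_model = 0.0, None
for n in (50, 100, 200):  # tune number of boosting rounds on the validation set
    m = GradientBoostingClassifier(n_estimators=n, random_state=1)
    m.fit(X_train, y_train)
    auc = roc_auc_score(y_val, m.predict_proba(X_val)[:, 1])
    if auc > best_auc:
        best_auc, best_model = auc, m

test_auc = roc_auc_score(y_test, best_model.predict_proba(X_test)[:, 1])
print(f"test AUC = {test_auc:.3f}")
```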

    Reduced-gravity Environment Hardware Demonstrations of a Prototype Miniaturized Flow Cytometer and Companion Microfluidic Mixing Technology

    Until recently, astronaut blood samples were collected in-flight, transported to Earth on the Space Shuttle, and analyzed in terrestrial laboratories. If humans are to travel beyond low Earth orbit, a transition toward space-ready, point-of-care (POC) testing is required. Such testing needs to be comprehensive, easy to perform in a reduced-gravity environment, and unaffected by the stresses of launch and spaceflight. Countless POC devices have been developed to mimic laboratory-scale counterparts, but most have narrow applications and few have demonstrable use in an in-flight, reduced-gravity environment. In fact, demonstrations of biomedical diagnostics in reduced gravity are limited altogether, making component choice and certain logistical challenges difficult to approach when seeking to test new technology. To help fill the void, we present a modular method for the construction and operation of a prototype blood diagnostic device and its associated parabolic flight test rig that meet the standards for flight-testing onboard a parabolic-flight, reduced-gravity aircraft. The method first focuses on rig assembly for in-flight, reduced-gravity testing of a flow cytometer and a companion microfluidic mixing chip. Components are adaptable to other designs, and some custom components, such as the microvolume sample loader and the micromixer, may be of particular interest. The method then shifts focus to flight preparation, offering guidelines and suggestions for a successful flight test with regard to user training, development of a standard operating procedure (SOP), and other issues. Finally, in-flight experimental procedures specific to our demonstrations are described.

    Cephalometric study of alterations induced by maxillary slow expansion in adults

    Maxillary expansion is a procedure that aims to increase the maxillary dental arch to correct occlusal disharmony. Widely used in children, its efficacy in adults, when craniofacial growth has reached bone maturity, is controversial. AIM: The present study evaluates cephalometric modifications resulting from maxillary expansion in adult patients, observing the following linear measurements: facial width, nasal width, nasal height, maxillary width, mandibular width, and maxillary molar width. MATERIALS AND METHODS: The sample was composed of 24 frontal teleradiographs, taken before and immediately after the expansions, from 12 male and female patients aged between 18 years and two months and 37 years and eight months. All patients underwent slow expansion of the maxillary bones by means of an appliance used in the technique named "dynamic and functional maxillary rehabilitation". The paired Wilcoxon test for related samples was used with a 5% significance level. RESULTS: There was a mean increase of 1.92 mm in nasal width and 2.5 mm in nasal height. For maxillary and mandibular width, the mean increases were 2.42 mm and 1.92 mm, respectively. Mean increases of 1.41 mm in facial width and 2.0 mm in maxillary molar width were found. All alterations were statistically significant and were obtained over a mean treatment time of 5.3 months. CONCLUSION: Based on the results obtained, it may be concluded that maxillary expansion increases the facial measurements studied in adults.
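The paired Wilcoxon test used in this study compares before-and-after measurements on the same patients. A worked example with invented pre- and post-expansion widths (mm) for 12 hypothetical patients, matching the study's sample size but not its data:

```python
# Worked example of the paired Wilcoxon signed-rank test on pre/post measurements.
# All values below are invented for illustration.
import numpy as np
from scipy.stats import wilcoxon

pre = np.array([61.2, 59.8, 63.1, 60.5, 62.0, 58.9,
                61.7, 60.1, 62.4, 59.5, 61.0, 60.8])
post = pre + np.array([2.1, 1.8, 2.5, 1.2, 2.0, 1.6,
                       2.3, 1.9, 2.2, 1.5, 1.7, 2.4])

stat, p = wilcoxon(pre, post)  # paired test on the 12 per-patient differences
print(f"mean increase = {np.mean(post - pre):.2f} mm, p = {p:.4f}")
```

Because the test ranks the paired differences rather than assuming normality, it suits small samples like the 12 patients here; p < 0.05 would be declared significant at the study's 5% level.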