
    Mechanisms and management of loss of response to anti-TNF therapy for patients with Crohn's disease: 3-year data from the prospective, multicentre PANTS cohort study

    Background: We sought to report the effectiveness of infliximab and adalimumab over the first 3 years of treatment and to define the factors that predict anti-TNF treatment failure and the strategies that prevent or mitigate loss of response.
    Methods: Personalised Anti-TNF therapy in Crohn's disease (PANTS) is a UK-wide, multicentre, prospective observational cohort study reporting the rates of effectiveness of infliximab and adalimumab in anti-TNF-naive patients with active luminal Crohn's disease aged 6 years and older. At the end of the first year, sites were invited to enrol participants still receiving study drug into the 2-year PANTS-extension study. We estimated rates of remission across the whole cohort at the end of years 1, 2, and 3 of the study using a modified survival technique with permutation testing. Multivariable regression and survival analyses were used to identify factors associated with loss of response in patients who had initially responded to anti-TNF therapy and with immunogenicity. Loss of response was defined in patients who initially responded to anti-TNF therapy at the end of induction and who subsequently developed symptomatic activity that warranted an escalation of steroid, immunomodulatory, or anti-TNF therapy, resectional surgery, or exit from study due to treatment failure. This study was registered with ClinicalTrials.gov, NCT03088449, and is now complete.
    Findings: Between March 19, 2014, and Sept 21, 2017, 389 (41%) of 955 patients treated with infliximab and 209 (32%) of 655 treated with adalimumab in the PANTS study entered the PANTS-extension study (median age 32·5 years [IQR 22·1–46·8]; 307 [51%] of 598 were female and 291 [49%] were male). The estimated proportions of patients in remission at the end of years 1, 2, and 3 were, for infliximab, 40·2% (95% CI 36·7–43·7), 34·4% (29·9–39·0), and 34·7% (29·8–39·5), and for adalimumab, 35·9% (95% CI 31·2–40·5), 32·9% (26·8–39·2), and 28·9% (21·9–36·3), respectively. Optimal drug concentrations at week 14 to predict remission at any later timepoint were 6·1–10·0 mg/L for infliximab and 10·1–12·0 mg/L for adalimumab. After excluding patients who had primary non-response, the estimated proportions of patients who had loss of response by years 1, 2, and 3 were, for infliximab, 34·4% (95% CI 30·4–38·2), 54·5% (49·4–59·0), and 60·0% (54·1–65·2), and for adalimumab, 32·1% (26·7–37·1), 47·2% (40·2–53·4), and 68·4% (50·9–79·7), respectively. In multivariable analysis, loss of response at years 2 and 3 for patients treated with infliximab and adalimumab was predicted by low anti-TNF drug concentrations at week 14 (infliximab: hazard ratio [HR] for each ten-fold increase in drug concentration 0·45 [95% CI 0·30–0·67]; adalimumab: 0·39 [0·22–0·70]). For patients treated with infliximab, loss of response was also associated with female sex (vs male sex; HR 1·47 [95% CI 1·11–1·95]), obesity (vs not obese; 1·62 [1·08–2·42]), baseline white cell count (1·06 [1·02–1·11] per 1 × 10⁹ cells per L increase), and thiopurine dose quartile. Among patients treated with adalimumab, carriage of the HLA-DQA1*05 risk variant was associated with loss of response (HR 1·95 [95% CI 1·17–3·25]). By the end of year 3, the estimated proportion of patients who developed anti-drug antibodies associated with undetectable drug concentrations was 44·0% (95% CI 38·1–49·4) among patients treated with infliximab and 20·3% (13·8–26·2) among those treated with adalimumab. The development of anti-drug antibodies associated with undetectable drug concentrations was significantly associated with treatment without a concomitant immunomodulator for both groups (HR for immunomodulator use: infliximab 0·40 [95% CI 0·31–0·52]; adalimumab 0·42 [95% CI 0·24–0·75]), and with carriage of the HLA-DQA1*05 risk variant for infliximab (HR for carriage of risk variant: 1·46 [1·13–1·88]) but not for adalimumab (HR 1·60 [0·92–2·77]). Concomitant use of an immunomodulator before or on the day of starting infliximab was associated with a longer time without the development of anti-drug antibodies associated with undetectable drug concentrations compared with use of infliximab alone (HR 2·87 [95% CI 2·20–3·74]) or introduction of an immunomodulator after anti-TNF initiation (1·70 [1·11–2·59]). In years 2 and 3, 16 (4%) of 389 patients treated with infliximab and 11 (5%) of 209 treated with adalimumab had adverse events leading to treatment withdrawal. Nine (2%) patients treated with infliximab and two (1%) treated with adalimumab had serious infections in years 2 and 3.
    Interpretation: Only around a third of patients with active luminal Crohn's disease treated with an anti-TNF drug were in remission at the end of 3 years of treatment. Low drug concentrations at the end of the induction period predict loss of response by year 3 of treatment, suggesting that higher drug concentrations during the first year of treatment, particularly during induction, might lead to better long-term outcomes. Anti-drug antibodies associated with undetectable drug concentrations of infliximab, but not adalimumab, can be predicted by carriage of HLA-DQA1*05 and mitigated by concomitant immunomodulator use for both drugs.
    Funding: Guts UK, Crohn's and Colitis UK, Cure Crohn's Colitis, AbbVie, Merck Sharp and Dohme, Napp Pharmaceuticals, Pfizer, and Celltrion Healthcare.
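
    The abstract above reports hazard ratios from multivariable survival analyses but does not include analysis code. As a rough, non-authoritative illustration of that kind of model, the sketch below fits a Cox proportional hazards model for time to loss of response using the lifelines library in Python. The data frame, column names, and follow-up times are synthetic placeholders, not the PANTS dataset, and Cox regression is an assumption about the survival model behind the reported hazard ratios.

```python
# Illustrative sketch only: a multivariable Cox model of time to loss of
# response, of the kind summarised above. All data and column names are
# synthetic placeholders, not the study dataset.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    # log10 of the week-14 drug concentration, so exp(coef) is the HR per
    # ten-fold increase in concentration (as reported in the abstract)
    "log10_week14_drug_conc": np.log10(rng.lognormal(mean=1.5, sigma=0.5, size=n)),
    "female_sex": rng.integers(0, 2, size=n),
    "obese": rng.integers(0, 2, size=n),
    "baseline_wcc": rng.normal(8, 2, size=n),             # 10^9 cells per L
    "time_to_event_weeks": rng.exponential(80, size=n),   # follow-up time
    "loss_of_response": rng.integers(0, 2, size=n),        # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event_weeks", event_col="loss_of_response")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs
```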

    Development and validation of machine learning models to predict gastrointestinal leak and venous thromboembolism after weight loss surgery: an analysis of the MBSAQIP database.

    Background: Postoperative gastrointestinal leak and venous thromboembolism (VTE) are devastating complications of bariatric surgery. The performance of currently available predictive models for these complications remains wanting, while machine learning has shown promise to improve on traditional modeling approaches. The purpose of this study was to compare the ability of two machine learning strategies, artificial neural networks (ANNs) and gradient boosting machines (XGBs), with that of conventional logistic regression (LR) models in predicting leak and VTE after bariatric surgery.
    Methods: ANN, XGB, and LR prediction models for leak and VTE among adults undergoing initial elective weight loss surgery were trained and validated using preoperative data from 2015 to 2017 from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) database. Data were randomly split into training, validation, and testing populations. Model performance was measured by the area under the receiver operating characteristic curve (AUC) on the testing data for each model.
    Results: The study cohort contained 436,807 patients. The incidences of leak and VTE were 0.70% and 0.46%, respectively. ANN (AUC 0.75, 95% CI 0.73-0.78) was the best-performing model for predicting leak, followed by XGB (AUC 0.70, 95% CI 0.68-0.72) and then LR (AUC 0.63, 95% CI 0.61-0.65; p < 0.001 for all comparisons). In detecting VTE, ANN, XGB, and LR achieved similar AUCs of 0.65 (95% CI 0.63-0.68), 0.67 (95% CI 0.64-0.70), and 0.64 (95% CI 0.61-0.66), respectively; the performance difference between XGB and LR was statistically significant (p = 0.001).
    Conclusions: ANN and XGB outperformed traditional LR in predicting leak. These results suggest that machine learning has the potential to improve risk stratification for bariatric surgery, especially as techniques to extract more granular data from medical records improve. Further studies investigating the merits of machine learning to improve patient selection and risk management in bariatric surgery are warranted.
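
    For readers unfamiliar with the modeling setup described above, the following is a minimal sketch of the comparison pattern (train models on a split of the data, compare them by AUC on a held-out test set). It uses scikit-learn and xgboost on synthetic, rare-outcome data standing in for the MBSAQIP preoperative features and leak labels, and it omits the ANN for brevity; nothing here reproduces the study's actual models or results.

```python
# Illustrative sketch of a gradient boosting vs. logistic regression comparison
# evaluated by AUC on a held-out test set. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Rare positive class (~0.7%), roughly mirroring the reported leak incidence.
X, y = make_classification(n_samples=50_000, n_features=40,
                           weights=[0.993], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "xgboost": XGBClassifier(n_estimators=300, max_depth=4),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```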

    Gender-based time discrepancy in diagnosis of coronary artery disease based on data analytics of electronic medical records.

    Background: Women continue to have worse coronary artery disease (CAD) outcomes than men. The causes of this discrepancy have yet to be fully elucidated. The main objective of this study is to detect gender discrepancies in the diagnosis and treatment of CAD.
    Methods: We used data analytics to risk stratify ~32,000 patients with CAD out of a total of 960,129 patients treated at the UCSF Medical Center over an 8-year period. We implemented a multidimensional data analytics framework to trace patients from admission through treatment to create a path of events. Events are any medications or noninvasive and invasive procedures. The time between events for a similar set of paths was calculated. Then, the average waiting time for each step of the treatment was calculated. Finally, we applied statistical analysis to determine differences in time between diagnosis and treatment steps for men and women.
    Results: There is a significant time difference from the first time of admission to diagnostic cardiac catheterization between genders (p-value = 0.000119), while the time difference from diagnostic cardiac catheterization to CABG is not statistically significant.
    Conclusion: Women had a significantly longer interval between their first physician encounter indicative of CAD and their first diagnostic cardiac catheterization compared with men. Avoiding this delay in diagnosis may provide more timely treatment and a better outcome for patients at risk. Finally, we conclude by discussing the impact of the study on improving patient care with early detection and managing individual patients at risk of rapid progression of CAD.
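
    The abstract above describes computing per-patient intervals between events and testing for a gender difference, without giving implementation details. The sketch below is a hypothetical, minimal version of that step in Python/pandas: it takes a toy event log, derives each patient's time from first admission to first diagnostic catheterization, and applies a Mann-Whitney U test as a stand-in for the study's unspecified statistical analysis. The event table and column names are invented, not the UCSF records.

```python
# Toy event log: one admission and one diagnostic catheterization per patient.
import pandas as pd
from scipy.stats import mannwhitneyu

events = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "gender":     ["F", "F", "M", "M", "F", "F", "M", "M"],
    "event":      ["admission", "cath"] * 4,
    "timestamp": pd.to_datetime([
        "2015-01-01", "2015-02-15", "2015-03-01", "2015-03-20",
        "2015-05-01", "2015-07-10", "2015-06-01", "2015-06-25"]),
})

# First occurrence of each event type per patient, then the interval between them.
first = (events.pivot_table(index=["patient_id", "gender"], columns="event",
                            values="timestamp", aggfunc="min")
               .reset_index())
first["days_to_cath"] = (first["cath"] - first["admission"]).dt.days

women = first.loc[first["gender"] == "F", "days_to_cath"]
men = first.loc[first["gender"] == "M", "days_to_cath"]
stat, p = mannwhitneyu(women, men, alternative="two-sided")
print(f"median days to cath: women {women.median()}, men {men.median()}, p = {p:.3g}")
```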

    Postoperative delirium prediction using machine learning models and preoperative electronic health record data.

    Background: Accurate, pragmatic risk stratification for postoperative delirium (POD) is necessary to target preventative resources toward high-risk patients. Machine learning (ML) offers a novel approach to leveraging electronic health record (EHR) data for POD prediction. We sought to develop and internally validate an ML-derived POD risk prediction model using preoperative risk features, and to compare its performance to models developed with traditional logistic regression.
    Methods: This was a retrospective analysis of preoperative EHR data from 24,885 adults undergoing a procedure requiring anesthesia care, recovering in the main post-anesthesia care unit, and staying in the hospital at least overnight between December 2016 and December 2019 at either of two hospitals in a tertiary care health system. One hundred fifteen preoperative risk features including demographics, comorbidities, nursing assessments, surgery type, and other preoperative EHR data were used to predict POD, defined as any instance of Nursing Delirium Screening Scale ≥2 or positive Confusion Assessment Method for the Intensive Care Unit within the first 7 postoperative days. Two ML models (neural network and XGBoost), two traditional logistic regression models ("clinician-guided" and "ML hybrid"), and a previously described delirium risk stratification tool (AWOL-S) were evaluated using the area under the receiver operating characteristic curve (AUC-ROC), sensitivity, specificity, positive likelihood ratio, and positive predictive value. Model calibration was assessed with a calibration curve. Patients with no POD assessments charted or with at least 20% of input variables missing were excluded.
    Results: POD incidence was 5.3%. The AUC-ROC was 0.841 [95% CI 0.816-0.863] for the neural network and 0.851 [95% CI 0.827-0.874] for XGBoost, which was significantly better than the clinician-guided (AUC-ROC 0.763 [0.734-0.793], p < 0.001) and ML hybrid (AUC-ROC 0.824 [0.800-0.849], p < 0.001) regression models and AWOL-S (AUC-ROC 0.762 [95% CI 0.713-0.812], p < 0.001). The neural network, XGBoost, and ML hybrid models demonstrated excellent calibration, while calibration of the clinician-guided and AWOL-S models was moderate; they tended to overestimate delirium risk in those already at highest risk.
    Conclusion: Using pragmatically collected EHR data, two ML models predicted POD in a broad perioperative population with high discrimination. Optimal application of the models would provide automated, real-time delirium risk stratification to improve perioperative management of surgical patients at risk for POD.
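
    As an illustration of the evaluation pattern described above (discrimination by AUC-ROC plus a calibration curve), the sketch below scores a single gradient-boosted classifier on synthetic data with a POD-like 5% event rate. The classifier, features, and labels are placeholders; the study's neural network, XGBoost, and regression models and its EHR features are not reproduced here.

```python
# Illustrative sketch: discrimination (AUC-ROC) plus a calibration curve for a
# gradient-boosted classifier on synthetic, POD-like data.
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# ~5% positive class, roughly mirroring the reported POD incidence.
X, y = make_classification(n_samples=20_000, n_features=115,
                           weights=[0.947], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
prob = clf.predict_proba(X_test)[:, 1]

print("AUC-ROC:", round(roc_auc_score(y_test, prob), 3))
# Calibration: observed event fraction vs. mean predicted risk in each bin.
frac_pos, mean_pred = calibration_curve(y_test, prob, n_bins=10)
for observed, predicted in zip(frac_pos, mean_pred):
    print(f"predicted {predicted:.2f} -> observed {observed:.2f}")
```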

    Machine Learning Prediction of Liver Allograft Utilization From Deceased Organ Donors Using the National Donor Management Goals Registry.

    Early prediction of whether a liver allograft will be utilized for transplantation may allow better resource deployment during donor management and improve organ allocation. The national Donor Management Goals (DMG) registry contains critical care data collected during donor management. We developed a machine learning model to predict transplantation of a liver graft based on data from the DMG registry.
    Methods: Several machine learning classifiers were trained to predict transplantation of a liver graft. We utilized 127 variables available in the DMG dataset. We included data from potential deceased organ donors between April 2012 and January 2019. The outcome was defined as liver recovery for transplantation in the operating room. The prediction was made based on data available 12-18 h after the time of authorization for transplantation. The data were randomly separated into training (60%), validation (20%), and test (20%) sets. We compared the performance of our models to the Liver Discard Risk Index.
    Results: Of 13,629 donors in the dataset, 9255 (68%) livers were recovered and transplanted, 1519 were recovered but used for research or discarded, and 2855 were not recovered. The optimized gradient boosting machine classifier achieved an area under the receiver operating characteristic curve of 0.84 on the test set, outperforming all other classifiers.
    Conclusions: This model predicts successful liver recovery for transplantation in the operating room, using data available early during donor management. It performs favorably when compared to existing models. It may provide real-time decision support during organ donor management and transplant logistics.
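
    The abstract above describes a 60/20/20 train/validation/test split and a gradient boosting machine evaluated by AUC on the test set. The sketch below is a minimal, assumed version of that workflow on synthetic data with 127 stand-in features and a ~68% positive rate; the actual DMG variables, tuning procedure, and model are not available here.

```python
# Illustrative sketch: 60/20/20 split, pick the tree count that does best on the
# validation set, then report AUC on the held-out test set. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=13_629, n_features=127,
                           weights=[0.32], random_state=0)  # ~68% positives
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4,
                                                  stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5,
                                                stratify=y_tmp, random_state=0)

best_auc, best_model = 0.0, None
for n_estimators in (100, 300, 500):
    model = XGBClassifier(n_estimators=n_estimators, max_depth=4)
    model.fit(X_train, y_train)
    val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    if val_auc > best_auc:
        best_auc, best_model = val_auc, model

test_auc = roc_auc_score(y_test, best_model.predict_proba(X_test)[:, 1])
print("test AUC:", round(test_auc, 3))
```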

    Reduced-gravity Environment Hardware Demonstrations of a Prototype Miniaturized Flow Cytometer and Companion Microfluidic Mixing Technology

    Until recently, astronaut blood samples were collected in-flight, transported to Earth on the Space Shuttle, and analyzed in terrestrial laboratories. If humans are to travel beyond low Earth orbit, a transition towards space-ready, point-of-care (POC) testing is required. Such testing needs to be comprehensive, easy to perform in a reduced-gravity environment, and unaffected by the stresses of launch and spaceflight. Countless POC devices have been developed to mimic laboratory-scale counterparts, but most have narrow applications and few have demonstrable use in an in-flight, reduced-gravity environment. In fact, demonstrations of biomedical diagnostics in reduced gravity are limited altogether, making component choice and certain logistical challenges difficult to approach when seeking to test new technology. To help fill the void, we present a modular method for the construction and operation of a prototype blood diagnostic device and its associated parabolic flight test rig that meet the standards for flight testing onboard a parabolic-flight, reduced-gravity aircraft. The method first focuses on rig assembly for in-flight, reduced-gravity testing of a flow cytometer and a companion microfluidic mixing chip. Components are adaptable to other designs, and some custom components, such as the microvolume sample loader and the micromixer, may be of particular interest. The method then shifts focus to flight preparation, offering guidelines and suggestions for a successful flight test with regard to user training, development of a standard operating procedure (SOP), and other issues. Finally, in-flight experimental procedures specific to our demonstrations are described.

    Human-specific NOTCH2NL genes affect Notch signaling and cortical neurogenesis

    Genetic changes causing brain size expansion in human evolution have remained elusive. Notch signaling is essential for radial glia stem cell proliferation and is a determinant of neuronal number in the mammalian cortex. We find that three paralogs of human-specific NOTCH2NL are highly expressed in radial glia. Functional analysis reveals that different alleles of NOTCH2NL have varying potencies to enhance Notch signaling by interacting directly with NOTCH receptors. Consistent with a role in Notch signaling, ectopic expression of NOTCH2NL delays differentiation of neuronal progenitors, while deletion accelerates differentiation into cortical neurons. Furthermore, NOTCH2NL genes provide the breakpoints in the 1q21.1 distal deletion/duplication syndrome, where duplications are associated with macrocephaly and autism and deletions with microcephaly and schizophrenia. Thus, the emergence of human-specific NOTCH2NL genes may have contributed to the rapid evolution of the larger human neocortex, accompanied by loss of genomic stability at the 1q21.1 locus and resulting recurrent neurodevelopmental disorders.