11 research outputs found
Improving Palliative Care with Deep Learning
Improving the quality of end-of-life care for hospitalized patients is a
priority for healthcare organizations. Studies have shown that physicians tend
to overestimate prognoses, which, in combination with treatment inertia, results
in a mismatch between patients' wishes and actual care at the end of life. We
describe a method to address this problem using Deep Learning and Electronic
Health Record (EHR) data, which is currently being piloted, with Institutional
Review Board approval, at an academic medical center. The EHR data of admitted
patients are automatically evaluated by an algorithm, which brings patients who
are likely to benefit from palliative care services to the attention of the
Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR
data from previous years to predict all-cause 3-12 month mortality of patients
as a proxy for identifying patients who could benefit from palliative care. Our
predictions enable the Palliative Care team to take a proactive approach in
reaching out to such patients, rather than relying on referrals from treating
physicians or conducting time-consuming chart reviews of all patients. We also
present a novel interpretation technique which we use to provide explanations
of the model's predictions.
Comment: IEEE International Conference on Bioinformatics and Biomedicine 201
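As a rough, hypothetical sketch of the approach described above (this is not the authors' model; the features, labels, and hyperparameters below are synthetic and purely illustrative), a feed-forward network over tabular EHR-style features can be trained to produce probabilistic mortality scores for ranking admitted patients:

```python
# Minimal sketch, assuming synthetic tabular EHR-style features.
# Not the published model: real systems use thousands of coded EHR features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-admission features (e.g., age, prior admissions, code counts)
X = rng.normal(size=(n, 3))
# Synthetic binary 3-12 month mortality label, loosely correlated with features
logits = 1.5 * X[:, 0] + 0.8 * X[:, 1] - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)

# Probabilistic scores let a palliative care team rank patients for outreach
scores = model.predict_proba(X_te)[:, 1]
```

In a deployment like the one described, the scores (rather than hard predictions) would be surfaced to the Palliative Care team for prioritized review.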
Gap-filling eddy covariance methane fluxes : Comparison of machine learning model predictions and uncertainties at FLUXNET-CH4 wetlands
Time series of wetland methane fluxes measured by eddy covariance require gap-filling to estimate daily, seasonal, and annual emissions. Gap-filling methane fluxes is challenging because of high variability and complex responses to multiple drivers. To date, there is no widely established gap-filling standard for wetland methane fluxes, with regard both to the best model algorithms and predictors. This study synthesizes results of different gap-filling methods systematically applied at 17 wetland sites spanning boreal to tropical regions and including all major wetland classes and two rice paddies. Procedures are proposed for: 1) creating realistic artificial gap scenarios, 2) training and evaluating gap-filling models without overstating performance, and 3) predicting half-hourly methane fluxes and annual emissions with realistic uncertainty estimates. Performance is compared between a conventional method (marginal distribution sampling) and four machine learning algorithms. The conventional method achieved median performance similar to that of the machine learning models but was worse than the best machine learning models and relatively insensitive to predictor choices. Of the machine learning models, decision tree algorithms performed best in cross-validation experiments, even with a baseline predictor set, and artificial neural networks showed comparable performance when using all predictors. Soil temperature was frequently the most important predictor, whilst water table depth was important at sites with substantial water table fluctuations, highlighting the value of data on wetland soil conditions. Raw gap-filling uncertainties from the machine learning models were underestimated, and we propose a method to calibrate uncertainties to observations. The Python code for model development, evaluation, and uncertainty estimation is publicly available.
This study outlines a modular and robust machine learning workflow and makes recommendations for, and evaluates an improved baseline of, methane gap-filling models that can be implemented in multi-site syntheses or standardized products from regional and global flux networks (e.g., FLUXNET).
Peer reviewed
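The core steps of such a workflow — creating artificial gaps, training a decision-tree model on environmental predictors, and filling the gaps — can be sketched as follows (this is a hedged illustration, not the study's released code; the series, predictor relationships, and gap pattern are all synthetic):

```python
# Illustrative sketch of gap-filling a synthetic half-hourly methane flux
# series with a random forest (a decision-tree ensemble, the model family
# that performed best in cross-validation per the abstract). Predictor names
# follow the abstract; the data-generating process is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 48 * 60  # 60 days of half-hourly records
t = np.arange(n)
soil_temp = 15 + 10 * np.sin(2 * np.pi * t / (48 * 365)) + rng.normal(0, 0.5, n)
water_table = rng.normal(0, 1, n).cumsum() * 0.01  # slow random-walk drift
flux = 0.5 * np.exp(0.1 * soil_temp) + 2 * water_table + rng.normal(0, 0.3, n)

# 1) Artificial gap scenario: knock out day-long blocks to mimic outages
observed = np.ones(n, dtype=bool)
for start in rng.choice(n - 48, size=20, replace=False):
    observed[start:start + 48] = False

# 2) Train on observed records only, using environmental predictors
X = np.column_stack([soil_temp, water_table])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[observed], flux[observed])

# 3) Fill gaps with model predictions; observed values are left untouched
filled = flux.copy()
filled[~observed] = model.predict(X[~observed])
```

The filled series can then be summed to daily or annual emissions; the calibration of the model's uncertainty estimates, which the study found necessary, is a separate step not shown here.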
Automated and flexible identification of complex disease: building a model for systemic lupus erythematosus using noisy labeling.
Accurate and efficient identification of complex chronic conditions in the electronic health record (EHR) is an important but challenging task that has historically relied on tedious clinician review and oversimplification of the disease. Here we adapt methods that allow for automated "noisy labeling" of positive and negative controls to create a "silver standard" for machine learning to automate identification of systemic lupus erythematosus (SLE). Our final model, which includes both structured data as well as text processing of clinical notes, outperformed all existing algorithms for SLE (AUC 0.97). In addition, we demonstrate how the probabilistic outputs of this model can be adapted to various clinical needs, selecting high thresholds when specificity is the priority and lower thresholds when a more inclusive patient population is desired. Deploying a similar methodology to other complex diseases has the potential to dramatically simplify the landscape of population identification in the EHR.
MeSH terms: Electronic Health Records, Machine Learning, Lupus Erythematosus, Phenotype, Algorithms
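The threshold-adaptation idea in the abstract — one probabilistic model serving both high-specificity and inclusive cohort definitions — can be sketched as follows (a hedged illustration with synthetic scores and labels, not the study's model or data):

```python
# Sketch: choosing operating thresholds on a phenotyping model's probability
# output. Scores and labels are synthetic; the score distributions (Beta) are
# an assumption chosen only to separate cases from non-cases.
import numpy as np

rng = np.random.default_rng(2)
y = rng.random(10000) < 0.1  # synthetic labels, ~10% phenotype prevalence
# Cases tend to score high, non-cases low
scores = np.where(y, rng.beta(5, 2, y.size), rng.beta(2, 5, y.size))

def cohort_stats(threshold):
    """Cohort size, sensitivity, and specificity at a given threshold."""
    flagged = scores >= threshold
    sensitivity = flagged[y].mean()        # fraction of true cases captured
    specificity = (~flagged[~y]).mean()    # fraction of non-cases excluded
    return flagged.sum(), sensitivity, specificity

# High threshold: small, high-specificity cohort (specificity is the priority)
n_hi, sens_hi, spec_hi = cohort_stats(0.9)
# Low threshold: larger, more inclusive cohort (sensitivity is the priority)
n_lo, sens_lo, spec_lo = cohort_stats(0.3)
```

Because thresholding a fixed score is monotone, raising the threshold always trades sensitivity for specificity, which is what lets a single model serve both use cases.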