
    Personalized hypertension treatment recommendations by a data-driven model

    BACKGROUND: Hypertension is a prevalent cardiovascular disease with severe longer-term implications. Conventional management based on clinical guidelines does not facilitate personalized treatment that accounts for a richer set of patient characteristics. METHODS: Records from 1/1/2012 to 1/1/2020 at the Boston Medical Center were used, selecting patients with either a hypertension diagnosis or meeting diagnostic criteria (≥ 130 mmHg systolic or ≥ 90 mmHg diastolic, n = 42,752). Models were developed to recommend a class of antihypertensive medications for each patient based on their characteristics. A regression immunized against outliers was combined with a nearest-neighbor approach to associate with each patient an affinity group of similar patients. This group was then used to predict future Systolic Blood Pressure (SBP) under each prescription type. For each patient, we leveraged these predictions to select the class of medication that minimized their future predicted SBP. RESULTS: The proposed model, built with a distributionally robust learning procedure, leads to a reduction of 14.28 mmHg in SBP on average. This reduction is 70.30% larger than the reduction achieved by the standard of care and 7.08% better than the corresponding reduction achieved by the second-best model, which uses ordinary least squares regression. All derived models outperform both continuing the previous prescription and the ground-truth prescription in the record. We randomly sampled and manually reviewed 350 patient records; 87.71% of the model-generated prescription recommendations passed a sanity check by clinicians. CONCLUSION: Our data-driven approach for personalized hypertension treatment yielded significant improvement compared to the standard of care. The model also suggests potential benefits of computational deprescribing and can support situations with clinical equipoise.
    Funding: GM135930 - National Institute of General Medical Sciences; UL54 TR004130 - National Center for Advancing Translational Sciences; IIS-1914792 - National Science Foundation; DMS-1664644 - National Science Foundation; CCF-2200052 - National Science Foundation. Published version.
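
    To make the recommendation step concrete, the following is a minimal sketch of the affinity-group idea described above: find a patient's nearest neighbors, fit an outlier-robust regression of future SBP within each prescription class observed in that group, and recommend the class with the lowest predicted SBP. The synthetic data, variable names, and the choice of HuberRegressor are illustrative assumptions, not the paper's exact pipeline.

    import numpy as np
    from sklearn.linear_model import HuberRegressor
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    n, d = 500, 6
    X = rng.normal(size=(n, d))                      # patient characteristics
    rx = rng.integers(0, 3, size=n)                  # medication class actually prescribed
    future_sbp = (130 + X @ rng.normal(size=d)
                  - 2.0 * rx + rng.normal(scale=5, size=n))  # synthetic future SBP

    def recommend(x_new, k=100, min_group=20):
        """Recommend the medication class with the lowest predicted future SBP."""
        nn = NearestNeighbors(n_neighbors=k).fit(X)
        _, idx = nn.kneighbors(x_new.reshape(1, -1))
        group = idx[0]                               # affinity group of similar patients
        best_class, best_pred = None, np.inf
        for c in np.unique(rx[group]):
            members = group[rx[group] == c]
            if len(members) < min_group:             # skip sparsely observed classes
                continue
            model = HuberRegressor().fit(X[members], future_sbp[members])
            pred = model.predict(x_new.reshape(1, -1))[0]
            if pred < best_pred:
                best_class, best_pred = c, pred
        return best_class, best_pred

    print(recommend(rng.normal(size=d)))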

    Estimating Trustworthy and Safe Optimal Treatment Regimes

    Recent statistical and reinforcement learning methods have significantly advanced patient care strategies. However, these approaches face substantial challenges in high-stakes contexts, including missing data, inherent stochasticity, and the critical requirements of interpretability and patient safety. Our work operationalizes a safe and interpretable framework for identifying optimal treatment regimes. The approach matches patients with similar medical and pharmacological characteristics, allowing us to construct an optimal policy via interpolation. We perform a comprehensive simulation study to demonstrate the framework's ability to identify optimal policies even in complex settings. We then apply the approach to study regimes for treating seizures in critically ill patients. Our findings strongly support personalized treatment strategies based on a patient's medical history and pharmacological features. Notably, we find that reducing medication doses for patients with mild and brief seizure episodes, while treating patients in the intensive care unit who experience intense seizures aggressively, leads to more favorable outcomes.
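
    As a rough illustration of the matching-plus-interpolation idea, the sketch below matches a new patient to the k most similar observed patients and sets the recommended dose as a distance-weighted average of the doses received by the better-outcome matches. All data, names, and the weighting scheme are hypothetical assumptions, not the authors' exact framework.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(1)
    n, d = 400, 5
    X = rng.normal(size=(n, d))            # medical and pharmacological features
    dose = rng.uniform(0, 10, size=n)      # dose actually administered
    # Synthetic outcome: best when the dose is close to a patient-specific target.
    outcome = -np.abs(dose - (5 + X[:, 0])) + rng.normal(scale=0.5, size=n)

    def interpolated_policy(x_new, k=25):
        """Recommend a dose by interpolating over similar, well-treated patients."""
        nn = NearestNeighbors(n_neighbors=k).fit(X)
        dist, idx = nn.kneighbors(x_new.reshape(1, -1))
        dist, idx = dist[0], idx[0]
        good = outcome[idx] >= np.median(outcome[idx])   # keep better-outcome matches
        weights = 1.0 / (dist[good] + 1e-6)              # inverse-distance interpolation
        return float(np.average(dose[idx][good], weights=weights))

    print(interpolated_policy(rng.normal(size=d)))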

    A Survey of Multimodal Information Fusion for Smart Healthcare: Mapping the Journey from Data to Wisdom

    Multimodal medical data fusion has emerged as a transformative approach in smart healthcare, enabling a comprehensive understanding of patient health and personalized treatment plans. In this paper, the journey from data to information to knowledge to wisdom (DIKW) is explored through multimodal fusion for smart healthcare. We present a comprehensive review of multimodal medical data fusion focused on the integration of various data modalities. The review explores different approaches, such as feature selection, rule-based systems, machine learning, deep learning, and natural language processing, for fusing and analyzing multimodal data. This paper also highlights the challenges associated with multimodal fusion in healthcare. By synthesizing the reviewed frameworks and theories, it proposes a generic framework for multimodal medical data fusion that aligns with the DIKW model. Moreover, it discusses future directions related to the four pillars of healthcare: Predictive, Preventive, Personalized, and Participatory approaches. The components of the comprehensive survey presented in this paper form the foundation for a more successful implementation of multimodal fusion in smart healthcare. Our findings can guide researchers and practitioners in leveraging multimodal fusion with state-of-the-art approaches to revolutionize healthcare and improve patient outcomes.
    Comment: This work has been submitted to Elsevier for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
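
    One of the fusion approaches such surveys cover, feature-level (early) fusion, can be sketched as follows: standardize each modality separately, concatenate the features, and train a single classifier. The modalities, data, and model below are synthetic placeholders, not drawn from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    n = 600
    imaging = rng.normal(size=(n, 16))     # e.g. imaging-derived features
    labs = rng.normal(size=(n, 8))         # e.g. laboratory values
    notes = rng.normal(size=(n, 32))       # e.g. embeddings of clinical notes
    y = (imaging[:, 0] + labs[:, 0] + notes[:, 0] > 0).astype(int)

    # Early fusion: scale each modality separately, then concatenate the features.
    fused = np.hstack([StandardScaler().fit_transform(m) for m in (imaging, labs, notes)])

    X_tr, X_te, y_tr, y_te = train_test_split(fused, y, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("fused-model accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))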

    Evaluating Treatment Prioritization Rules via Rank-Weighted Average Treatment Effects

    There are a number of available methods for choosing whom to prioritize for treatment, including ones based on treatment effect estimation, risk scoring, and hand-crafted rules. We propose rank-weighted average treatment effect (RATE) metrics as a simple and general family of metrics for comparing treatment prioritization rules on a level playing field. RATEs are agnostic as to how the prioritization rules were derived and assess them only on how well they identify the units that benefit most from treatment. We define a family of RATE estimators and prove a central limit theorem that enables asymptotically exact inference in a wide variety of randomized and observational study settings. We provide justification for the use of bootstrapped confidence intervals and a framework for testing hypotheses about heterogeneity in treatment effectiveness correlated with the prioritization rule. Our definition of the RATE nests a number of existing metrics, including the Qini coefficient, and our analysis directly yields inference methods for these metrics. We demonstrate our approach in examples drawn from both personalized medicine and marketing. In the medical setting, using data from the SPRINT and ACCORD-BP randomized controlled trials, we find no significant evidence of heterogeneous treatment effects. In a large marketing trial, by contrast, we find robust evidence of heterogeneity in the treatment effects of some digital advertising campaigns and demonstrate how RATEs can be used to compare targeting rules that prioritize estimated risk versus those that prioritize estimated treatment benefit.
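
    The following is a simplified sketch of the intuition behind a RATE metric, not the paper's estimator or its inference procedure: in a randomized experiment, rank units by the prioritization score, compute the average treatment effect among the top-q fraction for a grid of q, subtract the overall average treatment effect, and average the excess over q (an AUTOC-style summary). The data and scoring rule are synthetic.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 5000
    x = rng.normal(size=n)                           # covariate driving heterogeneity
    w = rng.integers(0, 2, size=n)                   # randomized treatment assignment
    tau = 0.5 * (x > 0)                              # true heterogeneous effect
    y = x + tau * w + rng.normal(scale=1.0, size=n)  # observed outcome
    score = x                                        # prioritization rule under evaluation

    def ate(y, w):
        return y[w == 1].mean() - y[w == 0].mean()

    def rate_autoc(y, w, score, grid=np.linspace(0.05, 1.0, 20)):
        """Average excess ATE among the top-q prioritized units over a grid of q."""
        order = np.argsort(-score)                   # highest priority first
        overall = ate(y, w)
        excess = []
        for q in grid:
            top = order[: max(2, int(q * len(y)))]
            excess.append(ate(y[top], w[top]) - overall)
        return float(np.mean(excess))

    print("AUTOC-style RATE estimate:", round(rate_autoc(y, w, score), 3))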

    Application of Machine Learning in Healthcare and Medicine: A Review

    This extensive literature review investigates the integration of Machine Learning (ML) into the healthcare sector, uncovering its potential, challenges, and strategic resolutions. The main objective is to comprehensively explore how ML is incorporated into medical practice, demonstrate its impact, and identify relevant solutions. The research motivation stems from the need to understand the convergence of ML and healthcare services, given its intricate implications. Through meticulous analysis of existing research, the review elucidates the broad spectrum of ML applications in disease prediction and personalized treatment, dissecting methodologies, scrutinizing studies, and extrapolating critical insights. The article establishes that ML has succeeded in various aspects of medical care. In certain studies, ML algorithms, especially Convolutional Neural Networks (CNNs), have achieved high accuracy in diagnosing diseases such as lung cancer, colorectal cancer, brain tumors, and breast tumors. Apart from CNNs, other algorithms such as support vector machines (SVM), random forests (RF), k-nearest neighbors (k-NN), and decision trees (DT) have also proven effective. Evaluations based on accuracy and F1-score indicate satisfactory results, with some studies exceeding 90% accuracy. This principal finding underscores the impressive accuracy of ML algorithms in diagnosing diverse medical conditions and signifies the transformative potential of ML in reshaping conventional diagnostic techniques. The discussion revolves around challenges such as data quality, security risks, potential misinterpretations, and obstacles to integrating ML into clinical settings. To mitigate these, multifaceted solutions are proposed, encompassing standardized data formats, robust encryption, model interpretability, clinician training, and stakeholder collaboration.
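
    As a small illustration of the kind of evaluation the review reports, the sketch below trains two standard classifiers (a random forest and k-NN as stand-ins) on a public dataset and reports accuracy and F1-score. The dataset and model choices are illustrative, not taken from any of the reviewed studies.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    for name, model in [("RandomForest", RandomForestClassifier(random_state=0)),
                        ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, "
              f"F1={f1_score(y_te, pred):.3f}")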