
    Validation of a Visual Field Prediction Tool for Glaucoma: A Multicenter Study Involving Patients With Glaucoma in the United Kingdom

    PURPOSE: A previously developed machine-learning approach based on Kalman filtering (KF) technology accurately predicted the disease trajectory for patients with various glaucoma types and severities using clinical trial data. This study assesses the performance of the KF approach on real-world data. DESIGN: Retrospective cohort study. METHODS: We tested the performance of a previously validated KF model (PKF), initially trained on data from the African Descent and Glaucoma Evaluation Study and the Diagnostic Innovations in Glaucoma Study, in patients with different types and severities of glaucoma receiving care in the United Kingdom (UK), comparing its predictive accuracy to that of 2 conventional linear regression (LR) models and a newly developed KF trained on UK patients (UK-KF). RESULTS: A total of 3116 patients with open-angle glaucoma or glaucoma suspect status were divided into training (n=1584) and testing (n=1532) sets. The predictive accuracy for mean deviation (MD) within 2.5 dB of the observed value at 60 months' follow-up for PKF (75.7%) was substantially better than that of the LR models (P < .01 for both) and similar to that of UK-KF (75.2%, P = .70). The proportion of MD predictions within the 95% repeatability intervals at 60 months' follow-up for PKF (67.9%) was higher than those for the LR models (40.2%, 40.9%) and similar to that for UK-KF (71.4%). CONCLUSIONS: This study validates the performance of our previously developed KF model on a real-world, multicenter patient population. Our model substantially outperforms the current clinical standard (LR) and forecasts well for patients with different glaucoma types and severities. These findings support the generalizability of the PKF and motivate prospective study of its implementation in clinical practice.
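    The KF forecasting idea in this abstract can be sketched as a constant-velocity Kalman filter over MD: filter the visit history to estimate the current MD level and its slope, then roll the dynamics forward with no new measurements. Everything below (the two-dimensional state, the noise parameters, and the `kalman_predict_md` helper) is an illustrative assumption, not the paper's fitted model.

    ```python
    import numpy as np

    def kalman_predict_md(observations, horizon_steps=4,
                          process_var=0.05, obs_var=1.0):
        """Forecast mean deviation (MD, dB) several visits ahead.

        State x = [MD, MD slope per visit]; process_var and obs_var are
        hypothetical noise levels, not values fitted in the study.
        """
        F = np.array([[1.0, 1.0], [0.0, 1.0]])       # constant-velocity transition
        H = np.array([[1.0, 0.0]])                   # only MD is observed
        Q = process_var * np.eye(2)                  # process noise (assumed)
        R = np.array([[obs_var]])                    # measurement noise (assumed)

        x = np.array([observations[0], 0.0])         # initial state estimate
        P = np.eye(2)                                # initial covariance

        for z in observations:
            # predict step
            x = F @ x
            P = F @ P @ F.T + Q
            # update step
            y = z - H @ x                            # innovation
            S = H @ P @ H.T + R                      # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            x = x + (K @ y)
            P = (np.eye(2) - K @ H) @ P

        # roll the dynamics forward with no further measurements
        for _ in range(horizon_steps):
            x = F @ x
        return float(x[0])
    ```

    For a steadily worsening series the filter picks up the negative slope and extrapolates it, whereas a flat series yields a flat forecast.
    
    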

    Data-Driven Learning and Resource Allocation in Healthcare Operations Management

    The tremendous advances in machine learning and optimization over the past decade have immensely increased the opportunity to personalize and improve decisions for a plethora of problems in healthcare. This brings forward several challenges and opportunities that have been the primary motivation behind this dissertation and its contributions in both practical and theoretical aspects. This dissertation is broadly about sequential decision-making and statistical learning under limited resources. In this area, we treat sequentially arriving individuals, each of whom should be assigned to the most appropriate resource. For each arrival, the decision-maker receives some contextual information, chooses an action, and gains noisy feedback corresponding to the action. The aim is to minimize the regret of choosing sub-optimal actions over a time horizon. We provide data-driven and personalized methodologies for this class of problems. Our data-driven methods adaptively learn from data over time to make efficient and effective real-time decisions for each individual when resources are limited. With a particular focus on high-impact problems in healthcare, we develop new online algorithms to solve healthcare operations problems. The theoretical contributions lie in the design and analysis of a new class of online learning algorithms for sequential decision-making and proving theoretical performance guarantees for them. The practical contributions are to apply our methodology to solve and provide managerial and practical insights for problems in healthcare, service operations, and operations management in general.
    PhD, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/174261/1/mzhale_1.pd
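    The setting described above — contextual arrivals, noisy feedback per action, and limited resources — can be sketched as a contextual bandit with per-arm capacities. The `CapacitatedLinUCB` class below, its LinUCB-style index, and all of its parameters are our illustrative assumptions, not the dissertation's actual algorithm.

    ```python
    import numpy as np

    class CapacitatedLinUCB:
        """Linear contextual bandit that skips arms whose capacity is exhausted.

        Illustrative sketch only: arm indices stand in for scarce resources,
        and the caller commits an assignment by calling update().
        """

        def __init__(self, n_arms, dim, capacity, alpha=1.0):
            self.capacity = list(capacity)                 # remaining budget per arm
            self.alpha = alpha                             # exploration width (assumed)
            self.A = [np.eye(dim) for _ in range(n_arms)]  # ridge Gram matrices
            self.b = [np.zeros(dim) for _ in range(n_arms)]

        def choose(self, x):
            """Return the feasible arm with the highest upper confidence bound."""
            best, best_ucb = None, -np.inf
            for a in range(len(self.A)):
                if self.capacity[a] <= 0:                  # resource exhausted
                    continue
                A_inv = np.linalg.inv(self.A[a])
                theta = A_inv @ self.b[a]                  # ridge reward estimate
                ucb = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
                if ucb > best_ucb:
                    best, best_ucb = a, ucb
            return best                                    # None if nothing feasible

        def update(self, a, x, r):
            """Record the observed noisy reward r for arm a under context x."""
            self.capacity[a] -= 1
            self.A[a] += np.outer(x, x)
            self.b[a] += r * x
    ```

    Capacity-awareness is the key twist versus vanilla LinUCB: once an arm's budget is spent, the learner must route later arrivals to the best remaining feasible alternative.
    
    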

    Harmonizing Safety and Speed: A Human-Algorithm Approach to Enhance the FDA's Medical Device Clearance Policy

    The United States Food and Drug Administration's (FDA's) Premarket Notification 510(k) pathway allows manufacturers to gain approval for a medical device by demonstrating its substantial equivalence to another legally marketed device. However, the inherent ambiguity of this regulatory procedure has led to high recall rates for many devices cleared through this pathway. This trend has raised significant concerns regarding the efficacy of the FDA's current approach, prompting a reassessment of the 510(k) regulatory framework. In this paper, we develop a combined human-algorithm approach to assist the FDA in improving its 510(k) medical device clearance process by reducing the risk of potential recalls and the workload imposed on the FDA. We first develop machine learning methods to estimate the risk of recall of 510(k) medical devices based on the information available at the time of submission. We then propose a data-driven clearance policy that recommends acceptance, rejection, or deferral to the FDA's committees for in-depth evaluation. We conduct an empirical study using a unique large-scale dataset of over 31,000 medical devices and 12,000 national and international manufacturers from over 65 countries that we assembled based on data sources from the FDA and the Centers for Medicare & Medicaid Services (CMS). A conservative evaluation of our proposed policy based on this data shows a 38.9% improvement in the recall rate and a 43.0% reduction in the FDA's workload. Our analyses also indicate that implementing our policy could result in significant annual cost savings ranging between $2.4 billion and $2.7 billion, which highlights the value of using a holistic and data-driven approach to improve the FDA's current 510(k) medical device evaluation pathway.
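    The three-way clearance recommendation described above can be sketched as simple thresholding on a predicted recall-risk score: auto-accept clear low-risk submissions, auto-reject clear high-risk ones, and defer the ambiguous middle band to committee review. The `clearance_decision` function and both thresholds below are hypothetical placeholders, not the paper's fitted policy.

    ```python
    def clearance_decision(recall_risk, accept_below=0.2, reject_above=0.8):
        """Map a predicted recall-risk score in [0, 1] to a three-way action.

        Thresholds are illustrative, not values estimated in the study;
        in practice they would be tuned to trade off recall risk against
        the committee workload created by deferrals.
        """
        if not 0.0 <= recall_risk <= 1.0:
            raise ValueError("recall_risk must be a probability in [0, 1]")
        if recall_risk < accept_below:
            return "accept"
        if recall_risk > reject_above:
            return "reject"
        return "defer"
    ```

    Widening the middle band sends more devices to human review (higher workload, fewer automated errors); narrowing it automates more decisions at the cost of more risk borne by the algorithm.
    
    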