75 research outputs found

    Dynamic early identification of hip replacement implants with high revision rates. Study based on NJR data from the UK during 2004-2012

    BACKGROUND: Hip replacement and hip resurfacing are common surgical procedures with an estimated risk of revision of 4% over a 10-year period. Approximately 58% of hip replacements will last 25 years. Some implants have higher revision rates, and early identification of poorly performing hip replacement implant brands and cup/head brand combinations is vital. AIMS: To develop a dynamic monitoring method for the revision rates of hip implants. METHODS: Data on outcomes following hip replacement surgery between 2004 and 2012 were obtained from the National Joint Registry (NJR) in the UK. A novel dynamic algorithm based on the CUmulative SUM (CUSUM) methodology, with adjustment for casemix and a random frailty term for each operating unit, was developed and implemented to monitor revision rates over time. The Benjamini-Hochberg FDR method was used to adjust for multiple testing of the numerous hip replacement implant brands and cup/head combinations at each time point. RESULTS: Three poorly performing cup brands and two cup/head brand combinations were detected. The Wright Medical UK Ltd Conserve Plus Resurfacing Cup (cup o), the DePuy ASR Resurfacing Cup (cup e), and the Endo Plus (UK) Limited EP-Fit Plus Polyethylene cup (cup g) showed stable multiple alarms over a period of a year or longer. Adding a random frailty term did not change the list of underperforming components. The model with the added random effect was more conservative, producing fewer and later alarms. CONCLUSIONS: Our new algorithm is an efficient method for early detection of poorly performing components in hip replacement surgery. It can also be used for similar tasks of dynamic quality monitoring in healthcare.
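    The two ingredients of this abstract's method — a one-sided CUSUM of observed-minus-expected revisions per implant brand, and Benjamini-Hochberg adjustment across the many brands tested at each time point — can be sketched as follows. This is a minimal illustration, not the NJR algorithm: it omits the casemix adjustment and the random frailty term, and the function names are ours.

    ```python
    import numpy as np

    def oe_cusum(observed, expected):
        """Upper one-sided O-E CUSUM: accumulates observed-minus-expected
        revisions, resetting at zero so only excess risk builds up."""
        c, path = 0.0, []
        for o, e in zip(observed, expected):
            c = max(0.0, c + (o - e))
            path.append(c)
        return path

    def benjamini_hochberg(pvals, alpha=0.05):
        """Step-up BH procedure: return a boolean array marking which
        hypotheses (here, brands alarming at a time point) are rejected
        while controlling the false discovery rate at alpha."""
        p = np.asarray(pvals, dtype=float)
        m = len(p)
        order = np.argsort(p)
        thresholds = alpha * np.arange(1, m + 1) / m
        below = p[order] <= thresholds
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.nonzero(below)[0].max()  # largest rank passing the step-up rule
            reject[order[: k + 1]] = True
        return reject
    ```

    In the paper's setting the per-brand CUSUM statistic would be converted to a p-value at each time point and the BH filter applied across all brands before raising an alarm.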

    Evaluating Failure Outcomes with Applications to Transplant Facility Performance.

    We develop several methods to evaluate the mortality experience of medical facilities, with applications to transplant facilities' post-transplant mortality and pre-transplant waitlist mortality. We aim to compare center-specific outcomes with standard practice while providing timely feedback to the centers. In Chapter II, we introduce a risk-adjusted O-E (Observed-Expected) Cumulative Sum (CUSUM) chart, along with monitoring bands as the decision criterion, to monitor post-transplant mortality in transplant programs. This can be used in place of the traditional but complicated V-mask and yields a more simply interpreted chart. The resulting plot provides bounds that allow simultaneous monitoring of failure time outcomes, with signals for 'worse than expected' or 'better than expected' performance. The plots are easily interpreted in that their slopes provide graphical estimates of relative risks and direct information on the additional failures needed to trigger a signal. In Chapter III, we discuss the construction of a weighted CUSUM to evaluate pre-transplant waitlist mortality of facilities, where transplantation can be considered dependent censoring. Patients are evaluated based on their current medical condition as reflected in a time-dependent variable, the Model for End-Stage Liver Disease (MELD) score, which is used to prioritize patients for liver transplants. We assume a 'standard' transplant practice through a transplant model, utilizing Inverse Probability of Censoring Weights (IPCW) to construct a weighted CUSUM. We evaluate the properties of a weighted zero-mean process as the basis of the proposed weighted CUSUM, and discuss a rule for setting control limits. A case study on regional transplant waitlist mortality demonstrates the usage of the proposed weighted CUSUM. In Chapter IV, we provide an explicit road map for using a Cox dependent censoring model in the IPCW approach, complete with details of implementation. In addition, we evaluate an alternative parametric IPCW approach to gain efficiency. Simulation studies and a case study on national liver transplant waitlist mortality demonstrate the similarity of estimates between the Cox IPCW and PWE IPCW approaches, and the computational savings of PWE IPCW compared with Cox IPCW. In the last chapter, we discuss future directions for our work.
    Ph.D., Biostatistics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/96101/1/renajsun_1.pd
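    A minimal sketch of the risk-adjusted O-E CUSUM idea described above, assuming per-patient observed outcomes (0/1 failures) and model-based expected failure probabilities; the `weights` argument marks where IPCW-style weights for dependent censoring would enter. Thresholds and names are illustrative, not the thesis's actual monitoring bands.

    ```python
    def two_sided_oe_cusum(observed, expected, weights=None,
                           h_upper=3.0, h_lower=-3.0):
        """Two-sided O-E CUSUM with resetting barriers.

        The upper path accumulates excess failures ('worse than expected');
        the lower path accumulates a deficit ('better than expected').
        `weights` is a placeholder for IPCW-style weights; this is a
        sketch under simplified assumptions, not the thesis's method.
        """
        if weights is None:
            weights = [1.0] * len(observed)
        cu, cl, signals = 0.0, 0.0, []
        for o, e, w in zip(observed, expected, weights):
            inc = w * (o - e)        # weighted observed-minus-expected increment
            cu = max(0.0, cu + inc)  # upper path: only excess risk accumulates
            cl = min(0.0, cl + inc)  # lower path: only risk deficit accumulates
            if cu >= h_upper:
                signals.append("worse")
                cu = 0.0             # restart monitoring after a signal
            elif cl <= h_lower:
                signals.append("better")
                cl = 0.0
            else:
                signals.append(None)
        return signals
    ```

    The slope interpretation in the abstract follows directly: a run of patients each contributing `o - e` shifts the path linearly, so the number of additional failures needed to reach the boundary can be read off the chart.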

    Detecting changes in high frequency data streams, with applications

    In recent years, problems relating to the analysis of data streams have become widespread. A data stream is a collection of time ordered observations x1, x2, ... generated from the random variables X1, X2, .... It is assumed that the observations are univariate and independent, and that they arrive in discrete time. Unlike traditional sequential analysis problems considered by statisticians, the size of a data stream is not assumed to be fixed, and new observations may be received over time. The rate at which these observations are received can be very high, perhaps several thousand every second. Therefore computational efficiency is very important, and methods used for analysis must be able to cope with potentially huge data sets. This paper is concerned with the task of detecting whether a data stream contains a change point, and extends traditional methods for sequential change detection to the streaming context. We focus on two different settings of the change point problem. The first is nonparametric change detection where, in contrast to most of the existing literature, we assume that nothing is known about either the pre- or post-change stream distribution. The task is then to detect a change from an unknown base distribution F0 to an unknown distribution F1. Further, we impose the constraint that change detection methods must have a bounded rate of false positives, which is important when it comes to assessing the significance of discovered change points. It is this constraint which makes the nonparametric problem difficult. We present several novel methods for this problem, and compare their performance via extensive experimental analysis. The second strand of our research is Bernoulli change detection, with application to streaming classification. In this setting, we assume a parametric form for the stream distribution, but one where both the pre- and post-change parameters are unknown. 
The task is again to detect changes while maintaining control of the false positive rate. After developing two different methods for tackling the pure Bernoulli change detection task, we then show how our approach can be deployed in streaming classification applications. Here, the goal is to classify objects into one of several categories. In the streaming case, the optimal classification rule can change over time, and classification techniques that are not able to adapt to these changes will suffer performance degradation. We show that by focusing only on the frequency of errors produced by the classifier, we can treat this as a Bernoulli change detection problem, and again perform extensive experimental analysis to show the value of our methods.
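    For intuition, here is the textbook Bernoulli CUSUM with known pre- and post-change error rates p0 and p1 — a simpler setting than the one described above, where both parameters are unknown and the false positive rate must be bounded. Names and the threshold value are illustrative.

    ```python
    import math

    def bernoulli_cusum(stream, p0, p1, h):
        """Classic one-sided Bernoulli CUSUM: watch a 0/1 stream (e.g.
        classifier errors) for a shift from rate p0 up to p1 > p0.
        Assumes both rates are known, unlike the unknown-parameter
        setting in the abstract. Returns the index at which a change
        is signalled, or None if no signal occurs."""
        llr1 = math.log(p1 / p0)                # increment for an error (x = 1)
        llr0 = math.log((1 - p1) / (1 - p0))    # increment for a success (x = 0)
        s = 0.0
        for t, x in enumerate(stream):
            # accumulate the log-likelihood ratio, floored at zero
            s = max(0.0, s + (llr1 if x else llr0))
            if s >= h:
                return t
        return None
    ```

    Each observation can be processed in constant time, which is what makes this family of methods viable at the stream rates discussed above.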

    Modeling and designing control chart for monitoring time-between events data

    Ph.D. (Doctor of Philosophy) thesis.

    Applications and Experiences of Quality Control

    The rich palette of topics set out in this book provides a sufficiently broad overview of developments in the field of quality control. By providing detailed information on various aspects of quality control, this book can serve as a basis for starting interdisciplinary cooperation, which has increasingly become an integral part of scientific and applied research.

    Time Series Modelling

    The analysis and modeling of time series is of the utmost importance in various fields of application. This Special Issue is a collection of articles on a wide range of topics, covering stochastic models for time series as well as methods for their analysis, univariate and multivariate time series, real-valued and discrete-valued time series, applications of time series methods to forecasting and statistical process control, and software implementations of methods and models for time series. The proposed approaches and concepts are thoroughly discussed and illustrated with several real-world data examples.

    Monitoring diseases based on register data: Methods and application in the Danish swine production
