    The Analysis of Acute Stroke Clinical Trials with Responder Analysis Outcomes

    Traditionally in acute stroke clinical trials, the primary outcome has been a dichotomized modified Rankin Scale (mRS). The mRS is a 7-point ordinal scale indicating a patient's level of disability following a stroke. Traditional analyses have used a fixed dichotomization scheme, defining 'success' as an mRS of 0-1 or 0-2. This method fails to address the concern that stroke severity may affect the likelihood of a successful outcome: subjects with mild strokes may reach the defined threshold for success more easily than subjects with severe strokes. Consequently, subjects do not contribute equally to the estimation of the treatment effect. Stroke studies are increasingly turning to statistical methods that make more efficient use of the available data, including responder analysis. Responder analysis, also known as the sliding dichotomy, allows the definition of success to vary with baseline severity. This method puts patients on a more level playing field, producing more clinically relevant insight into the actual effect of investigational stroke treatments. It is unclear whether statistical analyses should additionally adjust for baseline severity when responder analysis is used, since the outcome definition already accounts for baseline severity. Using simulations, this research compares the operating characteristics of unadjusted and adjusted analyses under the responder analysis scheme. We also compare the treatment effect estimates and their standard errors between methods. Under various treatment effect settings, the operating characteristics of the unadjusted and adjusted analyses do not appear to differ substantially; power and type I error were preserved for both.
Our results suggest that, under the given treatment effect scenarios, the decision whether to adjust for baseline severity should be guided by the needs of the study rather than by a strict guideline, as type I error rates and power do not appear to vary meaningfully between the methods.
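The contrast between fixed dichotomization and the sliding dichotomy can be sketched in a few lines. The severity tiers, mRS thresholds, and outcome distribution below are hypothetical illustrations, not the simulation settings used in this research: the point is only that the sliding dichotomy assigns each baseline severity stratum its own success threshold, while fixed dichotomization applies one threshold (here mRS 0-1) to everyone.

```python
import random

random.seed(42)

# Hypothetical severity tiers and thresholds (illustrative only):
# under the sliding dichotomy, each baseline stratum gets its own
# mRS cutoff for "success"; fixed dichotomization uses one cutoff.
SLIDING_THRESHOLDS = {"mild": 1, "moderate": 2, "severe": 3}
FIXED_THRESHOLD = 1

def simulate_mrs(severity):
    """Draw a follow-up mRS (0-6); worse baseline severity shifts the
    distribution toward higher (worse) scores. Illustrative only."""
    shift = {"mild": 0, "moderate": 1, "severe": 2}[severity]
    return min(6, random.randint(0, 4) + shift)

def responder(mrs, severity):
    # Sliding dichotomy: the success threshold depends on baseline severity.
    return mrs <= SLIDING_THRESHOLDS[severity]

def fixed_success(mrs):
    # Fixed dichotomization: one threshold regardless of severity.
    return mrs <= FIXED_THRESHOLD

# Simulate a cohort balanced across severity strata.
severities = ["mild", "moderate", "severe"] * 100
outcomes = [(s, simulate_mrs(s)) for s in severities]

sliding_rate = sum(responder(m, s) for s, m in outcomes) / len(outcomes)
fixed_rate = sum(fixed_success(m) for _, m in outcomes) / len(outcomes)
print(f"sliding dichotomy success rate:   {sliding_rate:.2f}")
print(f"fixed dichotomization success rate: {fixed_rate:.2f}")
```

Because the sliding thresholds are at least as generous as the fixed one in every stratum, severe-stroke patients who would never count as successes under the fixed rule (e.g. an mRS of 3) can still contribute responder events, which is how the method levels the playing field across baseline severity.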