
    Exploring the Implications of Analysing Time-to-Event Outcomes as Binary in Meta-analysis

    Systematic reviews and meta-analyses of time-to-event outcomes can be analysed on the hazard ratio (HR) scale but are very often dichotomised and analysed as binary using effect measures such as odds ratios (OR). This thesis investigates the impact of using these different scales by re-analysing meta-analyses from the Cochrane Database of Systematic Reviews (CDSR), using individual participant data (IPD) and a comprehensive simulation study. For the CDSR and IPD, the pooled HR estimates were closer to 1 than the OR estimates in most meta-analyses. Important differences in between-study heterogeneity between the HR and OR analyses were observed, and these caused discrepant conclusions between the two scales in some meta-analyses. Situations in which the clog-log link outperformed the logit link, and vice versa, were apparent, indicating that the choice of method does matter. Differences between scales occurred mainly when event probability was high, and could arise via differences in between-study heterogeneity or via increased within-study standard error in OR relative to HR analyses. In most simulation scenarios, analysing time-to-event data as binary using the logit link did not substantially affect bias and coverage, except where a large percentage of random censoring and long follow-up times were present; the method does, however, lack precision, particularly for small meta-analyses. Analysing the data as binary using the clog-log link consistently produced more bias, lower coverage and lower power. If an HR estimate cannot be obtained per trial to perform a meta-analysis of time-to-event data, a meta-analysis on the OR scale (using the logit link) can be conducted, with the caveat that it will provide less precise estimates. Investigators should avoid performing meta-analyses on the OR scale when event probability is high and a large percentage of random censoring, and therefore long follow-up, is present in the included trials.
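The divergence between the OR and HR scales at high event probability can be illustrated with a small sketch. It assumes exponential survival, proportional hazards and no censoring (a simplification, not the thesis's simulation design): under these assumptions the clog-log link recovers the hazard ratio exactly, while the OR drifts away from the HR as the control event probability grows.

```python
import math

def binary_effects_from_hr(hr, p0):
    """Effect measures when time-to-event data with true hazard ratio `hr`
    are dichotomised at a fixed follow-up time, given control event
    probability `p0` (exponential survival, no censoring assumed)."""
    # Under proportional hazards, S1(t) = S0(t)**hr, hence:
    p1 = 1.0 - (1.0 - p0) ** hr                    # treatment event probability
    odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
    # The clog-log link is exact in this setting: the difference in
    # cloglog(p) equals log(HR).
    cloglog_diff = math.log(-math.log(1 - p1)) - math.log(-math.log(1 - p0))
    return p1, odds_ratio, math.exp(cloglog_diff)

for p0 in (0.05, 0.30, 0.70):                      # low to high event probability
    p1, or_, hr_back = binary_effects_from_hr(0.8, p0)
    print(f"p0={p0:.2f}  OR={or_:.3f}  exp(cloglog diff)={hr_back:.3f}")
```

At p0 = 0.05 the OR is close to the true HR of 0.8; at p0 = 0.70 it is noticeably further from 1, matching the pattern reported above.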

    Optimal cutoff points for classification in diagnostic studies: new contributions and software development

    Continuous diagnostic tests (biomarkers or risk markers) are often used to discriminate between healthy and diseased populations. For the clinical application of such tests, the key aspect is how to select an appropriate cutpoint or discrimination value c that defines positive and negative test results. In general, individuals with a diagnostic test value smaller than c are classified as healthy, and otherwise as diseased. In the literature, several methods have been proposed to select the threshold value c according to different specific criteria of optimality. Among others, one of the methods most used in clinical practice is the Symmetry point, which maximizes both types of correct classification simultaneously. From a graphical viewpoint, the Symmetry point is associated with the operating point on the Receiver Operating Characteristic (ROC) curve that intersects the diagonal line passing through the points (0,1) and (1,0). However, this cutpoint is valid only when the error of misclassifying a diseased patient has the same severity as the error of misclassifying a healthy patient. Since this may not be the case in practice, an important issue in assessing the clinical effectiveness of a biomarker is to take into account the costs associated with the decisions taken when selecting the threshold value. Moreover, to facilitate the task of selecting the optimal cut-off point in clinical practice, it is essential to have software that implements the existing optimality criteria in a user-friendly environment. Another interesting issue appears when the marker shows an irregular distribution, with a dominance of diseased subjects in noncontiguous regions. Using a single cutpoint, as is common practice in traditional ROC analysis, would not be appropriate for these scenarios because it would lead to erroneous conclusions, not taking full advantage of the intrinsic classificatory capacity of the marker.
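A minimal empirical version of the Symmetry point can be sketched as follows: scan the observed marker values and pick the cutoff where sensitivity and specificity are closest to equal. This is only a naive grid estimator for illustration (the thesis develops more refined methods, including cost-weighted variants); the normal samples below are invented test data.

```python
import numpy as np

def symmetry_point(healthy, diseased):
    """Empirical Symmetry point: the cutoff c at which sensitivity
    approximately equals specificity.  Assumes higher marker values
    indicate disease; simple grid over observed values."""
    healthy, diseased = np.asarray(healthy), np.asarray(diseased)
    cuts = np.unique(np.concatenate([healthy, diseased]))
    se = np.array([(diseased >= c).mean() for c in cuts])   # sensitivity at each cut
    sp = np.array([(healthy < c).mean() for c in cuts])     # specificity at each cut
    best = np.argmin(np.abs(se - sp))                       # closest to the (0,1)-(1,0) diagonal
    return cuts[best], se[best], sp[best]

# Illustrative data: healthy ~ N(0,1), diseased ~ N(1.5,1); the true
# Symmetry point is c = 0.75 with Se = Sp ~= 0.77.
rng = np.random.default_rng(0)
c, se, sp = symmetry_point(rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500))
```

On the ROC curve, this cutoff is the operating point nearest the intersection with the diagonal through (0,1) and (1,0), as described above.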

    Nonparametric Simultaneous Confidence Intervals for Multiple Comparisons of Correlated Areas Under the ROC Curves

    The performance of a medical diagnostic test yielding quantitative or ordinal measurements is often assessed in terms of its AUC, the area under the receiver operating characteristic curve. As new tests are constantly being developed, an essential task is to compare multiple AUCs, commonly derived from the same group of subjects. For this purpose, previous research usually uses an omnibus chi-square test that is non-informative and lacks power. In this study, we propose new methods of constructing simultaneous confidence intervals based on the theory of nonparametric U-statistics. To improve the small-sample properties, we adapt the method of variance estimates recovery by obtaining confidence limits for each AUC based on logit and inverse sinh transformations. A large simulation study demonstrates the good performance of our new method.
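The per-AUC building block described above can be sketched in a few lines: the Mann-Whitney (U-statistic) AUC estimate, a DeLong-type variance from its structural components, and a confidence interval computed on the logit scale and back-transformed. This is only the single-AUC ingredient, under assumed simulated data, not the full simultaneous-interval construction of the paper.

```python
import numpy as np

def auc_logit_ci(diseased, healthy, z=1.96):
    """Nonparametric (Mann-Whitney) AUC with a DeLong-type variance and a
    95% confidence interval formed on the logit scale (ties get weight 1/2)."""
    x = np.asarray(diseased, float)[:, None]    # diseased scores, column
    y = np.asarray(healthy, float)[None, :]     # healthy scores, row
    psi = (x > y) + 0.5 * (x == y)              # pairwise U-statistic kernel
    auc = psi.mean()
    v10 = psi.mean(axis=1)                      # structural components (diseased)
    v01 = psi.mean(axis=0)                      # structural components (healthy)
    var = v10.var(ddof=1) / len(v10) + v01.var(ddof=1) / len(v01)
    se = np.sqrt(var)
    # Delta method on the logit scale, then back-transform:
    half = z * se / (auc * (1 - auc))
    logit = np.log(auc / (1 - auc))
    expit = lambda t: 1.0 / (1.0 + np.exp(-t))
    return auc, expit(logit - half), expit(logit + half)

# Illustrative data: diseased ~ N(1,1), healthy ~ N(0,1), true AUC ~= 0.76.
rng = np.random.default_rng(42)
auc, lo, hi = auc_logit_ci(rng.normal(1.0, 1.0, 300), rng.normal(0.0, 1.0, 300))
```

The logit (or inverse sinh) transformation keeps the limits inside (0,1), which is the small-sample advantage the abstract refers to.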

    Cost-effectiveness of riociguat and bosentan for the treatment of chronic thromboembolic pulmonary hypertension

    OBJECTIVE: To conduct a cost-effectiveness analysis of riociguat and bosentan in the management of chronic thromboembolic pulmonary hypertension (CTEPH) from a United States payer perspective. METHODS: A Markov model was developed following the recommendations of the International Society for Pharmacoeconomics and Outcomes Research - Society for Medical Decision Making Modeling Good Research Practices. A cohort of patients with inoperable CTEPH or post-pulmonary endarterectomy CTEPH was simulated over their lifetime. Health outcomes were measured as quality-adjusted life years (QALY). Efficacy and safety data were obtained from the BENEFiT and CHEST-1 trials. Drug costs and associated costs for the management of CTEPH were obtained from Redbook and published sources such as the Healthcare Cost and Utilization Project (HCUPnet) and the Centers for Medicare & Medicaid Services Physician Fee Schedule. Deterministic and probabilistic sensitivity analyses were performed to assess the robustness of the model projections. RESULTS: Riociguat was more effective than bosentan, with an incremental cost of -$132,065 and an incremental quality-adjusted life year (QALY) gain of 0.20, corresponding to an incremental cost-effectiveness ratio (ICER) of -$649,380 per QALY (in favor of riociguat). Riociguat had a lower total discounted lifetime cost than bosentan ($2,307,488 versus $2,439,555). Probabilistic sensitivity analyses confirmed dominance of riociguat in 74% of the Monte Carlo simulations. CONCLUSIONS: The results of this model indicate that riociguat is more effective and less costly than bosentan in the management of patients with inoperable CTEPH or post-pulmonary endarterectomy CTEPH.
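The mechanics of a cohort Markov model and an ICER calculation can be sketched as below. Every number here (states, transition probabilities, costs, utilities) is hypothetical and deliberately simple: a three-state stable/progressed/dead model with yearly cycles and no half-cycle correction, not the published CTEPH model.

```python
import numpy as np

def markov_ce(P, costs, utilities, cycles=40, disc=0.03):
    """Discounted lifetime cost and QALYs for a cohort Markov model.
    P: transition matrix (rows sum to 1) over [stable, progressed, dead];
    costs/utilities: per-cycle values per state.  Illustrative only."""
    state = np.array([1.0, 0.0, 0.0])           # whole cohort starts stable
    cost = qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + disc) ** t             # discount factor for cycle t
        cost += d * state @ costs
        qaly += d * state @ utilities
        state = state @ P                       # move cohort one cycle forward
    return cost, qaly

# Hypothetical drug A slows progression but has a higher per-cycle cost.
P_a = np.array([[0.90, 0.07, 0.03], [0.0, 0.85, 0.15], [0.0, 0.0, 1.0]])
P_b = np.array([[0.85, 0.11, 0.04], [0.0, 0.85, 0.15], [0.0, 0.0, 1.0]])
cost_a, qaly_a = markov_ce(P_a, np.array([60000.0, 40000.0, 0.0]),
                           np.array([0.75, 0.50, 0.0]))
cost_b, qaly_b = markov_ce(P_b, np.array([55000.0, 40000.0, 0.0]),
                           np.array([0.75, 0.50, 0.0]))
# ICER = incremental cost / incremental QALYs.  Here it is positive (extra
# cost per QALY gained); a negative incremental cost alongside a QALY gain,
# as reported for riociguat, would mean dominance instead.
icer = (cost_a - cost_b) / (qaly_a - qaly_b)
```

Probabilistic sensitivity analysis would rerun this calculation many times with inputs drawn from distributions, counting how often one strategy dominates.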

    A simulation study of the effect of therapeutic horseback riding: a logistic regression approach

    Therapeutic horseback riding (THR) uses the horse as a therapeutic apparatus in physical and psychological therapy. This dissertation suggests a more appropriate technique for measuring the effect of THR. A survey of the statistical methods used to determine the effect of THR was undertaken. Although researchers observed clinically meaningful change in several of the studies, this was not supported by statistical tests. A logistic regression approach is proposed as a solution to many of the problems experienced by researchers on THR. Since large THR-related data sets are not available, data were simulated. Logistic regression and t-tests were used to analyse the same simulated data sets, and the results were compared. The advantages of the logistic regression approach are discussed. This statistical technique can be applied in any field where the therapeutic value of an intervention has to be proven scientifically.
    Mathematical Sciences, M.Sc. (Statistics)
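The simulate-then-fit approach described above can be sketched as follows: simulate a binary "improved" outcome for treated and control groups, then fit a logistic regression by Newton-Raphson and read the treatment effect off as an odds ratio. The group sizes and improvement probabilities are invented for illustration, not taken from the dissertation's simulation design.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Logistic regression fitted by Newton-Raphson (no regularisation).
    X: design matrix including an intercept column; y: 0/1 outcomes."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted probabilities
        W = p * (1.0 - p)                       # IRLS weights
        grad = X.T @ (y - p)                    # score vector
        hess = (X * W[:, None]).T @ X           # observed information
        beta += np.linalg.solve(hess, grad)     # Newton step
    return beta

# Simulated THR study: assumed improvement rates of 60% (riding) vs 40% (control).
rng = np.random.default_rng(1)
n = 400
treated = rng.integers(0, 2, n)
improved = rng.random(n) < np.where(treated == 1, 0.60, 0.40)
X = np.column_stack([np.ones(n), treated])
beta = fit_logistic(X, improved.astype(float))
odds_ratio = np.exp(beta[1])                    # treatment effect; true OR = 2.25
```

Because the outcome is genuinely binary, the logistic model respects its distribution, whereas a t-test on such data relies on a normal approximation.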

    Going backwards? The slowdown, stalling and reversal of progress in reducing child poverty in Britain during the second decade of the 21st century, and the groups of children that were affected.

    This paper has been written as part of the Social Policies and Distributional Outcomes (SPDO) research programme and provides an in-depth assessment of the slowdown, stalling and reversal of progress in reducing child poverty during the second decade of the 21st century, and how this affected children from different social groups. The paper provides a more comprehensive and detailed body of evidence on patterns and trends in child poverty during the 2010s by social group than has previously been available. We build up granular evidence on patterns and trends in child poverty by social group with systematic disaggregation by a wide range of characteristics, including characteristics that are protected in equalities legislation (age, gender, disability and ethnicity) and additional characteristics that are important for equalities and human rights monitoring purposes (young carer status, country of birth, lone parent status, number of dependent children, geographical area, household socio-economic classification, employment status and tenure). All of our estimates (both cross-sectional and changes over time) are accompanied by detailed assessments of statistical significance. Additionally, we use multivariate as well as descriptive methods, which enables us to assess the independent associations between child poverty and different markers of child disadvantage at the beginning of the second decade of the 21st century and how these changed during the 2010s. Overall, the analysis shows that the slowdown, stalling and reversal of progress in reducing child poverty during the second decade of the 21st century affected children from many different social groups. However, it is of particular concern that some of the groups that were already the most disadvantaged at the beginning of the 2010s were disproportionately impacted, with further increases in their child poverty risks and a widening of prevalence gaps with more advantaged comparator groups.
    Multivariate analysis shows that the independent associations between child poverty and some of the key markers of disadvantage and risk that we are concerned with in this study also strengthened during the 2010s. The evidence we present raises fundamental questions about retrogression in social outcomes in the second decade of the 21st century, the impact of underlying changes in social policies and social protection, the failure to protect vulnerable groups during a period of fiscal adjustment, austerity and welfare reform, and underlying issues of social justice and human rights. The findings also underline the case for the adoption of a new cross-governmental child poverty strategy for the 2020s.

    Meta-analytic approaches for summarising and comparing the accuracy of medical tests

    Medical tests are essential for patient care. Evidence-based assessment of the relative accuracy of competing diagnostic tests informs clinical and policy decision making. This thesis addresses questions centred on assessing the reliability and transparency of evidence from systematic reviews and meta-analyses of comparative test accuracy, including the validity of meta-analytic methods. Case studies were used to highlight key methodological issues, and provided rationale and context for the thesis. Published systematic reviews of multiple tests were identified and used to provide a descriptive survey of recent practice. The availability of comparative accuracy studies and differences between meta-analyses of direct (head-to-head) and indirect (between-study) comparisons were assessed. Comparative meta-analysis methods were reviewed and those deemed statistically robust were empirically evaluated. Using simulation, the performance of hierarchical methods for meta-analysis of a single test was investigated in challenging scenarios (e.g. few studies or sparse data) and the implications for test comparisons were considered. Poor statistical methods and incomplete reporting threaten the reliability of comparative reviews. Differences exist between direct and indirect comparisons, but direct comparisons were seldom feasible because comparative studies were unavailable. Furthermore, inappropriate use of meta-analytic methods generated misleading results and conclusions. Therefore, recommendations for the use of valid methods and a reporting checklist were developed.
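For contrast with the hierarchical methods evaluated in the thesis, the simplest pooling approach can be sketched: DerSimonian-Laird random-effects meta-analysis of sensitivity on the logit scale. This univariate method ignores the correlation between sensitivity and specificity that hierarchical (bivariate) models capture, which is exactly the kind of shortcut the thesis warns can mislead; the study counts below are hypothetical.

```python
import numpy as np

def dl_pool(logits, variances):
    """DerSimonian-Laird random-effects pooling of logit-transformed
    proportions.  A deliberately simple univariate sketch; hierarchical
    models jointly model sensitivity and specificity instead."""
    w = 1.0 / variances
    fixed = np.sum(w * logits) / np.sum(w)          # fixed-effect estimate
    q = np.sum(w * (logits - fixed) ** 2)           # Cochran's Q
    df = len(logits) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_star = 1.0 / (variances + tau2)               # random-effects weights
    pooled = np.sum(w_star * logits) / np.sum(w_star)
    return 1.0 / (1.0 + np.exp(-pooled)), tau2      # back-transformed estimate

# Hypothetical per-study true positives and false negatives:
tp = np.array([45, 80, 30, 62])
fn = np.array([5, 20, 10, 8])
logit_se = np.log(tp / fn)                          # logit of sensitivity
var_se = 1.0 / tp + 1.0 / fn                        # approximate within-study variance
pooled_se, tau2 = dl_pool(logit_se, var_se)
```

With few studies or sparse cells, this approximation degrades, which is one reason the thesis evaluates hierarchical methods in exactly those challenging scenarios.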