
    Power estimations for non-primary outcomes in randomised clinical trials

    Objective and methods: Trialists rarely report power estimations for non-primary outcomes. In the present article, we describe how to define a valid hierarchy of outcomes in a randomised clinical trial, limiting problems with Type I and Type II errors, using considerations of the clinical relevance of the outcomes and power estimations. Conclusion: Power estimations of non-primary outcomes may guide trialists in classifying non-primary outcomes as secondary or exploratory. The power estimations are simple, and if they are used systematically, more appropriate outcome hierarchies can be defined and trial results will become more interpretable.
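    As a rough illustration of the kind of power estimation the abstract recommends, the sketch below computes approximate power for a continuous non-primary outcome under a two-sided normal approximation. This is not the authors' code, and the effect size, standard deviation and sample size are made-up values for illustration only:

    ```python
    # Minimal sketch: approximate power of a two-sample comparison of means,
    # normal approximation, two-sided test. All numeric inputs are illustrative.
    from math import sqrt
    from statistics import NormalDist

    def approx_power(delta, sd, n_per_group, alpha=0.05):
        """Approximate power to detect a mean difference `delta`, given a
        common standard deviation `sd` and `n_per_group` patients per arm."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value
        se = sd * sqrt(2 / n_per_group)                 # SE of the difference
        return 1 - NormalDist().cdf(z_alpha - abs(delta) / se)

    # A non-primary outcome with a modest expected effect:
    print(round(approx_power(delta=5, sd=20, n_per_group=100), 2))
    ```

    With these illustrative numbers the approximate power falls well below the conventional 0.8, which, under the hierarchy described above, would argue for classifying such an outcome as exploratory rather than secondary.
    
    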

    Pain relief that matters to patients: systematic review of empirical studies assessing the minimum clinically important difference in acute pain

    BACKGROUND: The minimum clinically important difference (MCID) is used to interpret the clinical relevance of results reported by trials and meta-analyses, as well as to plan sample sizes in new studies. However, there is a lack of consensus about the size of the MCID in acute pain, a core symptom affecting patients across many clinical conditions. METHODS: We identified and systematically reviewed empirical studies of the MCID in acute pain. We searched PubMed, EMBASE and the Cochrane Library, and included prospective studies determining the MCID using a patient-reported anchor and a one-dimensional pain scale (e.g. a 100 mm visual analogue scale). We summarised results and explored reasons for heterogeneity using meta-regression, subgroup analyses and individual patient data meta-analyses. RESULTS: We included 37 studies (8479 patients). Thirty-five studies used a mean change approach, i.e. the MCID was assessed as the mean difference in pain score among patients who reported a minimum degree of improvement, while seven studies used a threshold approach, i.e. the MCID was assessed as the threshold in pain reduction associated with the best accuracy (sensitivity and specificity) for identifying improved patients. Meta-analyses found considerable heterogeneity between studies (absolute MCID: I² = 93%, relative MCID: I² = 75%), so results were presented qualitatively, while analyses focused on exploring reasons for heterogeneity. The reported absolute MCID values ranged widely from 8 to 40 mm (standardised to a 100 mm scale) and the relative MCID values from 13% to 85%. From analyses of individual patient data (seven studies, 918 patients), we found baseline pain strongly associated with absolute, but not relative, MCID, as patients with higher baseline pain needed a larger pain reduction to perceive relief. Subgroup analyses showed that the definition of improved patients (one or several categories of improvement, or meaningful change) and the design of studies (single or multiple measurements) also influenced MCID values. CONCLUSIONS: The MCID in acute pain varied greatly between studies and was influenced by baseline pain, definitions of improved patients and study design. The MCID is context-specific and potentially misleading if determined, applied or interpreted inappropriately. Explicit and conscientious reflection on the choice of a reference value is required when using the MCID to classify research results as clinically important or trivial.
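    The "mean change" approach described above can be sketched in a few lines: the MCID is taken as the mean pain reduction among patients whose anchor response indicates the smallest degree of improvement. The records and anchor categories below are invented for illustration; they are not data from the review:

    ```python
    # Minimal sketch of the mean change approach, using made-up example data.
    # Each record: (baseline mm, follow-up mm, patient-reported anchor category)
    # on a pain scale standardised to 100 mm.
    from statistics import mean

    patients = [
        (70, 50, "minimally improved"),
        (60, 45, "minimally improved"),
        (80, 40, "much improved"),
        (55, 55, "unchanged"),
        (65, 48, "minimally improved"),
    ]

    def mcid_mean_change(records, category="minimally improved"):
        """MCID as the mean pain reduction among patients reporting the
        minimum degree of improvement on the anchor question."""
        reductions = [base - follow for base, follow, anchor in records
                      if anchor == category]
        return mean(reductions)

    print(mcid_mean_change(patients))
    ```

    The threshold approach would instead scan candidate cut-offs in pain reduction and keep the one that best discriminates improved from non-improved patients (e.g. by maximising sensitivity plus specificity), which is why the two approaches can yield different MCID values from the same data.
    
    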

    STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in the reporting of diagnostic accuracy studies.