
    Bias Due to Changes in Specified Outcomes during the Systematic Review Process

    Background: Adding, omitting or changing outcomes after a systematic review protocol is published can result in bias because it increases the potential for unacknowledged or post hoc revisions of the planned analyses. The main objective of this study was to look for discrepancies between primary outcomes listed in protocols and in the subsequent completed reviews published on the Cochrane Library. A secondary objective was to quantify the risk of bias in a set of meta-analyses where discrepancies between outcome specifications in protocols and reviews were found.

    Methods and Findings: New reviews from three consecutive issues of the Cochrane Library were assessed. For each review, the primary outcome(s) listed in the review protocol and the review itself were identified, and review authors were contacted to provide reasons for any discrepancies. Over a fifth (64/288, 22%) of protocol/review pairings contained a discrepancy in at least one outcome measure, of which 48 (75%) were attributable to changes in the primary outcome measure. Where lead authors could recall a reason for the discrepancy in the primary outcome, potential bias was found in nearly a third (8/28, 29%) of these reviews, with changes made after knowledge of the results from individual trials. Only 4 (6%) of the 64 reviews with an outcome discrepancy described the reason for the change in the review, and none of the eight reviews containing potentially biased discrepancies acknowledged the change. Outcomes that were promoted in the review were more likely to be significant than outcomes with no discrepancy (relative risk 1.66, 95% CI 1.10 to 2.49, p = 0.02).

    Conclusion: Making changes to a review after seeing the results of the included studies can lead to biased and misleading interpretation if the importance of an outcome (primary or secondary) is changed on the basis of those results. Our assessment showed that reasons for discrepancies with the protocol are not reported in the review, demonstrating an under-recognition of the problem. Complete transparency in the reporting of changes in outcome specification is vital; systematic reviewers should ensure that any legitimate changes to outcome specification are reported with reasons in the review.
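    For readers unfamiliar with the statistic quoted above, the short Python sketch below shows how a relative risk and its 95% confidence interval are conventionally computed from a 2x2 table of counts. The counts used in the example are hypothetical and are not the study's data.

        import math

        def relative_risk_ci(events_a, total_a, events_b, total_b, z=1.96):
            """Relative risk of group A vs. group B with a Wald-type 95% CI
            computed on the log scale (standard textbook formula)."""
            risk_a = events_a / total_a
            risk_b = events_b / total_b
            rr = risk_a / risk_b
            se_log_rr = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
            lower = math.exp(math.log(rr) - z * se_log_rr)
            upper = math.exp(math.log(rr) + z * se_log_rr)
            return rr, lower, upper

        # Hypothetical counts: 30 of 48 promoted outcomes significant versus
        # 25 of 67 outcomes where the review matched its protocol.
        print(relative_risk_ci(30, 48, 25, 67))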

    Consensus-based recommendations for investigating clinical heterogeneity in systematic reviews

    Background: Critics of systematic reviews have argued that these studies often fail to inform clinical decision making because their results are too general or the data are too sparse for findings to be applied to individual patients or to other decisions. While there is some consensus on methods for investigating statistical and methodological heterogeneity, little attention has been paid to clinical aspects of heterogeneity. Clinical heterogeneity, or true effect heterogeneity, can be defined as variability among studies in the participants, the types or timing of outcome measurements, and the intervention characteristics. The objective of this project was to develop recommendations for investigating clinical heterogeneity in systematic reviews.

    Methods: We used a modified Delphi technique with three phases: (1) pre-meeting item generation; (2) a face-to-face consensus meeting in the form of a modified Delphi process; and (3) post-meeting feedback. We identified and invited potential participants with expertise in systematic review methodology, systematic review reporting, or statistical aspects of meta-analyses, or who had published papers on clinical heterogeneity.

    Results: Between April and June of 2011, we conducted phone calls with participants. In June 2011 we held the face-to-face focus group meeting in Ann Arbor, Michigan. First, we agreed upon a definition of clinical heterogeneity: variations in the treatment effect that are due to differences in clinically related characteristics. Next, we discussed and generated recommendations in the following 12 categories related to investigating clinical heterogeneity: the systematic review team, planning investigations, rationale for choice of variables, types of clinical variables, the role of statistical heterogeneity, the use of plotting and visual aids, dealing with outlier studies, the number of investigations or variables, the role of the best evidence synthesis, types of statistical methods, the interpretation of findings, and reporting.

    Conclusions: Clinical heterogeneity is common in systematic reviews. Our recommendations can help guide systematic reviewers in conducting valid and reliable investigations of clinical heterogeneity. Findings of these investigations may allow for increased applicability of the findings of systematic reviews to the management of individual patients.

    http://deepblue.lib.umich.edu/bitstream/2027.42/112777/1/12874_2012_Article_987.pd
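    The recommendations above distinguish clinical from statistical heterogeneity. As a minimal illustration of the latter, the Python sketch below computes Cochran's Q and the I-squared statistic from a set of study effect estimates pooled by inverse-variance weighting; the effect sizes and standard errors are hypothetical and are not drawn from any particular review.

        import math

        def q_and_i_squared(effects, std_errors):
            """Cochran's Q and I-squared for study effect estimates
            (e.g. log odds ratios) under inverse-variance pooling."""
            weights = [1 / se ** 2 for se in std_errors]
            pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
            q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
            df = len(effects) - 1
            i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
            return q, i_squared

        # Hypothetical log odds ratios and standard errors from five studies.
        print(q_and_i_squared([0.2, 0.5, -0.1, 0.8, 0.3], [0.15, 0.2, 0.25, 0.3, 0.2]))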

    Five-year costs from a randomised comparison of bilateral and single internal thoracic artery grafts

    Background: The use of bilateral internal thoracic arteries (BITA) for coronary artery bypass grafting (CABG) may improve survival compared with CABG using a single internal thoracic artery (SITA). We assessed the long-term costs of BITA compared with SITA.

    Methods: Between June 2004 and December 2007, 3102 patients from 28 hospitals in seven countries were randomised to CABG surgery using BITA (n=1548) or SITA (n=1554). Detailed resource use data were collected from the initial hospital episode and annually up to 5 years. The associated costs of this resource use were assessed from a UK perspective, with 5-year totals calculated for each trial arm and pre-selected patient subgroups.

    Results: Total costs increased by approximately £1000 annually in each arm, with no significant annual difference between trial arms. Cumulative costs per patient at 5-year follow-up remained significantly higher in the BITA group (£18 629) than in the SITA group (£17 480; mean cost difference £1149, 95% CI £330 to £1968, p=0.006), due to the higher costs of the initial procedure. There were no significant differences between the trial arms in the costs associated with healthcare contacts, medication use or serious adverse events.

    Conclusions: The higher index costs for BITA were still present at 5-year follow-up, driven by the higher initial cost, with no further difference emerging between 1 and 5 years of follow-up. The overall cost-effectiveness of the two procedures, to be assessed at the primary endpoint of 10-year follow-up, will depend on composite differences in costs and quality-adjusted survival.
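    As a point of reference for the cost comparison above, the sketch below shows the arithmetic behind a between-arm mean difference with a normal-approximation 95% confidence interval. The per-arm standard errors used here are invented for illustration; the trial's actual analysis was based on patient-level data.

        import math

        def mean_difference_ci(mean_a, se_a, mean_b, se_b, z=1.96):
            """Difference in mean cost per patient between two arms with a
            normal-approximation 95% CI, assuming independent arm-level SEs."""
            diff = mean_a - mean_b
            se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
            return diff, diff - z * se_diff, diff + z * se_diff

        # Hypothetical arm-level summaries in GBP (means and standard errors).
        print(mean_difference_ci(18629, 300, 17480, 290))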

    Outcome reporting bias in trials: a methodological approach for assessment and adjustment in systematic reviews

    Systematic reviews of clinical trials aim to include all relevant studies conducted on a particular topic and to provide an unbiased summary of their results, producing the best evidence about the benefits and harms of medical treatments. Relevant studies, however, may not provide results for all measured outcomes or may selectively report only some of the analyses undertaken, leading to unnecessary waste in the production and reporting of research and potentially biasing the conclusions of systematic reviews. In this article, Kirkham and colleagues provide a methodological approach, with an example, for identifying missing outcome data and for assessing and adjusting for outcome reporting bias in systematic reviews.
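    The general idea of adjusting for outcome reporting bias can be conveyed with a crude sensitivity bound (this is only a sketch of the general idea, not the specific method described by Kirkham and colleagues): re-pool the meta-analysis under the pessimistic assumption that trials known to have measured but not reported the outcome would each have contributed a null effect. All numbers below are hypothetical.

        import math

        def pooled(effects, variances):
            """Inverse-variance fixed-effect pooled estimate and its standard error."""
            weights = [1 / v for v in variances]
            estimate = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
            return estimate, math.sqrt(1 / sum(weights))

        # Hypothetical log odds ratios and variances from trials that reported the outcome.
        reported_effects = [-0.4, -0.6, -0.3]
        reported_vars = [0.04, 0.09, 0.05]

        # Crude bound: assume two trials that measured but did not report the outcome
        # would each have contributed a null effect with a plausible variance.
        null_effects = [0.0, 0.0]
        null_vars = [0.06, 0.06]

        print("reported trials only:", pooled(reported_effects, reported_vars))
        print("with null imputation:", pooled(reported_effects + null_effects,
                                              reported_vars + null_vars))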

    Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 Extension

    Background: Randomised controlled trials (RCTs) are used to evaluate social and psychological interventions and inform policy decisions about them. Accurate, complete, and transparent reports of social and psychological intervention RCTs are essential for understanding their design, conduct, results, and the implications of the findings. However, the reporting of RCTs of social and psychological interventions remains suboptimal. The CONSORT Statement has improved the reporting of RCTs in biomedicine. A similar high-quality guideline is needed for the behavioural and social sciences. Our objective was to develop an official extension of the Consolidated Standards of Reporting Trials 2010 Statement (CONSORT 2010) for reporting RCTs of social and psychological interventions: CONSORT-SPI 2018.

    Methods: We followed best practices in developing the reporting guideline extension. First, we conducted a systematic review of existing reporting guidelines. We then conducted an online Delphi process including 384 international participants. In March 2014, we held a 3-day consensus meeting of 31 experts to determine the content of a checklist specifically targeting social and psychological intervention RCTs. Experts discussed previous research and methodological issues of particular relevance to social and psychological intervention RCTs. They then voted on proposed modifications or extensions of items from CONSORT 2010.

    Results: The CONSORT-SPI 2018 checklist extends 9 of the 25 items from CONSORT 2010: background and objectives, trial design, participants, interventions, statistical methods, participant flow, baseline data, outcomes and estimation, and funding. In addition, participants added a new item related to stakeholder involvement, and they modified aspects of the flow diagram related to participant recruitment and retention.

    Conclusions: Authors should use CONSORT-SPI 2018 to improve reporting of their social and psychological intervention RCTs. Journals should revise editorial policies and procedures to require use of reporting guidelines by authors and peer reviewers to produce manuscripts that allow readers to appraise study quality, evaluate the applicability of findings to their contexts, and replicate effective interventions.

    A systematic review of the use of an expertise-based randomised controlled trial design

    Acknowledgements: JAC held a Medical Research Council UK methodology fellowship (G1002292), which supported this research. The Health Services Research Unit, Institute of Applied Health Sciences (University of Aberdeen), is core-funded by the Chief Scientist Office of the Scottish Government Health and Social Care Directorates. Views expressed are those of the authors and do not necessarily reflect the views of the funders.