
    Bibliometrics of systematic reviews: analysis of citation rates and journal impact factors

    Background: Systematic reviews are important for informing clinical practice and health policy. The aim of this study was to examine the bibliometrics of systematic reviews and to determine the amount of variance in citations predicted by the journal impact factor (JIF) alone and combined with several other characteristics. Methods: We conducted a bibliometric analysis of 1,261 systematic reviews published in 2008 and the citations to them in the Scopus database from 2008 to June 2012. Potential predictors of the citation impact of the reviews were examined using descriptive, univariate and multiple regression analysis. Results: The mean number of citations per review over four years was 26.5 (SD ±29.9), or 6.6 citations per review per year. The mean JIF of the journals in which the reviews were published was 4.3 (SD ±4.2). We found that 17% of the reviews accounted for 50% of the total citations and 1.6% of the reviews were not cited. The number of authors was correlated with the number of citations (r = 0.215). Some reviews published in journals with a JIF in the top quartile (>5.16) received citations in the bottom quartile (eight or fewer), whereas 9% of reviews published in the lowest JIF quartile (≤2.06) received citations in the top quartile (34 or more). Six percent of reviews in journals with no JIF were also in the first quartile of citations. Conclusions: The JIF predicted over half of the variation in citations to the systematic reviews. However, the distribution of citations was markedly skewed. Some reviews in journals with low JIFs were well cited and others in higher-JIF journals received relatively few citations; hence the JIF did not accurately represent the number of citations to individual systematic reviews.
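
    The predictive analysis described above is essentially a linear regression of citation counts on the JIF, alone and together with other review characteristics. The Python sketch below illustrates that kind of model on simulated data; the variable names, simulated distributions and coefficients are assumptions for illustration, not the study's data or code.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1261                                               # number of reviews in the study
        jif = rng.gamma(shape=2.0, scale=2.15, size=n)         # assumed JIF distribution
        authors = rng.integers(1, 12, size=n)                  # assumed author counts
        citations = 3.0 * jif + 0.8 * authors + rng.gamma(2.0, 5.0, size=n)  # toy outcome

        def r_squared(X, y):
            """Variance in y explained by an OLS fit on X (with intercept)."""
            X1 = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            resid = y - X1 @ beta
            return 1.0 - resid.var() / y.var()

        print("R^2, JIF alone:       ", round(r_squared(jif[:, None], citations), 3))
        print("R^2, JIF + authors:   ", round(r_squared(np.column_stack([jif, authors]), citations), 3))
        print("r(authors, citations):", round(np.corrcoef(authors, citations)[0, 1], 3))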

    A review of RCTs in four medical journals to assess the use of imputation to overcome missing data in quality of life outcomes

    Background: Randomised controlled trials (RCTs) are perceived as the gold-standard method for evaluating healthcare interventions, and increasingly include quality of life (QoL) measures. The observed results are susceptible to bias if a substantial proportion of outcome data are missing. The review aimed to determine whether imputation was used to deal with missing QoL outcomes. Methods: A random selection of 285 RCTs published during 2005/6 in the British Medical Journal, Lancet, New England Journal of Medicine and Journal of the American Medical Association was identified. Results: QoL outcomes were reported in 61 (21%) trials. Six (10%) reported having no missing data, 20 (33%) reported ≤10% missing, eleven (18%) reported 11%–20% missing, and eleven (18%) reported >20% missing. Missingness was unclear in 13 (21%). Missing data were imputed in 19 (31%) of the 61 trials. Imputation was part of the primary analysis in 13 trials, but a sensitivity analysis in six. Last value carried forward was used in 12 trials and multiple imputation in two. Following imputation, the most common analysis method was analysis of covariance (10 trials). Conclusion: The majority of studies did not impute missing data and carried out a complete-case analysis. For those studies that did impute missing data, researchers tended to prefer simpler methods of imputation, despite more sophisticated methods being available. The Health Services Research Unit is funded by the Chief Scientist Office of the Scottish Government Health Directorate. Shona Fielding is also currently funded by the Chief Scientist Office on a Research Training Fellowship (CZF/1/31).
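
    Of the imputation approaches the review found, last value carried forward (LOCF) was the most common. The Python sketch below contrasts a complete-case analysis with LOCF on a toy repeated QoL measure; the data frame, column names and values are hypothetical and only illustrate the two strategies.

        import numpy as np
        import pandas as pd

        # Toy trial data: QoL measured at baseline, 3 and 6 months; later values partly missing.
        qol = pd.DataFrame({
            "patient": [1, 2, 3, 4, 5],
            "qol_0m":  [55.0, 60.0, 48.0, 70.0, 62.0],
            "qol_3m":  [58.0, np.nan, 50.0, 72.0, 61.0],
            "qol_6m":  [np.nan, np.nan, 53.0, 75.0, np.nan],
        })

        # Complete-case analysis: keep only patients observed at every time point.
        complete_cases = qol.dropna()

        # LOCF: carry the last observed value forward across the time-point columns.
        timepoints = ["qol_0m", "qol_3m", "qol_6m"]
        locf = qol.copy()
        locf[timepoints] = locf[timepoints].ffill(axis=1)

        # A more principled alternative would be multiple imputation (e.g. chained equations).
        print("Complete cases:", len(complete_cases), "of", len(qol))
        print(locf)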

    Do Interventions Designed to Support Shared Decision-Making Reduce Health Inequalities? A Systematic Review and Meta-Analysis

    Copyright: © 2014 Durand et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Background: Increasing patient engagement in healthcare has become a health policy priority. However, there has been concern that promoting supported shared decision-making (SDM) could increase health inequalities. Objective: To evaluate the impact of SDM interventions on disadvantaged groups and health inequalities. Design: Systematic review and meta-analysis of randomised controlled trials and observational studies. Peer reviewed.
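
    The pooling step of a meta-analysis like this is typically an inverse-variance random-effects model. The Python sketch below shows DerSimonian-Laird pooling of standardised mean differences; the effect sizes and standard errors are invented and do not come from this review, and the review's actual model choice is not assumed.

        import numpy as np

        smd = np.array([0.30, 0.45, 0.10, 0.55])   # hypothetical per-study standardised mean differences
        se  = np.array([0.12, 0.20, 0.15, 0.25])   # hypothetical standard errors

        w = 1.0 / se**2                            # inverse-variance (fixed-effect) weights
        pooled_fixed = np.sum(w * smd) / np.sum(w)

        # DerSimonian-Laird estimate of the between-study variance tau^2
        q = np.sum(w * (smd - pooled_fixed) ** 2)
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - (len(smd) - 1)) / c)

        w_re = 1.0 / (se**2 + tau2)                # random-effects weights
        pooled = np.sum(w_re * smd) / np.sum(w_re)
        pooled_se = np.sqrt(1.0 / np.sum(w_re))
        print(f"Pooled SMD {pooled:.2f} "
              f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")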

    Quality of reporting according to the CONSORT, STROBE and Timmer instrument at the American Burn Association (ABA) annual meetings 2000 and 2008

    Background: The quality of oral and poster conference presentations differs. We hypothesized that the quality of reporting is better in oral abstracts than in poster abstracts at the American Burn Association (ABA) conference meeting. Methods: All 511 abstracts (2000: n = 259, 2008: n = 252) from the ABA annual meetings in 2000 and 2008 were screened. RCTs and observational studies were analyzed by two independent examiners regarding study design and quality of reporting: randomized controlled trials (RCTs) by the CONSORT criteria, observational studies by the STROBE criteria, and both additionally by the Timmer instrument. Results: Overall, 13 RCTs in 2000 and 9 in 2008, and 77 observational studies in 2000 and 98 in 2008, were identified. Of the presented abstracts, 5% (oral: 7% (n = 9) vs. poster: 3% (n = 4)) in 2000 and 4% (oral: 5% (n = 7) vs. poster: 2% (n = 2)) in 2008 were randomized controlled trials. The number of observational and experimental studies accepted for presentation did not differ significantly between oral and poster presentations in either year. Reporting quality of RCTs for oral vs. poster abstracts was, in 2000, CONSORT 7.2 ± 0.8 vs. 7 ± 0 (p = 0.615, CI -0.72 to 1.16) and Timmer 7.8 ± 0.7 vs. 7.5 ± 0.6, and, in 2008, CONSORT 7.2 ± 1.4 vs. 6.5 ± 1 and Timmer 9.7 ± 1.1 vs. 9.5 ± 0.7. While in 2000 oral and poster abstracts of observational studies did not differ significantly in reporting quality according to STROBE (8.3 ± 1.7 vs. 8.9 ± 1.6, p = 0.977, CI -37.3 to 36.3; Timmer 8.6 ± 1.5 vs. 8.5 ± 1.4, p = 0.712, CI -0.44 to 0.64), in 2008 oral observational abstracts scored significantly better than posters (STROBE 9.4 ± 1.9 vs. 8.5 ± 2, p = 0.005, CI 0.28 to 1.54; Timmer 9.4 ± 1.4 vs. 8.6 ± 1.7, p = 0.013, CI 0.32 to 1.28). Conclusions: Poster abstract reporting quality at the American Burn Association annual meetings in 2000 and 2008 is not necessarily inferior to that of oral abstracts as far as study design and reporting quality of clinical trials are concerned; the primary hypothesis has to be rejected. However, endorsement of the comprehensive use of the CONSORT and STROBE criteria might further increase the quality of reporting of ABA conference abstracts in the future.
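
    The oral-versus-poster comparisons above are two-sample comparisons of mean reporting-quality scores with confidence intervals. The Python sketch below runs that kind of test on invented score vectors; it is not the authors' analysis, the exact test they used is not assumed, and the numbers are illustrative only.

        import numpy as np
        from scipy import stats

        # Hypothetical STROBE scores for oral and poster abstracts of observational studies.
        oral   = np.array([9, 10, 11, 8, 9, 10, 12, 9, 8, 11], dtype=float)
        poster = np.array([8, 9, 7, 9, 8, 10, 8, 7, 9, 8], dtype=float)

        t, p = stats.ttest_ind(oral, poster, equal_var=False)   # Welch's two-sample t-test

        diff = oral.mean() - poster.mean()
        se = np.sqrt(oral.var(ddof=1) / len(oral) + poster.var(ddof=1) / len(poster))
        print(f"mean difference {diff:.2f}, t = {t:.2f}, p = {p:.3f}, "
              f"approx. 95% CI {diff - 1.96 * se:.2f} to {diff + 1.96 * se:.2f}")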

    Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews

    BACKGROUND: Our objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus. METHODS: A 37-item assessment tool was formed by combining 1) the enhanced Overview Quality Assessment Questionnaire (OQAQ), 2) a checklist created by Sacks, and 3) three additional items recently judged to be of methodological importance. This tool was applied to 99 paper-based and 52 electronic systematic reviews. Exploratory factor analysis was used to identify underlying components. The results were considered by methodological experts using a nominal group technique aimed at item reduction and design of an assessment tool with face and content validity. RESULTS: The factor analysis identified 11 components. From each component, one item was selected by the nominal group. The resulting instrument was judged to have face and content validity. CONCLUSION: A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed. The tool consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews. Additional studies are needed with a focus on the reproducibility and construct validity of AMSTAR, before strong recommendations can be made on its use.
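
    The item-reduction step described above combines exploratory factor analysis with expert judgement. The Python sketch below shows only the statistical half of that workflow on a randomly generated item matrix; the use of scikit-learn's FactorAnalysis, the binary item coding and the selection rule are assumptions for illustration, not the AMSTAR authors' procedure.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(1)
        n_reviews, n_items = 151, 37               # 99 paper-based + 52 electronic reviews, 37 items
        items = rng.integers(0, 2, size=(n_reviews, n_items)).astype(float)   # toy yes/no ratings

        fa = FactorAnalysis(n_components=11, random_state=0)   # 11 components, as in the abstract
        fa.fit(items)

        # For each component, the item with the largest absolute loading is a natural
        # candidate to carry forward to the nominal-group discussion.
        top_items = np.abs(fa.components_).argmax(axis=1)
        print("Candidate item index per component:", top_items)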

    Effects of the search technique on the measurement of the change in quality of randomized controlled trials over time in the field of brain injury

    BACKGROUND: The aim of this study was to determine whether the search technique used to sample randomized controlled trial (RCT) manuscripts from a field of medical science can influence the measurement of the change in quality over time in that field. METHODS: RCT manuscripts in the field of brain injury were identified using two readily available search techniques: (1) a PubMed MEDLINE search, and (2) the Cochrane Injuries Group (CIG) trials registry. Seven criteria of quality were assessed in each manuscript and related to the year of publication of the RCT manuscripts by regression analysis. RESULTS: No change in the frequency of reporting of any individual quality criterion was found in the sample of RCT manuscripts identified by the PubMed MEDLINE search. In the RCT manuscripts of the CIG trials registry, three of the seven criteria showed significant or near-significant increases over time. CONCLUSIONS: We demonstrated that measuring the change in quality over time of a sample of RCT manuscripts from the field of brain injury can be greatly affected by the search technique. This poorly recognized factor may make measurements of the change in RCT quality over time within a given field of medical science unreliable.
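
    The trend analysis described above relates the reporting of each quality criterion (present or absent in a manuscript) to year of publication. The Python sketch below fits a logistic regression for a single criterion on simulated data; the sample size, trend strength and use of statsmodels are assumptions for illustration, not the study's analysis.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        year = rng.integers(1990, 2010, size=120)                 # toy publication years
        p_report = 1.0 / (1.0 + np.exp(-0.08 * (year - 2000)))    # assumed upward trend in reporting
        reported = rng.binomial(1, p_report)                      # criterion reported? 1 = yes, 0 = no

        X = sm.add_constant(year - 2000)          # centre year so the intercept is interpretable
        fit = sm.Logit(reported, X).fit(disp=0)
        print("intercept, slope:", np.round(fit.params, 3))       # positive slope = improving over time
        print("p-value for trend:", round(fit.pvalues[1], 4))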