
    Bias in the journal impact factor

    The ISI journal impact factor (JIF) is based on a sample that may represent half the whole-of-life citations to some journals, but a small fraction (<10%) of the citations accruing to other journals. This disproportionate sampling means that the JIF provides a misleading indication of the true impact of journals, biased in favour of journals that have a rapid rather than a prolonged impact. Many journals exhibit a consistent pattern of citation accrual from year to year, so it may be possible to adjust the JIF to provide a more reliable indication of a journal's impact. Comment: 9 pages, 8 figures; one reference corrected.
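
    The two-year JIF itself is easy to compute, which makes the windowing bias easy to illustrate. Below is a minimal Python sketch, with made-up numbers and hypothetical journals, of the standard calculation and of the kind of adjustment the abstract suggests: rescaling the JIF by the estimated fraction of whole-of-life citations that the two-year window captures.

        # Sketch: standard 2-year JIF and a window-coverage adjustment.
        # All numbers are illustrative, not real journal data.

        def jif(citations_to_prev_2yrs, citable_items_prev_2yrs):
            """Garfield-style 2-year impact factor: citations received this
            year to items from the two preceding years, divided by the
            number of citable items published in those two years."""
            return citations_to_prev_2yrs / citable_items_prev_2yrs

        def adjusted_jif(raw_jif, window_coverage):
            """Hypothetical adjustment: divide by the estimated fraction of
            whole-of-life citations falling inside the 2-year window, so
            journals with prolonged impact are not penalised."""
            return raw_jif / window_coverage

        fast = jif(900, 300)   # 3.0; window captures ~50% of lifetime citations
        slow = jif(300, 300)   # 1.0; window captures only ~10%

        print(adjusted_jif(fast, 0.50))   # 6.0
        print(adjusted_jif(slow, 0.10))   # 10.0 -- the ranking reverses

    Under this toy correction the "slow" journal overtakes the "fast" one, which is exactly the kind of bias the abstract attributes to the fixed sampling window.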

    McCall's Area Transformation versus the Integrated Impact Indicator (I3)

    In a study entitled "Skewed Citation Distributions and Bias Factors: Solutions to two core problems with the journal impact factor," Mutz & Daniel (2012) propose (i) McCall's (1922) Area Transformation of the skewed citation distribution so that the data can be considered normally distributed (Krus & Kennedy, 1977), and (ii) to control for different document types as a covariate (Rubin, 1977). This approach provides an alternative to Leydesdorff & Bornmann's (2011) Integrated Impact Indicator (I3). As the authors note, the two approaches are akin. Can something be said about the relative quality of the two approaches? To that end, I replicated the study of Mutz & Daniel for the 11 journals in the Subject Category "mathematical psychology," but additionally using I3 on the basis of continuous quantiles (Leydesdorff & Bornmann, in press) and its variant PR6 based on the six percentile rank classes distinguished by Bornmann & Mutz (2011), as follows: the top-1%, 95-99%, 90-95%, 75-90%, 50-75%, and bottom-50%. Comment: Letter to the Editor of the Journal of Informetrics in reaction to: Mutz, R., & Daniel, H.-D. (2012). Skewed Citation Distributions and Bias Factors: Solutions to two core problems with the journal impact factor. Journal of Informetrics 6(2), 169-17
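
    The six percentile rank classes are straightforward to operationalise. The Python sketch below, on synthetic citation counts, assigns each paper to one of the six classes and sums class weights into a PR6-style journal score; the weights 1 (bottom-50%) through 6 (top-1%) follow the usual reading of Bornmann & Mutz (2011) and should be treated as an assumption here.

        # Sketch: PR6-style score from six percentile rank classes.
        # Citation counts are synthetic; weights 1..6 are assumed.
        from bisect import bisect_right

        BOUNDS  = [50, 75, 90, 95, 99]   # lower bounds of classes 2..6
        WEIGHTS = [1, 2, 3, 4, 5, 6]     # bottom-50% ... top-1%

        def percentile(value, reference):
            """Share of the reference distribution strictly below value."""
            return 100.0 * sum(1 for r in reference if r < value) / len(reference)

        def pr6(journal_cites, field_cites):
            """Sum of percentile-rank-class weights over a journal's papers."""
            return sum(WEIGHTS[bisect_right(BOUNDS, percentile(c, field_cites))]
                       for c in journal_cites)

        field   = list(range(200))           # synthetic field-wide counts
        journal = [0, 3, 12, 45, 150, 199]   # synthetic journal
        print(pr6(journal, field))           # 13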

    Associated factors and consequences of risk of bias in randomized controlled trials of yoga: A systematic review

    Background: Bias in randomized controlled trials (RCTs) of complementary therapy interventions seems to be associated with specific factors and to potentially distort the studies' conclusions. This systematic review assessed factors associated with risk of bias, and the consequences for the studies' conclusions, in RCTs of yoga as one of the most commonly used complementary therapies. Methods: Medline/PubMed, Scopus, IndMED and the Cochrane Library were searched through February 2014 for yoga RCTs. Risk of selection bias was assessed using the Cochrane tool and regressed on a) publication year; b) country of origin; c) journal type; and d) impact factor using multiple logistic regression analysis. Likewise, the authors' conclusions were regressed on risk of bias. Results: A total of 312 RCTs were included. Impact factor ranged from 0.0 to 39.2 (median = 1.3); 60 RCTs (19.2%) had a low risk of selection bias, and 252 (80.8%) had a high or unclear risk of selection bias. Only publication year and impact factor significantly predicted low risk of bias; RCTs published after 2001 (adjusted odds ratio (OR) = 12.6; 95% confidence interval (CI) = 1.7, 94.0; p<0.001) and those published in journals with an impact factor (adjusted OR = 2.6; 95% CI = 1.4, 4.9; p = 0.004) were more likely to have a low risk of bias. The authors' conclusions were not associated with risk of bias. Conclusions: Risk of selection bias was generally high in RCTs of yoga, although the situation has improved since the publication of the revised CONSORT Statement in 2001. Pre-CONSORT RCTs and those published in journals without an impact factor should be handled with increased care, although risk of bias is unlikely to distort the RCTs' conclusions.
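
    The modelling step described here is a standard multiple logistic regression. A minimal Python sketch follows, with synthetic data standing in for the 312 RCTs and with variable names of our own choosing; it regresses a binary "low risk of selection bias" outcome on publication period and impact-factor availability and reports adjusted odds ratios.

        # Sketch: multiple logistic regression of low risk of bias on
        # trial characteristics. Data are synthetic, not the study's.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 312
        post_2001 = rng.integers(0, 2, n)       # published after CONSORT 2001?
        has_if    = rng.integers(0, 2, n)       # journal has an impact factor?
        logit_p   = -2.5 + 2.5 * post_2001 + 1.0 * has_if
        low_rob   = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

        X = sm.add_constant(np.column_stack([post_2001, has_if]))
        fit = sm.Logit(low_rob, X).fit(disp=False)
        print(np.exp(fit.params[1:]))   # adjusted odds ratios
        print(fit.conf_int())           # confidence intervals (log-odds scale)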

    The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations

    The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores, and between assessor score and the number of citations, is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular, subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
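
    The journal-level bias the authors describe can be probed by removing journal effects before correlating. Below is a rough Python sketch, on synthetic data with an assumed latent-merit model of our own construction, of why between-assessor correlations shrink once scores are centred within journal.

        # Sketch: assessor-score correlations before and after removing
        # the shared journal effect. All data are synthetic.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(1)
        n = 500
        journal_if = rng.choice([2.0, 5.0, 30.0], n)   # journal "prestige"
        merit      = rng.normal(size=n)                # latent paper merit
        score_a = merit + 0.8 * np.log(journal_if) + rng.normal(size=n)
        score_b = merit + 0.8 * np.log(journal_if) + rng.normal(size=n)

        def centre(x, group):
            """Subtract the group (journal) mean from each value."""
            out = x.astype(float).copy()
            for g in np.unique(group):
                out[group == g] -= x[group == g].mean()
            return out

        print(spearmanr(score_a, score_b)[0])   # inflated by the journal effect
        print(spearmanr(centre(score_a, journal_if),
                        centre(score_b, journal_if))[0])   # much weaker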

    The methodological quality of 176,620 randomized controlled trials published between 1966 and 2018 reveals a positive trend but also an urgent need for improvement

    Many randomized controlled trials (RCTs) are biased and difficult to reproduce due to methodological flaws and poor reporting. There is increasing attention for responsible research practices and implementation of reporting guidelines, but whether these efforts have improved the methodological quality of RCTs (e.g., lower risk of bias) is unknown. We therefore mapped risk-of-bias trends over time in RCT publications in relation to journal and author characteristics. Meta-information of 176,620 RCTs published between 1966 and 2018 was extracted. The risk-of-bias probability (random sequence generation, allocation concealment, blinding of patients/personnel, and blinding of outcome assessment) was assessed using a risk-of-bias machine learning tool. This tool was simultaneously validated using 63,327 human risk-of-bias assessments obtained from 17,394 RCTs evaluated in the Cochrane Database of Systematic Reviews (CDSR). Moreover, RCT registration and CONSORT Statement reporting were assessed using automated searches. Publication characteristics included the number of authors, journal impact factor (JIF), and medical discipline. The annual number of published RCTs increased substantially over four decades, accompanied by increases in the number of authors (5.2 to 7.8) and institutions (2.9 to 4.8) per RCT. The risk of bias remained present in most RCTs but decreased over time for allocation concealment (63% to 51%), random sequence generation (57% to 36%), and blinding of outcome assessment (58% to 52%). Trial registration (37% to 47%) and the use of the CONSORT Statement (1% to 20%) also rapidly increased. In journals with a higher impact factor (>10), the risk of bias was consistently lower, with higher levels of RCT registration and use of the CONSORT Statement. Automated risk-of-bias predictions had accuracies above 70% for allocation concealment (70.7%), random sequence generation (72.1%), and blinding of patients/personnel (79.8%), but not for blinding of outcome assessment (62.7%). In conclusion, the likelihood of bias in RCTs has generally decreased over the last decades. This optimistic trend may be driven by increased knowledge augmented by mandatory trial registration and more stringent reporting guidelines and journal requirements. Nevertheless, relatively high probabilities of bias remain, particularly in journals with lower impact factors. This emphasizes that further improvement of RCT registration, conduct, and reporting is still urgently needed.
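
    Validating the machine-learning tool amounts to comparing its predictions with the human Cochrane assessments, domain by domain. A minimal Python sketch with synthetic labels (the study used 63,327 CDSR assessments) is shown below.

        # Sketch: accuracy of automated risk-of-bias predictions against
        # human labels for one bias domain. Labels are synthetic.
        import numpy as np
        from sklearn.metrics import accuracy_score, confusion_matrix

        rng = np.random.default_rng(2)
        human   = rng.integers(0, 2, 1000)        # 1 = low risk of bias
        flipped = rng.random(1000) < 0.28         # tool errs ~28% of the time
        machine = np.where(flipped, 1 - human, human)

        print(accuracy_score(human, machine))     # ~0.72
        print(confusion_matrix(human, machine))   # where the tool errs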

    Refining the H-Index

    Braun and colleagues recently examined the utility of the h-index (the number h of papers, each of which is cited at least h times) for assessing the impact of journals, and drew attention to some differences between the top 21 journals ranked according to the h-index and according to the journal impact factor. Their 4-year window, however, is inadequate. Data from the Web of Science suggest that the h-index for journals increases more or less linearly with time until it plateaus at about twice the cited half-life, so it may be possible to base comparisons on a standard window (e.g., 3 years, to be comparable with the journal impact factor), standardized by multiplying by the cited half-life divided by the width of the window (e.g., 3 years). Such an adjustment to the top 21 journals in Braun's table would promote the Journal of the American Chemical Society several places (from rank 20 to rank 6, if no external candidates are considered) and demote Nature Medicine (because of its youth, it has a short cited half-life). The use of a standard interval, without regard for the publication frequency of the journal or the nature of the discipline, introduces bias into both the journal impact factor and the h-index when applied to journals.
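
    The proposed standardisation reduces to one line of arithmetic once the h-index is computed over a fixed window. A short Python sketch with invented citation counts:

        # Sketch: h-index over a fixed window, standardised by cited
        # half-life as the abstract proposes. Citation counts are invented.

        def h_index(citations):
            """Largest h such that h papers have at least h citations each."""
            for h, c in enumerate(sorted(citations, reverse=True), start=1):
                if c < h:
                    return h - 1
            return len(citations)

        def standardised_h(cites_in_window, cited_half_life, window=3.0):
            """h over a 3-year window, scaled by half-life / window width."""
            return h_index(cites_in_window) * cited_half_life / window

        young  = [40, 35, 22, 18, 9, 5, 2]    # short half-life journal
        mature = [30, 26, 21, 17, 12, 8, 3]   # long half-life journal
        print(standardised_h(young,  cited_half_life=3.0))   # 5.0, unchanged
        print(standardised_h(mature, cited_half_life=9.0))   # 18.0, boosted 3x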

    Are Scientists Nearsighted Gamblers? The Misleading Nature of Impact Factors

    Despite a “Cambrian” explosion in the number of citation metrics used (Van Noorden, 2010), the impact factor (IF) of a journal remains a decisive factor of choice when publishing your ultimate research results and evaluating research productivity. Most other citation metrics correlate with the IF and there is little doubt that they reflect the overall impact of different journals. However, there is good reason to be more cautious about IF judgments. First, the distribution of the number of citations per paper (NCPP) within a journal is heavily skewed. A few highly cited papers often account for a significant share of the total citation count of a journal (25% of the papers in Nature account for 89% of the IF; “Not-so-deep impact,” 2005), and a recent report highlighted that even a single article can dramatically bias the IF of a small journal (Dimitrov et al., 2010). The mean NCPP, as captured by the IF, should therefore never be used. A more appropriate measure is the median NCPP. Figure 1 (left) plots the median of the total NCPP against the mean, for three potential publication outlets for psychologists: Psychological Review (IF2009 = 9.1), Nature (IF2009 = 34.5)
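
    The mean-versus-median point is easy to reproduce. The Python sketch below draws a synthetic lognormal citation distribution, adds a handful of blockbuster papers, and shows how far the mean (the quantity the IF tracks) drifts from what a typical paper receives.

        # Sketch: mean vs median of a skewed citation distribution.
        # Synthetic data; parameters chosen only to produce a heavy tail.
        import numpy as np

        rng = np.random.default_rng(3)
        typical      = rng.lognormal(mean=1.0, sigma=1.0, size=995).round()
        blockbusters = np.array([800, 950, 1200, 1500, 2100])
        cites = np.concatenate([typical, blockbusters])

        print(cites.mean())      # dragged upward by the tail, like the IF
        print(np.median(cites))  # what the typical paper actually receives
        print(blockbusters.sum() / cites.sum())  # citation share of 0.5% of papers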

    Assessment of publication bias and outcome reporting bias in systematic reviews of health services and delivery research: A meta-epidemiological study

    Strategies to identify and mitigate publication bias and outcome reporting bias are frequently adopted in systematic reviews of clinical interventions, but it is not clear how often they are applied in systematic reviews relating to quantitative health services and delivery research (HSDR). We examined whether these biases are mentioned and/or otherwise assessed in HSDR systematic reviews, and evaluated associated factors to inform future practice. We randomly selected 200 quantitative HSDR systematic reviews published in the English language from 2007-2017 from the Health Systems Evidence database (www.healthsystemsevidence.org). We extracted data on factors that may influence whether or not authors mention and/or assess publication bias or outcome reporting bias. We found that 43% (n = 85) of the reviews mentioned publication bias and 10% (n = 19) formally assessed it. Outcome reporting bias was mentioned and assessed in 17% (n = 34) of all the systematic reviews. An insufficient number of studies, heterogeneity and a lack of pre-registered protocols were the most commonly reported impediments to assessing the biases. In multivariable logistic regression models, both mentioning and formally assessing publication bias were associated with inclusion of a meta-analysis; being a review of intervention rather than association studies; higher journal impact factor; and reporting the use of systematic review guidelines. Assessment of outcome reporting bias was associated with being an intervention review; authors reporting the use of Grading of Recommendations, Assessment, Development and Evaluations (GRADE); and inclusion of only controlled trials. Publication bias and outcome reporting bias are infrequently assessed in HSDR systematic reviews. This may reflect the inherent heterogeneity of HSDR evidence, different methodological approaches to synthesising the evidence, lack of awareness of such biases, limits of current tools and a lack of pre-registered study protocols for assessing such biases. Strategies to help raise awareness of the biases, and methods to minimise their occurrence and mitigate their impacts on HSDR systematic reviews, are needed.
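
    For readers unfamiliar with what a formal assessment looks like: one widely used option (a standard method in the meta-analysis literature, not one tied to this particular review) is Egger's regression test for funnel-plot asymmetry. A hedged Python sketch on synthetic effect sizes:

        # Sketch: Egger's regression test for funnel-plot asymmetry, a
        # common formal check for publication bias. Effects are synthetic.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        k = 40
        se = rng.uniform(0.05, 0.5, k)       # study standard errors
        effect = rng.normal(0.2, se)         # true underlying effect: 0.2
        effect += 0.8 * se                   # injected small-study bias

        # Regress the standardised effect on precision; an intercept far
        # from zero suggests funnel-plot asymmetry.
        fit = sm.OLS(effect / se, sm.add_constant(1.0 / se)).fit()
        print(fit.params[0], fit.pvalues[0])   # Egger intercept and p-value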

    The generalized propensity score methodology for estimating unbiased journal impact factors

    The journal impact factor (JIF) proposed by Garfield in 1955 is one of the most commonly used and prominent citation-based indicators of the performance and significance of a scientific journal. The JIF is simple, reasonable, clearly defined, and comparable over time and, what is more, can be easily calculated from data provided by Thomson Reuters; however, this simplicity comes at the expense of serious technical and methodological flaws. The paper discusses one of the core problems: the JIF is affected by bias factors (e.g., document type) that have nothing to do with the prestige or quality of a journal. To solve this problem, we suggest using the generalized propensity score methodology based on the Rubin Causal Model. Citation data for papers of all journals in the ISI subject category “Microscopy” (Journal Citation Reports) are used to illustrate the proposal.
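
    As a rough illustration of the propensity-score idea behind the proposal, the Python sketch below adjusts a mean citation rate for a binary "document type" by stratifying on an estimated propensity score. This is only a hint at the method: the paper develops the generalized (continuous-treatment) version, and all data and covariates here are synthetic inventions of ours.

        # Rough sketch: propensity-score stratification to isolate a
        # document-type effect on citations. Binary treatment only; the
        # paper's generalized propensity score extends this idea to
        # continuous treatments. Data are synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        n = 2000
        length = rng.normal(10, 3, n)                   # covariate: page count
        p_review = 1 / (1 + np.exp(-(length - 10) / 2))
        is_review = (rng.random(n) < p_review).astype(int)
        cites = rng.poisson(2 + 4 * is_review + 0.2 * length)

        ps = (LogisticRegression()
              .fit(length[:, None], is_review)
              .predict_proba(length[:, None])[:, 1])    # propensity scores
        strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))

        effects = []
        for s in range(5):                              # quintile strata
            treated = cites[(strata == s) & (is_review == 1)]
            control = cites[(strata == s) & (is_review == 0)]
            if len(treated) and len(control):
                effects.append(treated.mean() - control.mean())
        print(np.mean(effects))   # ~4: citation premium of the document type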

    Determinants of Citation in Epidemiological Studies on Phthalates: A Citation Analysis

    Citing previous publications is an important factor in knowledge development. Because of the great number of publications available, only a selection of studies gets cited, for varying reasons. If the selection of citations is associated with study outcome, this is called citation bias. We study determinants of citation in a broader sense, including, e.g., study design, journal impact factor and the funding source of the publication. As a case study, we assess which factors drive citation in the human literature on phthalates, specifically the metabolite mono(2-ethylhexyl) phthalate (MEHP). A systematic literature search identified all relevant publications on human health effects of MEHP. Data on potential determinants of citation were extracted in duplicate. Specialized software was used to create a citation network, including all potential citation pathways. Random effects logistic regression was used to assess whether these determinants influence the likelihood of citation. A total of 112 publications on MEHP were identified, with 5684 potential citation pathways, of which 551 were actual citations. Reporting of a harmful point estimate, journal impact factor, authority of the author, a male corresponding author, research performed in North America, and self-citation were positively associated with the likelihood of being cited. In the literature on MEHP, citation is mostly driven by a number of factors that are not related to study outcome. Although the identified determinants do not necessarily give strong indications of bias, this shows selective use of published literature for a variety of reasons.
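
    The pathway construction is simple to reproduce: every ordered pair in which the citing paper appeared after the cited one is a potential citation pathway, and the outcome records whether the citation was actually made. A small Python sketch with hypothetical papers:

        # Sketch: building the potential-citation-pathway data set. A
        # pathway runs from paper a to paper b whenever a was published
        # after b; the outcome is whether a actually cites b. The papers
        # and reference lists below are hypothetical.

        papers = {                  # id -> (year, ids actually cited)
            "p1": (2001, set()),
            "p2": (2004, {"p1"}),
            "p3": (2007, {"p1"}),
            "p4": (2010, {"p2", "p3"}),
        }

        pathways = [
            (a, b, int(b in cited_by_a))
            for a, (year_a, cited_by_a) in papers.items()
            for b, (year_b, _) in papers.items()
            if year_a > year_b      # b was available for a to cite
        ]

        print(len(pathways), sum(c for *_, c in pathways))   # 5 potential, 4 actual

    In the study, each such pathway then becomes one row in the random effects logistic regression, with the candidate determinants as covariates.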