
    The weakening relationship between the Impact Factor and papers' citations in the digital age

    Historically, papers were physically bound to the journal in which they were published, but in the electronic age papers are available individually, no longer tied to their respective journals. Hence, papers can now be read and cited on their own merits, independently of the journal's physical availability, reputation, or Impact Factor. We compare the strength of the relationship between journals' Impact Factors and the actual citations received by their respective papers from 1902 to 2009. Throughout most of the 20th century, papers' citation rates were increasingly linked to their respective journals' Impact Factors. However, since 1990, the advent of the digital age, the strength of the relationship between Impact Factors and paper citations has been decreasing. This decrease began sooner in physics, a field that was quicker to make the transition into the electronic domain. Furthermore, since 1990, the proportion of highly cited papers coming from highly cited journals has been decreasing, and accordingly, the proportion of highly cited papers not coming from highly cited journals has been increasing. Should this pattern continue, it might bring an end to the use of the Impact Factor as a way to evaluate the quality of journals, papers, and researchers. Comment: 14 pages, 5 figures

    Citation Counts and Evaluation of Researchers in the Internet Age

    Bibliometric measures derived from citation counts are increasingly being used as a research evaluation tool. Their strengths and weaknesses have been widely analyzed in the literature and are often the subject of vigorous debate. We believe a few fundamental issues related to the impact of the web are not given the attention they deserve. We focus on the evaluation of researchers, but several of our arguments also apply to the evaluation of research institutions, journals, and conferences. Comment: 4 pages, 2 figures, 3 tables

    Are methodological quality and completeness of reporting associated with citation-based measures of publication impact? A secondary analysis of a systematic review of dementia biomarker studies

    Objective: To determine whether methodological and reporting quality are associated with surrogate measures of publication impact in the field of dementia biomarker studies. Methods: We assessed dementia biomarker studies included in a previous systematic review for methodological and reporting quality using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) and Standards for Reporting of Diagnostic Accuracy (STARD) tools, respectively. We extracted additional study- and journal-related data from each publication to account for factors shown to be associated with impact in previous research. We explored associations between potential determinants and measures of publication impact in univariable and stepwise multivariable linear regression analyses. Outcome measures: We aimed to collect data on four measures of publication impact: two traditional measures, the average number of citations per year and the 5-year impact factor of the publishing journal, and two alternative measures, the Altmetric Attention Score and counts of electronic downloads. Results: The systematic review included 142 studies. Due to limited data, Altmetric Attention Scores and electronic downloads were excluded from the analysis, leaving the traditional metrics as the only analysed outcome measures. We found no relationship between QUADAS and the traditional metrics. Citation rates were independently associated with 5-year journal impact factor (β=0.42; p<0.001), journal subject area (β=0.39; p<0.001), number of years since publication (β=-0.29; p<0.001) and STARD (β=0.13; p<0.05). Independent determinants of 5-year journal impact factor were citation rates (β=0.45; p<0.001), a statement on conflicts of interest (β=0.22; p<0.01) and baseline sample size (β=0.15; p<0.05). Conclusions: Citation rates and 5-year journal impact factor appear to measure different dimensions of impact. Citation rates were weakly associated with completeness of reporting, while neither traditional metric was related to methodological rigour. Our results suggest that neither high publication usage nor a prestigious journal outlet guarantees quality, and readers should critically appraise all papers regardless of presumed impact.

    Bibliometric Indicators of Young Authors in Astrophysics: Can Later Stars be Predicted?

    We test 16 bibliometric indicators with respect to their validity at the level of the individual researcher by estimating their power to predict later successful researchers. We compare the indicators, measured before their first landmark paper, of a sample of astrophysics researchers who later co-authored highly cited papers with the distributions of these indicators over a random control group of young authors in astronomy and astrophysics. We find that field and citation-window normalisation substantially improves the predictive power of citation indicators. The two indicators of total influence based on citation numbers normalised with expected citation numbers are the only indicators that show differences between later stars and random authors significant at the 1% level. Indicators of paper output are not very useful for predicting later stars. The famous h-index does not distinguish at all between later stars and the random control group. Comment: 14 pages, 10 figures
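The field and citation-window normalisation that this abstract credits with improving predictive power amounts, at its simplest, to dividing a paper's citation count by the expected count for its field and publication year. A minimal sketch, with invented baseline values:

```python
# Expected citation counts per (field, publication year); invented baselines,
# not the reference values used in the paper.
expected = {("astro-ph", 2001): 12.0, ("astro-ph", 2002): 10.0}

def normalized_citations(field, year, citations):
    """Citations relative to the field/year baseline; 1.0 means 'as expected'."""
    return citations / expected[(field, year)]
```

A 2001 paper cited 30 times against a baseline of 12 scores 2.5, i.e. two and a half times the expected influence, making papers from different fields and citation windows comparable.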

    The influence of online posting dates on the bibliometric indicators of scientific articles

    This article analyses the difference in timing between the online availability of articles and their corresponding print publication, and how it affects two bibliometric indicators: the Journal Impact Factor (JIF) and the Immediacy Index. The research examined 18,526 articles, the complete collection of articles and reviews published by a set of 61 journals on Urology and Nephrology in 2013 and 2014. The findings suggest that Advance Online Publication (AOP) accelerates the citation of articles and affects the JIF and Immediacy Index values. Regarding the JIF values, the comparison between journals with and without AOP showed statistically significant differences (P=0.001, Mann-Whitney U test). The Spearman's correlation between the JIF and the median online-to-print publication delay was not statistically significant. As to the Immediacy Index, a significant Spearman's correlation (rs=0.280, P=0.029) with the median online-to-print publication delay was found for journals published in 2014, although no statistically significant correlation was found for those published in 2013. Most of the journals examined (n=52 out of 61) published their articles in AOP. The analysis also revealed differing publisher practices: eight journals did not include the online posting dates in the full text, and nine journals published articles showing two different online posting dates, one provided on the journal website and another provided by Elsevier's Science Direct. These practices suggest the need for transparency and standardization of the AOP dates of scientific articles when calculating bibliometric indicators for journals.

    Fostering Bibliodiversity in Scholarly Communications: A Call for Action!

    Diversity is an important characteristic of any healthy ecosystem, including scholarly communications. Diversity in services and platforms, funding mechanisms, and evaluation measures will allow the scholarly communication system to accommodate the different workflows, languages, publication outputs, and research topics that support the needs and epistemic pluralism of different research communities. In addition, diversity reduces the risk of vendor lock-in, which inevitably leads to monopoly, monoculture, and high prices. Bibliodiversity has been in steady decline for decades. Far from promoting diversity, the dominant "ecosystem" of scholarly publishing today increasingly resembles what Vandana Shiva (1993) has called the "monocultures of the mind", characterized by the homogenization of publication formats and outlets that are largely owned by a small number of multinational publishers who are far more interested in profit maximization than in the health of the system. Yet a diverse scholarly communications system is essential for addressing the complex challenges we face. As we transition to open access and open science, there is an opportunity to reverse this decline and foster greater diversity in scholarly communications: what the Jussieu Call refers to as bibliodiversity. Bibliodiversity, by its nature, cannot be pursued through a single, unified approach; however, it does require strong coordination in order to avoid a fragmented and siloed ecosystem. Building on the principles outlined in the Jussieu Call, this paper explores the current state of diversity in scholarly communications and issues a call for action, specifying what each community can do, individually and collectively, to support greater bibliodiversity in a more intentional fashion.

    Animal versus human research reporting guidelines impacts: literature analysis reveals citation count bias

    The present study evaluated, for the first time, the citation impacts of human research reporting guidelines in comparison with their animal version counterparts. We also re-examined and extended previous findings indicating that a research reporting guideline is cited more for versions published in journals with higher Impact Factors than for duplicate versions published in journals with lower Impact Factors. The two top-ranked reporting guidelines listed on the Equator Network website (http://www.equator-network.org/) were CONSORT 2010, for parallel-group randomized trials, and STROBE, for observational studies. These two guidelines have animal study versions, REFLECT and STROBE-Vet, respectively. Together with ARRIVE, these five guidelines were searched in the Web of Science Core Collection online database to record their journal metrics and citation data. An association between citation rates and journal Impact Factors existed for the CONSORT guideline for human studies, but not for STROBE or for the guidelines for animal studies. When the Impact Factor was expressed as a journal rank percentile, no association was found except for CONSORT. Guidelines for human studies were cited far more than animal research guidelines, with the CONSORT 2010 and STROBE guidelines being cited 27.1 and 241.0 times more frequently than their animal version counterparts, respectively. In conclusion, while the journal Impact Factor is important, other publishing features also strongly affect scientific manuscript visibility, as represented by citation rate. More effort should be invested to improve the visibility of animal research guidelines.

    High-ranked social science journal articles can be identified from early citation information

    Do citations accumulate too slowly in the social sciences to be used to assess the quality of recent articles? I investigate whether this is the case using citation data for all articles in economics and political science published in 2006 and indexed in the Web of Science. I find that citations in the first two years after publication explain more than half of the variation in cumulative citations received over a longer period. Journal impact factors improve the correlation between the predicted and actual future ranks of journal articles when using citation data from 2006 alone, but the effect declines sharply thereafter. Finally, more than half of the papers in the top 20% in 2012 were already in the top 20% in the year of publication (2006).
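The top-20% overlap this abstract reports can be computed as follows. The citation counts below are invented, and ties at the cutoff are handled here by including every paper at or above it:

```python
# Hypothetical citations for 10 papers: counts in the publication year (2006)
# and cumulative counts six years later (2012). Invented numbers.
early = [12, 9, 1, 0, 3, 7, 2, 5, 0, 4]
final = [80, 30, 5, 2, 20, 70, 8, 30, 1, 25]

def top_set(scores, fraction=0.2):
    """Indices of papers in the top `fraction` by score (ties included)."""
    k = max(1, int(len(scores) * fraction))
    cutoff = sorted(scores, reverse=True)[k - 1]
    return {i for i, s in enumerate(scores) if s >= cutoff}

# Share of final top-20% papers that were already top-20% in the early year.
overlap = len(top_set(early) & top_set(final)) / len(top_set(final))
```

With this toy data the overlap is 0.5: one of the two eventual top papers was already identifiable at publication. The paper's point is that for real social science data this share exceeds one half.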