
    The weakening relationship between the Impact Factor and papers' citations in the digital age

    Historically, papers have been physically bound to the journal in which they were published, but in the electronic age papers are available individually, no longer tied to their respective journals. Hence, papers can now be read and cited on their own merits, independently of the journal's physical availability, reputation, or Impact Factor. We compare the strength of the relationship between journals' Impact Factors and the actual citations received by their respective papers from 1902 to 2009. Throughout most of the 20th century, papers' citation rates were increasingly linked to their respective journals' Impact Factors. However, since 1990, the advent of the digital age, the strength of the relationship between Impact Factors and paper citations has been decreasing. This decrease began sooner in physics, a field that was quicker to make the transition into the electronic domain. Furthermore, since 1990, the proportion of highly cited papers coming from highly cited journals has been decreasing and, accordingly, the proportion of highly cited papers coming from less highly cited journals has been increasing. Should this pattern continue, it might bring an end to the use of the Impact Factor as a way to evaluate the quality of journals, papers and researchers. Comment: 14 pages, 5 figures.

    Citation Counts and Evaluation of Researchers in the Internet Age

    Bibliometric measures derived from citation counts are increasingly being used as a research evaluation tool. Their strengths and weaknesses have been widely analyzed in the literature and are often the subject of vigorous debate. We believe there are a few fundamental issues related to the impact of the web that are not taken into account with the importance they deserve. We focus on the evaluation of researchers, but several of our arguments also apply to the evaluation of research institutions, journals, and conferences. Comment: 4 pages, 2 figures, 3 tables.

    Fostering Bibliodiversity in Scholarly Communications: A Call for Action!

    Diversity is an important characteristic of any healthy ecosystem, including scholarly communications. Diversity in services and platforms, funding mechanisms, and evaluation measures will allow the scholarly communication system to accommodate the different workflows, languages, publication outputs, and research topics that support the needs and epistemic pluralism of different research communities. In addition, diversity reduces the risk of vendor lock-in, which inevitably leads to monopoly, monoculture, and high prices. Bibliodiversity has been in steady decline for decades. Far from promoting diversity, the dominant “ecosystem” of scholarly publishing today increasingly resembles what Vandana Shiva (1993) has called the “monocultures of the mind”, characterized by the homogenization of publication formats and outlets that are largely owned by a small number of multinational publishers who are far more interested in profit maximization than the health of the system. Yet, a diverse scholarly communications system is essential for addressing the complex challenges we face. As we transition to open access and open science, there is an opportunity to reverse this decline and foster greater diversity in scholarly communications: what the Jussieu Call refers to as bibliodiversity. Bibliodiversity, by its nature, cannot be pursued through a single, unified approach; however, it does require strong coordination in order to avoid a fragmented and siloed ecosystem. Building on the principles outlined in the Jussieu Call, this paper explores the current state of diversity in scholarly communications and issues a call for action, specifying what each community can do individually and collectively to support greater bibliodiversity in a more intentional fashion.

    Sozialwissenschaften (Social Sciences)

    This text was published as a book chapter in "Praxishandbuch Open Access" ("Open Access Handbook"), edited by Konstanze Söllner and Bernhard Mittermaier. It reflects the current state of Open Access to text publications, data and software in the Social Sciences.

    Are methodological quality and completeness of reporting associated with citation-based measures of publication impact? A secondary analysis of a systematic review of dementia biomarker studies

    Objective: To determine whether methodological and reporting quality are associated with surrogate measures of publication impact in the field of dementia biomarker studies. Methods: We assessed dementia biomarker studies included in a previous systematic review in terms of methodological and reporting quality using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) and Standards for Reporting of Diagnostic Accuracy (STARD) tools, respectively. We extracted additional study- and journal-related data from each publication to account for factors shown to be associated with impact in previous research. We explored associations between potential determinants and measures of publication impact in univariable and stepwise multivariable linear regression analyses. Outcome measures: We aimed to collect data on four measures of publication impact: two traditional measures (the average number of citations per year and the 5-year impact factor of the publishing journal) and two alternative measures (the Altmetric Attention Score and counts of electronic downloads). Results: The systematic review included 142 studies. Due to limited data, Altmetric Attention Scores and electronic downloads were excluded from the analysis, leaving traditional metrics as the only analysed outcome measures. We found no relationship between QUADAS and traditional metrics. Citation rates were independently associated with 5-year journal impact factor (β=0.42; p<0.001), journal subject area (β=0.39; p<0.001), number of years since publication (β=-0.29; p<0.001) and STARD score (β=0.13; p<0.05). Independent determinants of 5-year journal impact factor were citation rates (β=0.45; p<0.001), a statement on conflicts of interest (β=0.22; p<0.01) and baseline sample size (β=0.15; p<0.05). Conclusions: Citation rates and 5-year journal impact factor appear to measure different dimensions of impact. Citation rates were weakly associated with completeness of reporting, while neither traditional metric was related to methodological rigour. Our results suggest that high citation rates and a high-impact journal outlet are no guarantee of quality, and readers should critically appraise all papers regardless of presumed impact.
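    The kind of multivariable linear model this abstract reports (citations per year regressed on journal impact factor, publication age, and a reporting-quality score) can be sketched with ordinary least squares. The variable names, coefficients, and synthetic data below are illustrative assumptions loosely echoing the reported signs, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 142  # matches the review's sample size; the data itself is synthetic

# Hypothetical predictors: 5-year journal impact factor,
# years since publication, and STARD reporting-quality score.
jif = rng.uniform(1, 10, n)
age = rng.uniform(1, 15, n)
stard = rng.uniform(10, 25, n)

# Synthetic outcome: positive jif and stard effects, negative age effect.
cites_per_year = 0.42 * jif - 0.29 * age + 0.13 * stard + rng.normal(0, 1, n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), jif, age, stard])
beta, *_ = np.linalg.lstsq(X, cites_per_year, rcond=None)
print(beta)  # intercept, then coefficients for jif, age, stard
```

    A stepwise procedure, as used in the study, would add or drop predictors one at a time based on fit; the sketch above fits all three at once for simplicity.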

    Bibliometric Indicators of Young Authors in Astrophysics: Can Later Stars be Predicted?

    We test 16 bibliometric indicators with respect to their validity at the level of the individual researcher by estimating their power to predict later successful researchers. We compare the indicators of a sample of astrophysics researchers who later co-authored highly cited papers, measured before their first landmark paper, with the distributions of these indicators over a random control group of young authors in astronomy and astrophysics. We find that field and citation-window normalisation substantially improves the predictive power of citation indicators. The two indicators of total influence based on citation numbers normalised with expected citation numbers are the only indicators which show differences between later stars and random authors significant at the 1% level. Indicators of paper output are not very useful for predicting later stars. The famous h-index shows no difference at all between later stars and the random control group. Comment: 14 pages, 10 figures.
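    The field and citation-window normalisation credited here with the best predictive power amounts to dividing a paper's observed citations by the citations expected for papers of the same field and age. A minimal hypothetical sketch (the paper's actual indicators are more refined, and the baseline values below are invented):

```python
def normalised_citation_score(citations, field, age_years, expected):
    """Observed citations divided by the expected citation count for
    papers in the same field with the same citation window (age).

    `expected` maps (field, age_years) -> mean citations of comparable
    papers. A score above 1.0 means the paper beats its field baseline.
    """
    baseline = expected[(field, age_years)]
    return citations / baseline

# Invented baselines: astrophysics papers accrue citations faster than maths.
expected = {("astro-ph", 3): 20.0, ("math", 3): 5.0}

# 30 citations after 3 years is modest in astrophysics, exceptional in maths.
print(normalised_citation_score(30, "astro-ph", 3, expected))  # 1.5
print(normalised_citation_score(30, "math", 3, expected))      # 6.0
```

    Summing such scores over an author's early papers gives a total-influence indicator of the kind the abstract says best separates later stars from the control group.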