
    A categorization of arguments for counting methods for publication and citation indicators

    Most publication and citation indicators are based on datasets with multi-authored publications, and a change in counting method will therefore often change the value of an indicator. It is thus important to know why a specific counting method has been applied. I have identified arguments for counting methods in a sample of 32 bibliometric studies published in 2016 and compared the result with discussions of arguments for counting methods in three older studies. Based on the underlying logics of the arguments, I have arranged them into four groups. Group 1 focuses on arguments related to what an indicator measures, Group 2 on the additivity of a counting method, Group 3 on pragmatic reasons for the choice of counting method, and Group 4 on an indicator's influence on the research community or how it is perceived by researchers. This categorization can be used to describe and discuss how bibliometric studies with publication and citation indicators argue for counting methods.
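    The difference between counting methods is easiest to see on a small example. The following sketch uses hypothetical publication data (not data from the study) to contrast full counting, where every contributing institution receives one credit per paper, with fractional counting, where one credit is split across the contributing institutions; only the fractional scores add up to the number of publications, which is the additivity issue behind the Group 2 arguments.

        # Minimal sketch with hypothetical data: full vs. fractional counting
        # of multi-institution publications.
        from collections import defaultdict

        publications = [
            {"Univ A", "Univ B"},
            {"Univ A"},
            {"Univ A", "Univ B", "Univ C"},
        ]

        full = defaultdict(float)
        fractional = defaultdict(float)
        for institutions in publications:
            for inst in institutions:
                full[inst] += 1.0                             # one credit per contributing institution
                fractional[inst] += 1.0 / len(institutions)   # one credit shared across institutions

        print(dict(full))        # full counts sum to 6, more than the 3 papers
        print(dict(fractional))  # fractional counts sum to exactly 3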

    Usage History of Scientific Literature: Nature Metrics and Metrics of Nature Publications

    In this study, we analyze the dynamic usage history of Nature publications over time using Nature metrics data. We conduct the analysis from two perspectives: on the one hand, we examine how long it takes before an article's downloads reach 50% or 80% of the total; on the other hand, we compare the percentage of total downloads accumulated 7 days, 30 days, and 100 days after publication. In general, papers are downloaded most frequently within a short period right after their publication. We also find that, compared with non-Open Access papers, readers' attention to Open Access publications is more enduring. Based on the usage data of a newly published paper, regression analysis can predict its expected future total usage counts. Comment: 11 pages, 5 figures and 4 tables
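    These measures can be made concrete with a short sketch using hypothetical daily download counts (not Nature metrics data): it computes how many days it takes to reach 50% and 80% of total downloads, the share of downloads within 7, 30, and 100 days, and a toy regression of total usage on first-week downloads.

        # Minimal sketch with hypothetical data: usage-history measures for one
        # paper and a toy forecast of total usage from early downloads.
        import numpy as np

        daily = np.array([500, 300, 200, 150, 100, 80, 60] + [20] * 93)  # downloads on days 1..100
        cum = np.cumsum(daily)
        total = cum[-1]

        days_to_50 = int(np.argmax(cum >= 0.5 * total)) + 1
        days_to_80 = int(np.argmax(cum >= 0.8 * total)) + 1
        shares = {d: cum[d - 1] / total for d in (7, 30, 100)}
        print(days_to_50, days_to_80, shares)

        # Toy regression across several hypothetical papers: predict total usage
        # from downloads in the first 7 days.
        first_week = np.array([1390.0, 900.0, 2100.0, 650.0])
        totals = np.array([3250.0, 2300.0, 4800.0, 1700.0])
        slope, intercept = np.polyfit(first_week, totals, 1)
        print(slope * 1200 + intercept)  # expected total for a new paper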

    Discovering Rehabilitation trends in Spain: A bibliometric analysis

    The main purpose of this study is to offer an overview of the rehabilitation research area in Spain from 1970 to 2018 through a bibliometric analysis. A performance analysis and a co-word science mapping analysis were conducted to highlight the topics covered. The software tool SciMAT was used to analyse the themes in terms of their performance and impact measures. A total of 3,564 documents were retrieved from the Web of Science. Univ Deusto, Univ Rey Juan Carlos and the Basque Foundation for Science are the institutions with the highest relative priority. The most important research themes are Intellectual Disability, Neck Pain and Pain.
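    Co-word mapping starts from keyword co-occurrence counts. The sketch below uses hypothetical records and keywords (it is not the SciMAT workflow itself) to show the kind of co-occurrence data such an analysis builds on.

        # Minimal sketch with hypothetical records: keyword co-occurrence counts,
        # the raw material of a co-word science mapping analysis.
        from collections import Counter
        from itertools import combinations

        records = [
            {"neck pain", "pain", "physiotherapy"},
            {"intellectual disability", "quality of life"},
            {"neck pain", "physiotherapy"},
        ]

        cooccurrence = Counter()
        for keywords in records:
            for a, b in combinations(sorted(keywords), 2):
                cooccurrence[(a, b)] += 1

        print(cooccurrence.most_common(3))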

    Citations: Indicators of Quality? The Impact Fallacy

    We argue that citation is a composed indicator: short-term citations can be considered as currency at the research front, whereas long-term citations can contribute to the codification of knowledge claims into concept symbols. Knowledge claims at the research front are more likely to be transitory and are therefore problematic as indicators of quality. Citation impact studies focus on short-term citation, and therefore tend to measure not epistemic quality, but involvement in current discourses in which contributions are positioned by referencing. We explore this argument using three case studies: (1) citations of the journal Soziale Welt as an example of a venue that tends not to publish papers at a research front, unlike, for example, JACS; (2) Robert Merton as a concept symbol across theories of citation; and (3) the Multi-RPYS ("Multi-Referenced Publication Year Spectroscopy") of the journals Scientometrics, Gene, and Soziale Welt. We show empirically that the measurement of "quality" in terms of citations can further be qualified: short-term citation currency at the research front can be distinguished from longer-term processes of incorporation and codification of knowledge claims into bodies of knowledge. The recently introduced Multi-RPYS can be used to distinguish between short-term and long-term impacts. Comment: accepted for publication in Frontiers in Research Metrics and Analysis; doi: 10.3389/frma.2016.0000
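    The counting behind (Multi-)RPYS can be illustrated with a small sketch using hypothetical cited references; note that real RPYS compares each count against a five-year moving median rather than the simple per-year median used here. For each citing year, cited references are tallied by their publication year, and peaks above the baseline point to historically important works.

        # Minimal sketch with hypothetical data: per-citing-year spectrograms of
        # cited references by referenced publication year (RPY).
        from collections import Counter, defaultdict
        from statistics import median

        cited_refs = [(2015, 1965), (2015, 1973), (2015, 1973), (2016, 1973), (2016, 2001)]

        spectrogram = defaultdict(Counter)
        for citing_year, rpy in cited_refs:
            spectrogram[citing_year][rpy] += 1

        for citing_year, counts in sorted(spectrogram.items()):
            baseline = median(counts.values())
            deviations = {rpy: n - baseline for rpy, n in sorted(counts.items())}
            print(citing_year, deviations)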

    National and International Dimensions of the Triple Helix in Japan: University-Industry-Government versus International Co-Authorship Relations

    International co-authorship relations and university-industry-government ("Triple Helix") relations have hitherto been studied separately. Using Japanese (ISI) publication data for the period 1981-2004, we were able to study both kinds of relations in a single design. In the Japanese file, 1,277,823 articles with at least one Japanese address were attributed to the three sectors, and we know additionally whether these papers were co-authored internationally. Using the mutual information in three and four dimensions, respectively, we show that the Japanese Triple-Helix system has continuously been eroded at the national level. However, since the middle of the 1990s, international co-authorship relations have contributed to a reduction of the uncertainty. In other words, the national publication system of Japan has developed a capacity to retain surplus value generated internationally. In a final section, we compare these results with an analysis based on similar data for Canada. A relative uncoupling of local university-industry relations because of international collaborations is indicated in both national systems.
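    The three-dimensional mutual information (transmission) referred to here is T_uig = H_u + H_i + H_g - H_ui - H_ug - H_ig + H_uig, where negative values indicate synergy among the three sectors. A minimal sketch of the computation, using a randomly generated co-occurrence table rather than the ISI data:

        # Minimal sketch with synthetic counts: three-dimensional mutual information
        # over university (u), industry (i) and government (g) co-occurrences.
        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        counts = np.random.default_rng(0).integers(1, 50, size=(2, 2, 2)).astype(float)
        p = counts / counts.sum()

        Hu, Hi, Hg = (entropy(p.sum(axis=ax)) for ax in ((1, 2), (0, 2), (0, 1)))
        Hui, Hug, Hig = (entropy(p.sum(axis=ax)) for ax in (2, 1, 0))
        Huig = entropy(p)

        T_uig = Hu + Hi + Hg - Hui - Hug - Hig + Huig
        print(T_uig)  # negative T_uig indicates synergy at the system level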

    Do altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime data

    In this study, we address the question of whether, and to what extent, altmetrics are related to the scientific quality of papers (as measured by peer assessments). Only a few studies have previously investigated the relationship between altmetrics and assessments by peers. In the first step, we analyse the underlying dimensions of measurement for traditional metrics (citation counts) and altmetrics, using principal component analysis (PCA) and factor analysis (FA). In the second step, we test the relationship between these dimensions and the quality of papers (as measured by the post-publication peer-review system of F1000Prime assessments), using regression analysis. The results of the PCA and FA show that altmetrics operate along different dimensions: Mendeley counts are related to citation counts, whereas tweets form a separate dimension. The results of the regression analysis indicate that citation-based metrics and readership counts are significantly more strongly related to quality than tweets. This result questions the use of Twitter counts for research evaluation purposes on the one hand, and indicates a potential use of Mendeley reader counts on the other.
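    The two analysis steps can be sketched as follows, with synthetic data rather than the F1000Prime dataset and with illustrative variable names: the per-paper metrics matrix is reduced with PCA, and a quality score is then regressed on the resulting components.

        # Minimal sketch with synthetic data: PCA on per-paper metrics, then a
        # regression of a quality rating on the components.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 200
        citations = rng.poisson(20, n).astype(float)
        readers = citations + rng.poisson(5, n)    # readership correlates with citations
        tweets = rng.poisson(3, n).astype(float)   # tweets largely independent
        X = np.column_stack([citations, readers, tweets])

        components = PCA(n_components=2).fit_transform(X)
        quality = 1 + 0.05 * citations + rng.normal(0, 1, n)  # synthetic peer rating

        model = LinearRegression().fit(components, quality)
        print(model.coef_, model.score(components, quality))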

    Impact Factor: outdated artefact or stepping-stone to journal certification?

    A review of Garfield's journal impact factor and its specific implementation as the Thomson Reuters Impact Factor reveals several weaknesses in this commonly used indicator of journal standing. Key limitations include the mismatch between citing and cited documents, the deceptive display of three decimals that belies the real precision, and the absence of confidence intervals. These are minor issues that are easily amended and should be corrected, but more substantive improvements are needed. There are indications that the scientific community seeks and needs better certification of journal procedures to improve the quality of published science. Comprehensive certification of editorial and review procedures could help ensure adequate procedures to detect duplicate and fraudulent submissions. Comment: 25 pages, 12 figures, 6 tables
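    The precision point can be made concrete with a small sketch using synthetic citation counts (not Thomson Reuters data): an impact factor is a mean over citable items, and a bootstrap confidence interval shows how little a third decimal adds.

        # Minimal sketch with synthetic data: a journal impact factor (mean citations
        # in year Y to items from Y-1 and Y-2) with a bootstrap confidence interval.
        import numpy as np

        rng = np.random.default_rng(2)
        citations_per_item = rng.poisson(2.5, size=180)  # citable items from the two prior years

        impact_factor = citations_per_item.mean()
        boot_means = [rng.choice(citations_per_item, size=citations_per_item.size,
                                 replace=True).mean() for _ in range(2000)]
        ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
        print(f"IF = {impact_factor:.3f}, 95% CI = [{ci_low:.2f}, {ci_high:.2f}]")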