
    A categorization of arguments for counting methods for publication and citation indicators

    Most publication and citation indicators are based on datasets with multi-authored publications, and a change of counting method will therefore often change the value of an indicator. It is thus important to know why a specific counting method has been applied. I have identified arguments for counting methods in a sample of 32 bibliometric studies published in 2016 and compared the result with discussions of arguments for counting methods in three older studies. Based on their underlying logic, I have arranged the arguments into four groups: Group 1 covers arguments related to what an indicator measures, Group 2 the additivity of a counting method, Group 3 pragmatic reasons for the choice of counting method, and Group 4 an indicator's influence on the research community or how it is perceived by researchers. This categorization can be used to describe and discuss how bibliometric studies with publication and citation indicators argue for their counting methods.
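
    The full vs. fractional counting distinction that this abstract (and its Group 2, additivity) turns on can be sketched as follows; the author lists and credit function are illustrative, not taken from the study:

```python
from collections import defaultdict

def credit_per_author(papers, method="full"):
    """Distribute publication credit under two common counting methods:
    'full'       - each co-author gets 1 credit per paper, so total credit
                   exceeds the number of papers (not additive);
    'fractional' - each co-author gets 1/n credit, so credit sums to
                   exactly 1 per paper (additive)."""
    credit = defaultdict(float)
    for authors in papers:
        share = 1.0 if method == "full" else 1.0 / len(authors)
        for a in authors:
            credit[a] += share
    return dict(credit)

# Hypothetical publication set: three papers with 2, 1, and 3 authors.
papers = [["A", "B"], ["A"], ["B", "C", "D"]]
print(credit_per_author(papers, "full"))        # total credit 6.0 for 3 papers
print(credit_per_author(papers, "fractional"))  # total credit 3.0 for 3 papers
```

    The same indicator computed on the same dataset thus changes value with the counting method, which is why the abstract argues the choice needs explicit justification.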

    The evolution of a citation network topology: The development of the journal Scientometrics

    By mapping the electronic database containing all papers in Scientometrics over a 26-year period (1978-2004), we uncover the topological measures that characterize the network at a given moment, as well as the time evolution of these quantities. The citation network of the journal displays the characteristic features of a “small-world” network: local dense clusters of highly specialized literature. These clusters, however, are efficiently connected into a single large component by a small number of “hub” papers that allow short-distance connections among increasingly large numbers of papers. The patterns of evolution of the network toward this “small-world” topology are also explored.

    Citations: Indicators of Quality? The Impact Fallacy

    We argue that citation is a composed indicator: short-term citations can be considered as currency at the research front, whereas long-term citations can contribute to the codification of knowledge claims into concept symbols. Knowledge claims at the research front are more likely to be transitory and are therefore problematic as indicators of quality. Citation impact studies focus on short-term citation, and therefore tend to measure not epistemic quality, but involvement in current discourses in which contributions are positioned by referencing. We explore this argument using three case studies: (1) citations of the journal Soziale Welt as an example of a venue that tends not to publish papers at a research front, unlike, for example, JACS; (2) Robert Merton as a concept symbol across theories of citation; and (3) the Multi-RPYS ("Multi-Referenced Publication Year Spectroscopy") of the journals Scientometrics, Gene, and Soziale Welt. We show empirically that the measurement of "quality" in terms of citations can further be qualified: short-term citation currency at the research front can be distinguished from longer-term processes of incorporation and codification of knowledge claims into bodies of knowledge. The recently introduced Multi-RPYS can be used to distinguish between short-term and long-term impacts.
    Comment: accepted for publication in Frontiers in Research Metrics and Analysis; doi: 10.3389/frma.2016.0000

    Global Research Report – South and East Asia

    Global Research Report – South and East Asia by Jonathan Adams, David Pendlebury, Gordon Rogers & Martin Szomszor. Published by the Institute for Scientific Information, Web of Science Group.

    What does Hirsch index evolution explain us? A case study: Turkish Journal of Chemistry

    The evolution of the Hirsch index (h-index) of the Turkish Journal of Chemistry (Turk J. Chem) over the period 1995-2005 is studied, both with and without self-citations. The h-index of Turk J. Chem shows a strongly positive trend over the last five years. This suggests that Turk J. Chem is improving in both quantity and quality, since the h-index reflects peer review, and peer review reflects the research quality of a journal.
    Comment: 5 pages, 3 figures
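
    The h-index underlying this case study has a simple definition that can be computed directly; the citation counts below are made up for illustration:

```python
def h_index(citations):
    """h-index: the largest h such that the journal (or author) has
    at least h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# Hypothetical citation counts for a journal's papers:
print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```

    For the "without self-citations" variant studied in the paper, self-citations would simply be removed from each paper's count before calling the function.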

    Introducing CitedReferencesExplorer (CRExplorer): A program for Reference Publication Year Spectroscopy with Cited References Standardization

    We introduce a new tool - the CitedReferencesExplorer (CRExplorer, www.crexplorer.net) - which can be used to disambiguate and analyze the cited references (CRs) of a publication set downloaded from the Web of Science (WoS). The tool is especially suitable for identifying those publications which have been frequently cited by the researchers in a field, and thereby for studying, for example, the historical roots of a research field or topic. CRExplorer simplifies the identification of key publications by enabling the user to work with both a graph identifying the most frequently cited reference publication years (RPYs) and the list of references most frequently cited within those RPYs. A further focus of the program is the standardization of CRs: it is a serious problem in bibliometrics that several variants of the same CR exist in the WoS. In this study, CRExplorer is used to study the CRs of all papers published in the Journal of Informetrics. The analyses focus on the most important papers published between 1980 and 1990.
    Comment: Accepted for publication in the Journal of Informetrics
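
    The core of Reference Publication Year Spectroscopy described here is just a frequency count of cited-reference years; a minimal sketch, with made-up (year, reference) pairs standing in for parsed WoS records:

```python
from collections import Counter

def rpy_spectrum(cited_refs):
    """Reference Publication Year Spectroscopy in miniature: count how
    often each reference publication year (RPY) occurs among the cited
    references of a publication set. Peaks in the spectrum point to
    historically important years, whose most-cited references can then
    be inspected individually."""
    return Counter(year for year, _ in cited_refs)

# Hypothetical cited references as (year, reference-id) pairs:
refs = [(1983, "r1"), (1983, "r2"), (1986, "r3"), (1983, "r1"), (1990, "r4")]
spectrum = rpy_spectrum(refs)
print(sorted(spectrum.items()))  # [(1983, 3), (1986, 1), (1990, 1)]
```

    The standardization step CRExplorer adds (merging variants such as "r1" vs. a misspelled duplicate of it) matters precisely because variant records would otherwise split these counts.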

    Information Metrics (iMetrics): A Research Specialty with a Socio-Cognitive Identity?

    "Bibliometrics", "scientometrics", "informetrics", and "webometrics" can all be considered as manifestations of a single research area with similar objectives and methods, which we call "information metrics" or iMetrics. This study explores the cognitive and social distinctness of iMetrics with respect to the general information science (IS), focusing on a core of researchers, shared vocabulary and literature/knowledge base. Our analysis investigates the similarities and differences between four document sets. The document sets are drawn from three core journals for iMetrics research (Scientometrics, Journal of the American Society for Information Science and Technology, and Journal of Informetrics). We split JASIST into document sets containing iMetrics and general IS articles. The volume of publications in this representation of the specialty has increased rapidly during the last decade. A core of researchers that predominantly focus on iMetrics topics can thus be identified. This core group has developed a shared vocabulary as exhibited in high similarity of title words and one that shares a knowledge base. The research front of this field moves faster than the research front of information science in general, bringing it closer to Price's dream.Comment: Accepted for publication in Scientometric

    Bibliometric studies on single journals: a review

    This paper covers a total of 82 bibliometric studies on single journals (62 studies cover unique titles) published between 1998 and 2008, grouped into the following fields: Arts, Humanities and Social Sciences (12 items); Medical and Health Sciences (19 items); Sciences and Technology (30 items); and Library and Information Sciences (21 items). Under each field the studies are described according to their geographical location, in the following order: United Kingdom, United States and the Americas, Europe, Asia (India, Africa and Malaysia). For each study, the elements described are (a) the journal's publication characteristics and indexation information; (b) the objectives; (c) the sampling and bibliometric measures used; and (d) the results observed. A list of the journal titles studied is appended. The results show that (a) bibliometric studies cover journals in various fields; (b) some journals which are considered important have been revisited several times; (c) Asian and African contributions are high (41.4% of total studies; 43.5% covering unique titles), followed by the United States (30.4% of total; 31.0% on unique titles), Europe (18.2% of total; 14.5% on unique titles) and the United Kingdom (10% of total; 11% on unique titles); (d) a high number of bibliometricians are Indian, and as such coverage of Indian journals is high (28% of total studies; 30.6% of unique titles); and (e) the quality of the journals and their importance, either nationally or internationally, is inferred from their indexation status.

    Do altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime data

    In this study, we address the question whether, and to what extent, altmetrics are related to the scientific quality of papers (as measured by peer assessments). Only a few studies have previously investigated the relationship between altmetrics and assessments by peers. In the first step, we analyse the underlying dimensions of measurement for traditional metrics (citation counts) and altmetrics, using principal component analysis (PCA) and factor analysis (FA). In the second step, we test the relationship between those dimensions and the quality of papers (as measured by the post-publication peer-review system of F1000Prime assessments), using regression analysis. The results of the PCA and FA show that altmetrics operate along different dimensions: Mendeley counts are related to citation counts, whereas tweets form a separate dimension. The results of the regression analysis indicate that citation-based metrics and readership counts are significantly more related to quality than tweets. This result questions the use of Twitter counts for research evaluation purposes on the one hand, and indicates the potential usefulness of Mendeley reader counts on the other.
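
    The first analysis step the abstract describes, extracting dimensions of measurement from a papers-by-metrics matrix via PCA, can be sketched with synthetic data; the metric names, distributions, and correlation structure below are illustrative assumptions, not the F1000Prime data:

```python
import numpy as np

def pca_loadings(X, n_components=2):
    """PCA via SVD on a column-standardized matrix
    (rows = papers, columns = metrics). The returned loadings show
    which metrics load on the same component, i.e. measure along
    the same underlying dimension."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    return vt[:n_components]  # rows = components, cols = metric loadings

# Synthetic metrics mimicking the reported structure: Mendeley reader
# counts track citations, while tweets vary independently.
rng = np.random.default_rng(0)
cites = rng.poisson(20, 200).astype(float)
readers = cites + rng.normal(0, 2, 200)      # correlated with citations
tweets = rng.poisson(5, 200).astype(float)   # separate dimension
loadings = pca_loadings(np.column_stack([cites, readers, tweets]))
print(np.round(loadings, 2))
```

    With this structure, the first component loads on citations and readers together while tweets dominate the second component, which is the "separate dimension" pattern the study reports for Twitter counts.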