35 research outputs found

    Universality of citation distributions revisited

    Radicchi, Fortunato, and Castellano [arXiv:0806.0974, PNAS 105(45), 17268] claim that, apart from a scaling factor, all fields of science are characterized by the same citation distribution. We present a large-scale validation study of this universality claim. Our analysis shows that claiming citation distributions to be universal for all fields of science is not warranted. Although many fields do have fairly similar citation distributions, there are quite a few exceptions as well. We also briefly discuss the consequences of our findings for the measurement of scientific impact using citation-based bibliometric indicators.
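    The rescaling under test here is the one proposed by Radicchi et al.: divide each paper's citation count by the average count of its field. A minimal sketch of that normalization, using made-up lognormal counts as a stand-in for real field data (the field names, parameters, and sample sizes are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical fields with different average citation rates;
# lognormal counts are a common stand-in for citation data.
fields = {
    "field_A": rng.lognormal(mean=1.0, sigma=1.2, size=10_000),
    "field_B": rng.lognormal(mean=2.5, sigma=1.2, size=10_000),
}

# Radicchi-style rescaling: divide each paper's citation count
# by the average count c0 of its field.
rescaled = {name: c / c.mean() for name, c in fields.items()}

# If the universality claim held exactly, the rescaled distributions
# would coincide; comparing a few quantiles is a rough check.
for q in (0.5, 0.9, 0.99):
    qa = np.quantile(rescaled["field_A"], q)
    qb = np.quantile(rescaled["field_B"], q)
    print(f"q={q}: field_A={qa:.2f} field_B={qb:.2f}")
```

    On real data, the validation study above amounts to running this comparison across many fields and checking where the rescaled quantiles diverge.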

    Unraveling the dynamics of growth, aging and inflation for citations to scientific articles from specific research fields

    We analyze the time evolution of citations acquired by articles from journals of the American Physical Society (PRA, PRB, PRC, PRD, PRE and PRL). The observed change over time in the number of papers published in each journal is treated as an exogenously caused variation in citability and is accounted for by a normalization. The appropriately inflation-adjusted citation rates are found to be separable into a preferential-attachment-type growth kernel and a purely obsolescence-related aging function, i.e., one that decreases monotonically with time since publication. Variations in the empirically extracted parameters of the growth kernels and aging functions associated with different journals point to research-field-specific characteristics of citation intensity and knowledge flow. Comparison with analogous results for the citation dynamics of technology-disaggregated cohorts of patents provides deeper insight into the basic principles of information propagation as indicated by citing behavior.
    Comment: 13 pages, 6 figures, Elsevier style, v2: revised version to appear in J. Informetrics
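    The separability claimed in the abstract means the citation rate factorizes as (growth kernel in current citations) × (aging function in time since publication). A minimal numerical sketch of that structure, with an exponential aging function and made-up parameter values (c0, tau, amplitude are illustrative assumptions, not the paper's fitted values):

```python
import math

def citation_rate(c, t, c0=3.0, tau=5.0, amplitude=0.5):
    """Separable citation rate: growth kernel k(c) times aging A(t)."""
    growth = c + c0              # preferential-attachment-type kernel
    aging = math.exp(-t / tau)   # monotonically decreasing aging function
    return amplitude * growth * aging

# Deterministic (Euler) integration of the expected citation count
# over the 20 years following publication.
c, dt = 0.0, 0.01
for step in range(int(20 / dt)):
    c += citation_rate(c, step * dt) * dt
print(f"expected citations after 20 years: {c:.1f}")
```

    Fitting the shapes of k(c) and A(t) separately per journal is what yields the field-specific parameters the abstract refers to.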

    Skewness distribution of four key altmetric indicators: an in-progress analysis across 22 fields in a national context

    First of all, it should be noted that although this study uses a large sample of scientific publications (237,232 in total), they are all Spanish publications, so the results may not be generalizable. Recent studies show that altmetric indicators can follow different patterns depending on the country: countries such as Spain, France and Germany may show altmetric patterns different from those of Anglo-Saxon countries. Nevertheless, several findings appear to hold across contexts regardless of their specific features. Each altmetric indicator has its own pattern of asymmetry, and it is not the same in all scientific areas; the values differ considerably depending on the area and the indicator. Another important finding is that, compared to citations, the distributions of altmetric indicators are always less skewed. Citations also appear to be most similar to Twitter mentions. This paper provides a general mapping, by indicator and area, of the phenomenon of asymmetry in the world of altmetrics. It may be of use when establishing the field validity of certain indicators or when using summary statistics such as averages, and it will also help to decide whether it is necessary to introduce standardisation procedures for indicators such as those used by Costas and by Bornmann and Leydesdorff. This work will be continued in the future using the complete Altmetric.com database and a larger number of altmetric indicators.
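    The comparison of asymmetry across indicators rests on a standard skewness statistic. A small sketch of the Fisher-Pearson skewness coefficient applied to hypothetical per-publication counts (the indicator names and lognormal parameters are illustrative assumptions, not the study's data):

```python
import numpy as np

def sample_skewness(x):
    """Fisher-Pearson coefficient of skewness: g1 = m3 / m2**1.5."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.mean(d**3) / np.mean(d**2) ** 1.5

rng = np.random.default_rng(1)
# Hypothetical per-publication counts for one field; heavier-tailed
# lognormal parameters stand in for citations, lighter ones for tweets.
indicators = {
    "citations": rng.lognormal(mean=1.0, sigma=1.5, size=5_000),
    "tweets": rng.lognormal(mean=0.5, sigma=1.0, size=5_000),
}
for name, counts in indicators.items():
    print(f"{name}: skewness = {sample_skewness(counts):.2f}")
```

    Computing this statistic separately for each indicator in each of the 22 fields gives the kind of indicator-by-area asymmetry map the abstract describes.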

    Modelling Citation Networks

    The distribution of the number of academic publications as a function of citation count for a given year is remarkably similar from year to year. We measure this similarity as the width of the distribution and find it to be approximately constant from year to year. We show that simple citation models fail to capture this behaviour. We then provide a simple three-parameter citation network model, using a mixture of local and global search processes, which reproduces the correct distribution over time. We use the citation network of papers from the hep-th section of arXiv to test our model. For this data, around 20% of citations use global information to reference recently published papers, while the remaining 80% are found using local searches. We note that this is consistent with other studies, though our motivation is very different from previous work. Finally, we also find that the fluctuations in the size of an academic publication's bibliography are important for the model. This is not addressed in most models and needs further work.
    Comment: 29 pages, 22 figures
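    The local/global mixture described above can be sketched as a toy growth process: each new paper finds some references by a global search over recent papers and the rest by following the reference lists of papers it has already chosen. Everything below (window size, reference count, seed values) is an illustrative assumption, not the paper's calibrated model:

```python
import random

def grow_network(n_papers=2000, refs_per_paper=5, p_global=0.2, seed=7):
    """Toy citation-network growth mixing global and local search.

    With probability p_global a reference is found by a 'global' search
    over recently published papers; otherwise it is copied from the
    reference list of an already-chosen paper ('local' search). Values
    loosely echo the ~20/80 split reported for the hep-th data.
    """
    rng = random.Random(seed)
    refs = {i: [] for i in range(refs_per_paper + 1)}  # seed papers
    for new in range(refs_per_paper + 1, n_papers):
        recent = list(range(max(0, new - 100), new))   # recent window
        chosen = {rng.choice(recent)}                  # first pick: global
        while len(chosen) < refs_per_paper:
            if rng.random() < p_global:
                chosen.add(rng.choice(recent))
            else:
                base = rng.choice(sorted(chosen))      # local search:
                pool = refs[base] or recent            # follow a reference
                chosen.add(rng.choice(pool))           # of a cited paper
        refs[new] = sorted(chosen)
    return refs

refs = grow_network()
indegree = {}
for lst in refs.values():
    for r in lst:
        indegree[r] = indegree.get(r, 0) + 1
print("most-cited paper receives", max(indegree.values()), "citations")
```

    The local-search step is what generates preferential-attachment-like skew: highly referenced papers appear in many reference lists and so are copied more often.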

    The evaluation of citation distributions

    This paper reviews a number of recent contributions that demonstrate that a blend of welfare economics and statistical analysis is useful in the evaluation of the citations received by scientific papers in the periodical literature. The paper begins by clarifying the role of citation analysis in the evaluation of research. Next, a summary of results on the basic features of citation distributions at different aggregation levels is offered. These results indicate that citation distributions share the same broad shape, are highly skewed, and are often crowned by a power law. In light of this evidence, a novel methodology for the evaluation of research units is illustrated by comparing the high- and low-citation impact achieved by the U.S., the European Union, and the rest of the world in 22 scientific fields. However, contrary to recent claims, it is shown that mean normalization at the sub-field level does not lead to a universal distribution. Nevertheless, among other topics subject to ongoing research, it appears that this lack of universality does not preclude sensible normalization procedures for comparing the citation impact of articles in different scientific fields.

    The Impact Factor as a measuring tool of the prestige of the journals in research assessment in mathematics

    This is a pre-copyedited, author-produced PDF of an article accepted for publication in Research Evaluation following peer review. The version of record, Antonia Ferrer-Sapena, Enrique A. SĂĄnchez-PĂ©rez, Fernanda Peset, Luis-MillĂĄn GonzĂĄlez, Rafael Aleixandre-Benavent; The Impact Factor as a measuring tool of the prestige of the journals in research assessment in mathematics; Res Eval 2016; 25 (3): 306-314, is available online at: https://doi.org/10.1093/reseval/rvv041
    The (2-year) Impact Factor of Thomson Reuters (IF) has become the fundamental tool for analysing the scientific production of academic researchers in many countries. In this article we show that this index, and the ordering criterion obtained by using it, are highly unstable in the case of mathematics, to the extent that sometimes no reliability can be assigned to its use. We explain the reasons for this behaviour through the specific properties of mathematical journals and publications, attending mainly to the point of view of researchers in pure mathematics. Using the Journal Citation Reports list of journals as a source of information, we analyse the stability of the position of mathematical journals (the so-called rank-normalized impact factor) compared with journals in applied physics and microbiology during the period 2002-12. Given the lack of stability of the position of mathematics journals in these lists, we propose a 'cumulative index' that better fits the characteristics of mathematical journals. The computation of this index uses the values of the IF of the journals in previous years, providing in this way a more stable indicator.
    This work was supported by the Ministerio de Economia y Competitividad (Spain) [CS02012-39632-C02 to A.F.S, F.P., R.A.B.] and [MTM2012-36740-C02-02 to E.A.S.P.].
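    The instability discussed here concerns the standard 2-year Impact Factor, whose definition is fixed: citations received in year Y to items published in Y-1 and Y-2, divided by the citable items from those two years. A small sketch of that computation, plus an averaging function that only illustrates the smoothing idea behind a multi-year index (the article defines its own cumulative index; the numbers below are made up):

```python
def impact_factor(cites_in_year, citable_items):
    """Standard 2-year Impact Factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return cites_in_year / citable_items

def smoothed_index(yearly_ifs):
    """Illustrative stand-in for a cumulative index: average the IF over
    several previous years to damp year-to-year volatility. This is only
    a sketch of the smoothing idea, not the paper's definition."""
    return sum(yearly_ifs) / len(yearly_ifs)

# Toy IF series for a hypothetical mathematics journal (made-up numbers)
ifs = [0.62, 1.10, 0.55, 0.98, 0.70]
print(impact_factor(124, 200))          # -> 0.62
print(round(smoothed_index(ifs), 2))    # -> 0.79
```

    Because mathematics papers accrue citations slowly, single-year IF values like the toy series above swing widely, which is why an index built from several previous years ranks journals more stably.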