
    The measurement of low- and high-impact in citation distributions: technical results

    This paper introduces a novel methodology for comparing the citation distributions of research units working in the same homogeneous field. Given a critical citation level (CCL), we suggest using two real-valued indicators to describe the shape of any distribution: a high-impact and a low-impact measure defined over the set of articles with citations above or below the CCL, respectively. The key to this methodology is the identification of a citation distribution with an income distribution. Once this step is taken, it is easy to realize that the measurement of low-impact coincides with the measurement of economic poverty. In turn, it is equally natural to identify the measurement of high-impact with the measurement of a certain notion of economic affluence. It is also shown that the ranking of citation distributions according to a family of low-impact measures, originally suggested by Foster et al. (1984) for the measurement of economic poverty, is essentially characterized by a number of desirable axioms. Appropriately redefined, these same axioms lead to the selection of an equally convenient class of decomposable high-impact measures. These two families are shown to satisfy other interesting properties that make them potentially useful in empirical applications, including the comparison of research units working in different fields.
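    As a point of reference, the low-impact family attributed above to Foster et al. (1984) has the well-known FGT poverty-index form. The sketch below computes such an index for a citation distribution given a CCL; the function name, the choice of alpha, and the sample data are illustrative assumptions, not definitions taken from the paper.

    ```python
    # Minimal sketch of an FGT-style low-impact measure, assuming the
    # Foster-Greer-Thorbecke functional form discussed in the abstract.
    # The name `low_impact_index`, the value of `alpha`, and the sample
    # citation counts are illustrative only. A high-impact counterpart
    # could be built analogously over articles above the CCL, but its
    # exact form in the paper is not reproduced here.

    def low_impact_index(citations, ccl, alpha=2.0):
        """Average over all n articles of ((ccl - c) / ccl) ** alpha,
        restricted to articles with citations c strictly below the CCL."""
        n = len(citations)
        if n == 0:
            return 0.0
        gaps = [((ccl - c) / ccl) ** alpha for c in citations if c < ccl]
        return sum(gaps) / n

    # Example: a small distribution evaluated at a critical citation level of 10.
    print(low_impact_index([0, 2, 5, 12, 40, 100], ccl=10))
    ```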

    The evaluation of citation distributions.

    This paper reviews a number of recent contributions that demonstrate that a blend of welfare economics and statistical analysis is useful in the evaluation of the citations received by scientific papers in the periodical literature. The paper begins by clarifying the role of citation analysis in the evaluation of research. Next, a summary of results about the basic features of citation distributions at different aggregation levels is offered. These results indicate that citation distributions share the same broad shape, are highly skewed, and are often crowned by a power law. In light of this evidence, a novel methodology for the evaluation of research units is illustrated by comparing the high- and low-citation impact achieved by the U.S., the European Union, and the rest of the world in 22 scientific fields. However, contrary to recent claims, it is shown that mean normalization at the sub-field level does not lead to a universal distribution. Nevertheless, among other topics subject to ongoing research, it appears that this lack of universality does not preclude sensible normalization procedures for comparing the citation impact of articles across scientific fields.
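    The mean normalization mentioned above divides each article's citation count by the mean citation count of its sub-field. The sketch below illustrates that step under stated assumptions; the sub-field labels and citation counts are made up for the example and are not data from the paper.

    ```python
    # Minimal sketch of mean normalization at the sub-field level.
    # Input records and field names are illustrative assumptions only.

    from collections import defaultdict

    def mean_normalize(records):
        """Divide each article's citation count by the mean citation
        count of its sub-field. `records` is a list of
        (subfield, citations) pairs."""
        by_field = defaultdict(list)
        for subfield, cites in records:
            by_field[subfield].append(cites)
        means = {sf: sum(v) / len(v) for sf, v in by_field.items()}
        return [(sf, cites / means[sf]) for sf, cites in records]

    sample = [("Physics", 30), ("Physics", 10), ("Economics", 5), ("Economics", 15)]
    print(mean_normalize(sample))  # field-relative citation scores
    ```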
