
    A reverse engineering approach to the suppression of citation biases reveals universal properties of citation distributions

    The large amount of information contained in bibliographic databases has recently boosted the use of citations, and of other indicators based on citation numbers, as tools for the quantitative assessment of scientific research. Citation counts are often interpreted as proxies for the scientific influence of papers, journals, scholars, and institutions. However, a rigorous and scientifically grounded methodology for the correct use of citation counts is still missing. In particular, cross-disciplinary comparisons in terms of raw citation counts systematically favor scientific disciplines with higher citation and publication rates. Here we perform an exhaustive study of the citation patterns of millions of papers and derive a simple transformation of citation counts that suppresses the disproportionate citation counts across scientific domains. We find that the transformation is well described by a power-law function, and that the parameter values of the transformation are typical features of each scientific discipline. Universal properties of citation patterns therefore descend from the fact that citation distributions for papers in a specific field all belong to the same family of univariate distributions.
    Comment: 9 pages, 6 figures. Supporting information files available at http://filrad.homelinux.or
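    The transformation the abstract refers to can be illustrated with its simplest variant: rescaling each paper's citation count by the average count c0 of its field, so that the rescaled distributions of different disciplines collapse onto a common curve. The sketch below is a minimal Python illustration of that idea; the field labels and citation counts are invented, not taken from the study.

    ```python
    from collections import defaultdict

    # Hypothetical papers tagged with a field and a raw citation count.
    papers = [
        {"field": "mathematics", "citations": 12},
        {"field": "mathematics", "citations": 3},
        {"field": "molecular biology", "citations": 48},
        {"field": "molecular biology", "citations": 20},
    ]

    # Average citation count c0 for each field.
    sums = defaultdict(float)
    counts = defaultdict(int)
    for p in papers:
        sums[p["field"]] += p["citations"]
        counts[p["field"]] += 1
    c0 = {f: sums[f] / counts[f] for f in sums}

    # Relative indicator cf = c / c0; values become comparable across
    # fields once the field-level bias in raw counts is divided out.
    for p in papers:
        cf = p["citations"] / c0[p["field"]]
        print(f'{p["field"]}: c={p["citations"]}, cf={cf:.2f}')
    ```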

    Universality of Performance Indicators based on Citation and Reference Counts

    We find evidence for the universality of two relative bibliometric indicators of the quality of individual scientific publications, taken from different data sets. One of these is a new index that considers both citation and reference counts. We demonstrate this universality for relatively well cited publications from a single institute, grouped by year of publication and by faculty or by department. We show similar behaviour in publications submitted to the arXiv e-print archive, grouped by year of submission and by sub-archive. We also find that for reasonably well cited papers this distribution is well fitted by a lognormal with a variance of around 1.3, which is consistent with the results of Radicchi, Fortunato, and Castellano (2008). Our work demonstrates that comparisons can be made between publications from different disciplines and publication dates, regardless of their citation count and without expensive access to the whole world-wide citation graph. Further, it shows that averages of the logarithm of such relative bibliometric indices deal with the issue of long tails and avoid the need for statistics based on lengthy ranking procedures.
    Comment: 15 pages, 14 figures, 11 pages of supplementary material. Submitted to Scientometric
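    As a rough sketch of the lognormal check described above, with synthetic data standing in for real citation counts: if the relative indicator x = c / ⟨c⟩ is lognormally distributed, then log x is normal, and its variance can be compared with the value of about 1.3 reported in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic stand-in for the citation counts of well cited papers.
    c = rng.lognormal(mean=2.0, sigma=1.15, size=10_000)

    x = c / c.mean()      # relative indicator: citations over group average
    log_x = np.log(x)     # if x is lognormal, log x is normal

    # Averaging on the log scale tames the long tail, which is why the
    # authors advocate means of log-indicators over rank-based statistics.
    print(f"mean(log x)     = {log_x.mean():.3f}")
    print(f"variance(log x) = {log_x.var():.3f}")  # ~1.3 in the paper
    ```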

    A comment to the paper by Waltman et al., Scientometrics, 87, 467–481, 2011

    In reaction to a previous critique (Opthof and Leydesdorff, J Informetr 4(3):423–430, 2010), the Center for Science and Technology Studies (CWTS) in Leiden proposed to change its old “crown” indicator in citation analysis into a new one. Waltman et al. (Scientometrics 87:467–481, 2011a) argue that this change does not affect rankings at various aggregated levels. However, the CWTS data are not publicly available for testing and criticism. We therefore comment by using the previously published data of Van Raan (Scientometrics 67(3):491–502, 2006) to address the pivotal issue of how the results of citation analysis correlate with the results of peer review. A quality parameter based on peer review was not significantly correlated with the two parameters developed by the CWTS in the past, citations per paper/mean journal citation score (CPP/JCSm) and citations per paper/mean field citation score (CPP/FCSm), nor with the more recently proposed h-index (Hirsch, Proc Natl Acad Sci USA 102(46):16569–16572, 2005). Given the high correlations between the old and new “crown” indicators, one can expect that the lack of correlation with the peer-review-based quality indicator applies equally to the newly developed ones.
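    Of the indicators named above, the h-index has the simplest definition: the largest h such that h of an author's papers have at least h citations each. A minimal implementation (the citation counts in the example are made up):

    ```python
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with >= 4 citations)
    ```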

    Comparing research investment to United Kingdom institutions and published outputs for tuberculosis, HIV and malaria: A systematic analysis across 1997-2013

    Background: The "Unfinished Agenda" of infectious diseases is of great importance to policymakers and research funding agencies that require ongoing research evidence on their effective management. Journal publications help effectively share and disseminate research results to inform policy and practice. We assess research investments to United Kingdom institutions in HIV, tuberculosis and malaria, and analyse these by numbers of publications and citations and by disease and type of science.

    Methods: Information on infection-related research investments awarded to United Kingdom institutions across 1997-2010 was sourced from funding agencies and individually categorised by disease and type of science. Publications were sourced from the Scopus database via keyword searches and filtered to include only publications relating to human disease and containing a United Kingdom-based first and/or last author. Data were matched by disease and type of science categories. Investment (United Kingdom pounds) and publications were compared to generate an 'investment per publication' metric; similarly, an 'investment per citation' metric was developed as a measure of the usefulness of research.

    Results: Total research investment for all three diseases was £1.4 billion, and was greatest for HIV (£651.4 million), followed by malaria (£518.7 million) and tuberculosis (£239.1 million). There were 17,271 included publications, with 9,322 for HIV, 4,451 for malaria, and 3,498 for tuberculosis. HIV publications received the most citations (254,949), followed by malaria (148,559) and tuberculosis (100,244). By UK pound per publication, tuberculosis (£50,691) appeared the most productive area for investment, compared to HIV (£61,971) and malaria (£94,483). By type of science, public health research was most productive for HIV (£27,296) and tuberculosis (£22,273), while phase I-III trials were most productive for malaria (£60,491). By UK pound per citation, tuberculosis (£1,797) was the most productive area for investment, compared to HIV (£2,265) and malaria (£2,834). Public health research was the most productive type of science for HIV (£2,265) and tuberculosis (£1,797), whereas phase I-III trials were most productive for malaria (£1,713).

    Conclusions: When comparing total publications and citations with research investment to United Kingdom institutions, tuberculosis research appears to perform best in terms of efficiency. There were more public health-related publications and citations for HIV and tuberculosis than for other types of science. These findings demonstrate the diversity of research funding and outputs, and provide new evidence to inform research investment strategies for policymakers, funders, academic institutions, and healthcare organizations.
    Infectious Disease Research Networ
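    The two efficiency metrics are plain ratios, as the sketch below shows. The inputs here are hypothetical; in the study itself, investment and outputs are first matched by disease and type of science, so the headline totals above do not simply divide out to the published figures.

    ```python
    def investment_per_publication(investment_gbp, publications):
        """Pounds of research investment per published paper."""
        return investment_gbp / publications

    def investment_per_citation(investment_gbp, citations):
        """Pounds of research investment per citation received."""
        return investment_gbp / citations

    # Hypothetical example: £10 million yielding 200 papers, 5,000 citations.
    print(f"£{investment_per_publication(10e6, 200):,.0f} per publication")
    print(f"£{investment_per_citation(10e6, 5_000):,.0f} per citation")
    ```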

    Good practices for a literature survey are not followed by authors while preparing scientific manuscripts

    The number of citations received by authors in scientific journals has become a major parameter for assessing individual researchers, and for assessing the journals themselves through the impact factor. A fair assessment therefore requires that the criteria for selecting references in a given manuscript be unbiased with respect to the authors or the journals cited. In this paper, we advocate two mandatory principles that authors should follow when selecting papers (later reflected in the list of references) while surveying the literature for a given piece of research: i) consider similarity of content with the topics investigated, lest closely related work be duplicated or overlooked; ii) perform a systematic search over the network of citations, including seminal or closely related papers. We use formalisms of complex networks on two datasets of papers from the arXiv repository to show that neither of these two criteria is fulfilled in practice.
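    Criterion (ii) can be made concrete with a small sketch: starting from a few seed papers, walk the citation network outward a fixed number of hops and collect the reachable papers as candidate references. The toy graph below stands in for a real citation network such as the arXiv datasets used in the paper; the helper name is ours, not the authors'.

    ```python
    import networkx as nx

    # Toy citation graph: an edge u -> v means "u cites v".
    g = nx.DiGraph()
    g.add_edges_from([
        ("seed_A", "classic_1"), ("seed_A", "related_1"),
        ("seed_B", "classic_1"), ("seed_B", "related_2"),
        ("related_2", "classic_1"),
    ])

    def candidate_references(graph, seeds, hops=2):
        """Collect papers reachable from the seeds within `hops` hops."""
        found, frontier = set(), set(seeds)
        for _ in range(hops):
            frontier = {v for u in frontier for v in graph.successors(u)}
            found |= frontier
        return found - set(seeds)

    # A systematic search surfaces classic_1, which every path converges on.
    print(candidate_references(g, ["seed_A", "seed_B"]))
    ```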

    Towards a new crown indicator: an empirical analysis

    We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication appeared. One mechanism is applied in the current, so-called crown indicator of our institute. The other is applied in the new crown indicator that our institute is currently exploring. We find that at high aggregation levels, such as large research institutions or countries, the differences between the two mechanisms are very small. At lower aggregation levels, such as research groups or journals, the differences are somewhat larger. We pay special attention to the way in which recent publications are handled. These publications typically have very low citation counts and should therefore be handled with special care.
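    The two mechanisms are commonly summarized as a "ratio of averages" (the old crown indicator) versus an "average of ratios" (the new one). A minimal sketch with invented numbers, where c[i] is a publication's citation count and e[i] the expected count for its field and year:

    ```python
    # Citation counts c and field/year expected values e for four papers.
    c = [10, 2, 0, 7]
    e = [4.0, 1.0, 0.5, 3.5]

    # Old crown indicator: normalize the totals (ratio of averages).
    old_crown = sum(c) / sum(e)

    # New crown indicator: normalize each publication, then average
    # (average of ratios). Publications with very small e can dominate
    # the average, which is why recent papers call for special care.
    new_crown = sum(ci / ei for ci, ei in zip(c, e)) / len(c)

    print(f"old crown: {old_crown:.2f}")  # 2.11
    print(f"new crown: {new_crown:.2f}")  # 1.62
    ```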

    Harmonic publication and citation counting: sharing authorship credit equitably – not equally, geometrically or arithmetically

    Bibliometric counting methods need to be validated against perceived notions of authorship credit allocation, and standardized by rejecting methods with poor fit or questionable ethical implications. Harmonic counting meets these concerns: it exhibits a robust fit to previously published empirical data from medicine, psychology, and chemistry, and it complies with three basic ethical criteria for the equitable sharing of authorship credit. Harmonic counting can also incorporate additional byline information about equal contribution, or about the elevated status of a corresponding last author. By contrast, several counting schemes previously proposed in the bibliometric literature, including arithmetic, geometric, and fractional counting, do not fit the empirical data as well and do not consistently meet the ethical criteria. In conclusion, harmonic counting would seem to provide unrivalled accuracy, fairness, and flexibility to the long-overdue task of standardizing the bibliometric allocation of publication and citation credit.
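    Under harmonic counting as usually defined in the bibliometric literature, the i-th of N byline authors receives credit proportional to 1/i, normalized so the shares sum to one. A small sketch:

    ```python
    def harmonic_credit(n_authors):
        """Credit share for each byline position under harmonic counting."""
        weights = [1 / i for i in range(1, n_authors + 1)]
        total = sum(weights)  # the harmonic number H_N
        return [w / total for w in weights]

    # Four authors: roughly 0.48, 0.24, 0.16, 0.12 - unequal but not
    # arbitrary, and every byline position receives some credit.
    print([round(share, 2) for share in harmonic_credit(4)])
    ```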