
    Measuring co-authorship and networking-adjusted scientific impact

    Appraisal of the scientific impact of researchers, teams and institutions with productivity and citation metrics has major repercussions. Funding and promotion of individuals and survival of teams and institutions depend on publications and citations. In this competitive environment, the number of authors per paper is increasing, and some co-authors apparently do not satisfy authorship criteria. Listing of individual contributions is still sporadic and remains open to manipulation. Metrics are needed that measure the networking intensity of a single scientist or group of scientists while accounting for patterns of co-authorship. Here, I define I1 for a single scientist as the number of authors who appear in at least I1 papers of that scientist. For a group of scientists or an institution, In is defined as the number of authors who appear in at least In papers that bear the affiliation of the group or institution. I1 depends on the number of papers authored, Np. The power exponent R of the relationship between I1 and Np categorizes scientists as solitary (R>2.5), nuclear (R=2.25-2.5), networked (R=2-2.25), extensively networked (R=1.75-2) or collaborators (R<1.75). R may be used to adjust the citation impact of a scientist for co-authorship networking. In similarly provides a simple measure of effective networking size with which to adjust the citation impact of groups or institutions. Empirical data for the proposed metrics are provided for single scientists and institutions. Cautious adoption of adjustments for co-authorship and networking in scientific appraisals may offer incentives for more accountable co-authorship behaviour in published articles.
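    As a reading aid, here is a minimal sketch of how I1 could be computed under the definition above. The function name and the choice to exclude the focal scientist from the co-author tally are assumptions, not details given in the abstract.

        from collections import Counter

        def i1_index(papers, focal_author):
            # I1: the number of co-authors who appear in at least I1 of the
            # focal scientist's papers -- an h-index-style fixed point over
            # co-author appearance counts. Whether the focal author counts
            # toward I1 is not settled by the abstract; excluded here.
            counts = Counter(a for authors in papers
                             for a in authors if a != focal_author)
            freqs = sorted(counts.values(), reverse=True)
            i1 = 0
            for rank, freq in enumerate(freqs, start=1):
                if freq >= rank:
                    i1 = rank
                else:
                    break
            return i1

        # Toy example: four papers by scientist "X".
        papers = [["X", "A", "B"], ["X", "A"], ["X", "A", "C"], ["X", "B"]]
        print(i1_index(papers, "X"))  # A:3, B:2, C:1 -> I1 = 2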

    Assessing scientific research performance and impact with single indices

    We provide a comprehensive and critical review of the h-index and its most important modifications proposed in the literature, as well as of other similar indicators measuring research output and impact. Extensions of some of these indices are presented and illustrated.
    Keywords: Citation metrics, Research output, h-index, Hirsch index, h-type indices
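    Since the h-index anchors this review, a minimal sketch of its standard computation may help; the function name is illustrative.

        def h_index(citations):
            # Hirsch's h-index: the largest h such that the author has
            # h papers with at least h citations each.
            h = 0
            for rank, c in enumerate(sorted(citations, reverse=True), start=1):
                if c >= rank:
                    h = rank
                else:
                    break
            return h

        print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with >= 4 citations)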

    Citation Statistics

    This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using "simple and objective" methods is increasingly prevalent today. These "simple and objective" methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded. Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics; DOI: http://dx.doi.org/10.1214/09-STS285.

    Does Criticism Overcome the Praises of Journal Impact Factor?

    The journal impact factor (IF), a gauge of the influence of a particular journal relative to other journals in the same area of research, reports the mean number of citations to the articles published in that journal. Although the IF attracts more attention and is used more frequently than other measures, it has been subject to criticisms that outweigh its advantages. Critically, extensive use of the IF may distort editorial and researcher behaviour, which could compromise the quality of scientific articles. It is therefore timely and important to develop journal ranking techniques beyond the journal impact factor.
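    The abstract leans on the IF's definition as a mean citation rate; for concreteness, here is the standard two-year impact factor calculation as a small sketch (the numbers are invented for illustration).

        def impact_factor(citations, citable_items):
            # Two-year IF for year y: citations received in y to items
            # published in y-1 and y-2, divided by the number of citable
            # items published in y-1 and y-2.
            return citations / citable_items

        # Hypothetical journal: 450 citations in 2023 to the 300 citable
        # items it published in 2021-2022.
        print(impact_factor(450, 300))  # -> 1.5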

    Co-author weighting in bibliometric methodology and subfields of a scientific discipline

    Collaborative work and co-authorship are fundamental to the advancement of modern science. However, it is not clear how collaboration should be measured in achievement-based metrics. Co-author weighted credit introduces distortions into the bibliometric description of a discipline: it puts great weight on collaboration, not because of the results of collaboration but purely because collaborations exist. In terms of publication and citation impact, it artificially favors some subdisciplines. In order to understand how credit is given in a co-author weighted system (like the NRC's method), we introduced credit spaces. We include a study of the discipline of physics to illustrate the method. Indicators are introduced to measure the proportion of a credit space awarded to a subfield or a set of authors.
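    The distortion described here can be made concrete with a toy contrast between whole counting (each author's subfield receives a full unit per paper) and fractional counting (the unit is split 1/n among the n authors). This is a generic illustration only, not the paper's credit-space formalism or the NRC method.

        from collections import defaultdict

        def subfield_credit(papers, fractional=True):
            # papers: list of papers, each a list of (author, subfield) pairs.
            # Each paper carries one unit of credit. Whole counting gives every
            # author's subfield a full unit, inflating collaborative subfields;
            # fractional counting splits the unit among the paper's n authors.
            credit = defaultdict(float)
            for authors in papers:
                share = 1.0 / len(authors) if fractional else 1.0
                for _, subfield in authors:
                    credit[subfield] += share
            return dict(credit)

        papers = [
            [("A", "hep"), ("B", "hep"), ("C", "astro")],  # three-author paper
            [("D", "astro")],                              # solo paper
        ]
        print(subfield_credit(papers, fractional=False))  # {'hep': 2.0, 'astro': 2.0}
        print(subfield_credit(papers, fractional=True))   # hep ~ 0.67, astro ~ 1.33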

    "Needless to Say My Proposal Was Turned Down": The Early Days of Commercial Citation Indexing, an "Error-making" Activity and Its Repercussions Till Today

    In today’s neoliberal audit cultures, university rankings and the quantitative evaluation of publications by JIF or of researchers by h-index are believed to be indispensable instruments for “quality assurance” in the sciences. Yet there is increasing resistance against “impactitis” and “evaluitis”. A usually overlooked point: trivial errors in Thomson Reuters’ citation indexes produce severe non-trivial effects. Their victims are authors, institutions and journals with names beyond the ASCII code, and scholars of the humanities and social sciences. Analysing the “Joshua Lederberg Papers”, I want to show that the eventually successful ‘invention’ of science citation indexing was a product of contingent factors. To overcome severe resistance, Eugene Garfield, the “father” of citation indexing, had to foster overoptimistic attitudes and to downplay the severe problems connected with global, multidisciplinary citation indexing. The difficulties of handling different formats of references and footnotes, non-Anglo-American names, and publications in non-English languages were known to the pioneers of citation indexing. Nowadays the huge for-profit North American media corporation Thomson Reuters owns the citation databases founded by Garfield. Thomson Reuters’ influence on funding decisions, individual careers, departments, universities, disciplines and countries is immense and ambivalent. Large technological systems exhibit strong inertia; this insight from technology studies applies to Thomson Reuters’ large citation indexes, too.