225 research outputs found

    A Rejoinder on Energy versus Impact Indicators

    Citation distributions are so skewed that using the mean or any other central tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated. (Comment: Scientometrics, in press.)
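
    As a rough illustration of the percentile approach (not the paper's exact I3 formula, which the abstract does not spell out), the following Python sketch assigns each paper a percentile rank within a hypothetical reference set from the same field and year and then sums those ranks for a unit; the function names, the reference data and the simple summation are assumptions made for illustration.

        # Minimal sketch of a percentile-based impact score; the weighting of
        # percentile rank classes in the actual I3 indicator may differ.
        from bisect import bisect_right

        def percentile_rank(citations, reference_counts):
            """Percentage of reference papers cited no more often than this paper."""
            ranked = sorted(reference_counts)
            return 100.0 * bisect_right(ranked, citations) / len(ranked)

        def integrated_impact(unit_papers, reference_counts):
            """Aggregate article-level percentiles for a unit (e.g. an institute)."""
            return sum(percentile_rank(c, reference_counts) for c in unit_papers)

        # Hypothetical citation counts: the unit's papers and their reference set.
        reference = [0, 0, 1, 2, 2, 3, 5, 8, 13, 40]
        unit = [2, 8, 40]
        print(integrated_impact(unit, reference))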

    Towards a new crown indicator: an empirical analysis

    We present an empirical comparison between two normalization mechanisms for citation-based indicators of research performance. These mechanisms aim to normalize citation counts for the field and the year in which a publication was published. One mechanism is applied in the current so-called crown indicator of our institute. The other mechanism is applied in the new crown indicator that our institute is currently exploring. We find that at high aggregation levels, such as at the level of large research institutions or at the level of countries, the differences between the two mechanisms are very small. At lower aggregation levels, such as at the level of research groups or at the level of journals, the differences between the two mechanisms are somewhat larger. We pay special attention to the way in which recent publications are handled. These publications typically have very low citation counts and should therefore be handled with special care.
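
    The abstract does not spell the two mechanisms out; a common reading of the old versus the new crown indicator is "ratio of averages" versus "average of ratios". The Python sketch below contrasts the two on invented data; the function names and this interpretation are assumptions, not the paper's definitions.

        # Two ways of normalizing citation counts by field/year expected values:
        # a ratio of averages versus an average of ratios (assumed reading).

        def ratio_of_averages(citations, expected):
            """Total citations divided by total expected citations."""
            return sum(citations) / sum(expected)

        def average_of_ratios(citations, expected):
            """Mean of the per-publication ratios citations / expected."""
            return sum(c / e for c, e in zip(citations, expected)) / len(citations)

        # Hypothetical publications: actual citations and expected citation rates.
        cites = [10, 0, 3, 25]
        expected = [5.0, 2.0, 4.0, 10.0]
        print(ratio_of_averages(cites, expected))   # about 1.81
        print(average_of_ratios(cites, expected))   # about 1.31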

    Differences in citation frequency of clinical and basic science papers in cardiovascular research

    In this article, a critical analysis is performed of differences in citation frequency between basic and clinical cardiovascular papers. Clinical papers turn out to be cited at about 40% higher frequency, and the gap between the citation counts of the most highly cited papers in each group is even larger. It is also demonstrated that the groups of clinical and basic cardiovascular papers are themselves heterogeneous with respect to citation frequency. It is concluded that none of the existing citation indicators takes these differences into account. At this moment, these indicators should not be used for quality assessment of individual scientists or of scientific niches with small numbers of scientists.

    An index to quantify an individual's scientific research output that takes into account the effect of multiple coauthorship

    I propose the index \hbar ("hbar"), defined as the number of papers of an individual that have a citation count larger than or equal to the \hbar of all coauthors of each paper, as a useful index to characterize the scientific output of a researcher that takes into account the effect of multiple coauthorship. The bar is higher for \hbar. (Comment: A few minor changes from v1. To be published in Scientometrics.)
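
    One possible reading of this definition is a self-consistent computation over the coauthorship network: repeatedly recount each author's qualifying papers until the values stop changing. The Python sketch below implements that reading on toy data; the fixed-point loop and the inclusion of the author's own \hbar in the comparison are assumptions rather than the paper's stated algorithm.

        # Rough sketch of one reading of the hbar index: iterate until each
        # author's hbar equals the number of their papers whose citation count
        # is at least the current hbar of every author listed on that paper.
        # The iteration scheme itself is an assumption.

        def hbar_indices(papers, max_rounds=100):
            """papers: list of (citation_count, [author, ...]) tuples."""
            authors = {a for _, coauthors in papers for a in coauthors}
            hbar = {a: 0 for a in authors}          # start everyone at zero
            for _ in range(max_rounds):
                new = {
                    a: sum(
                        1
                        for cites, coauthors in papers
                        if a in coauthors and all(cites >= hbar[b] for b in coauthors)
                    )
                    for a in authors
                }
                if new == hbar:                     # self-consistent solution found
                    break
                hbar = new
            return hbar

        # Hypothetical toy data: (citations, authors).
        papers = [(10, ["A", "B"]), (4, ["A"]), (1, ["A", "C"]), (7, ["B", "C"])]
        print(hbar_indices(papers))                 # A and B reach 2, C reaches 1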

    Proposals for evaluating the regularity of a scientist's research output

    Evaluating the career of individual scientists according to their scientific output is a common bibliometric problem. Two aspects are classically taken into account: overall productivity and overall diffusion/impact, which can be measured by a plethora of indicators that consider publications and/or citations separately or synthesise these two quantities into a single number (e.g. the h-index). A secondary aspect, which is sometimes mentioned in the rules of competitive examinations for research positions/promotions, is the time regularity of a researcher's scientific output. Although it is sometimes invoked, a clear definition of regularity is still lacking. We define it as the ability to generate an active and stable research output over time, in terms of both publications (quantity) and citations (diffusion). The goal of this paper is to introduce three analysis tools for performing qualitative/quantitative evaluations of the regularity of a scientist's output in a simple and organic way. These tools are, respectively, (1) the PY/CY diagram, (2) the publication/citation Ferrers diagram and (3) a simplified procedure for comparing the research output of several scientists according to their publication and citation temporal distributions (Borda's ranking). The description of these tools is supported by several examples.
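
    The third tool is a Borda-style aggregation of rankings. As a generic illustration only (the paper's procedure works on publication and citation temporal distributions, which are not reproduced here), the Python sketch below combines two invented criterion rankings with a plain Borda count.

        # Generic Borda-count aggregation as a stand-in for the paper's third
        # tool; the criteria and the scoring rule are illustrative assumptions.

        def borda_ranking(rankings):
            """rankings: lists ordering the same scientists from best to worst."""
            scores = {}
            for ranking in rankings:
                n = len(ranking)
                for position, scientist in enumerate(ranking):
                    # best position earns n - 1 points, worst earns 0
                    scores[scientist] = scores.get(scientist, 0) + (n - 1 - position)
            return sorted(scores.items(), key=lambda item: item[1], reverse=True)

        # Hypothetical rankings by yearly publications and by yearly citations.
        by_publications = ["Alice", "Bob", "Carol"]
        by_citations = ["Carol", "Alice", "Bob"]
        print(borda_ranking([by_publications, by_citations]))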

    The substantive and practical significance of citation impact differences between institutions: Guidelines for the analysis of percentiles using effect sizes and confidence intervals

    In our chapter we address the statistical analysis of percentiles: how should the citation impact of institutions be compared? In educational and psychological testing, percentiles are already widely used as a standard for evaluating an individual's test scores (intelligence tests, for example) by comparing them with the percentiles of a calibrated sample. Percentiles, or percentile rank classes, are also a very suitable method for bibliometrics to normalize the citations of publications in terms of subject category and publication year and, unlike mean-based indicators (the relative citation rates), percentiles are scarcely affected by skewed citation distributions. The percentile of a certain publication provides information about the citation impact this publication has achieved in comparison to other similar publications in the same subject category and publication year. Analyses of percentiles, however, have not always been presented in the most effective and meaningful way. New APA guidelines (American Psychological Association, 2010) suggest a lesser emphasis on significance tests and a greater emphasis on the substantive and practical significance of findings. Drawing on work by Cumming (2012), we show how examinations of effect sizes (e.g. Cohen's d statistic) and confidence intervals can lead to a clear understanding of citation impact differences.
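
    As a small worked example of the recommended reporting style, the Python sketch below computes Cohen's d and a confidence interval for the difference in mean percentile between two institutions; the percentile values, the pooled-variance formula for d and the conservative degrees-of-freedom choice are assumptions made for illustration, not figures from the chapter.

        # Effect size (Cohen's d) and a confidence interval for the difference
        # in mean citation percentile between two institutions (invented data).
        import numpy as np
        from scipy import stats

        def cohens_d(a, b):
            """Standardized mean difference using a pooled standard deviation."""
            na, nb = len(a), len(b)
            pooled_var = ((na - 1) * np.var(a, ddof=1) +
                          (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
            return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

        def mean_diff_ci(a, b, level=0.95):
            """Confidence interval for the mean difference (conservative df)."""
            diff = np.mean(a) - np.mean(b)
            se = np.sqrt(np.var(a, ddof=1) / len(a) + np.var(b, ddof=1) / len(b))
            df = min(len(a), len(b)) - 1
            margin = stats.t.ppf(0.5 + level / 2, df) * se
            return diff - margin, diff + margin

        inst_a = np.array([55, 70, 90, 40, 65, 80, 75])   # hypothetical percentiles
        inst_b = np.array([45, 50, 60, 35, 55, 48, 52])
        print(cohens_d(inst_a, inst_b))
        print(mean_diff_ci(inst_a, inst_b))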

    Peer review quality and transparency of the peer-review process in open access and subscription journals

    BACKGROUND: Recent controversies highlighting substandard peer review in Open Access (OA) and traditional (subscription) journals have increased the need for authors, funders, publishers, and institutions to assure the quality of peer review in academic journals. I propose that transparency of the peer-review process may be seen as an indicator of the quality of peer review, and I develop and validate a tool enabling different stakeholders to assess the transparency of the peer-review process. METHODS AND FINDINGS: Based on editorial guidelines and best practices, I developed a 14-item tool for rating the transparency of the peer-review process on the basis of journals' websites. In Study 1, a random sample of 231 authors of papers in 92 subscription journals in different fields rated the transparency of the journals that published their work. Authors' ratings of transparency were positively associated with the quality of the peer-review process but unrelated to the journals' impact factors. In Study 2, 20 experts on OA publishing assessed the transparency of established (non-OA) journals, OA journals categorized as being published by potential predatory publishers, and journals from the Directory of Open Access Journals (DOAJ). Results show high reliability across items (α = .91) and sufficient reliability across raters. The ratings differentiated the three types of journals well. In Study 3, academic librarians rated a random sample of 140 DOAJ journals and another 54 journals that had received a hoax paper written by Bohannon to test peer-review quality. Journals with higher transparency ratings were less likely to accept the flawed paper and showed higher impact as measured by the h5 index from Google Scholar. CONCLUSIONS: The tool for assessing the transparency of the peer-review process at academic journals shows promising reliability and validity. The transparency of the peer-review process can be seen as an indicator of peer-review quality, allowing the tool to be used to predict academic quality in new journals.
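
    The reported internal consistency (α = .91) is a Cronbach's alpha computed across the 14 items. As a quick illustration of how such a coefficient is obtained, the Python sketch below applies the standard formula to an invented journals-by-items rating matrix; the data and sample size are assumptions, and only the 14-item count mirrors the tool described above.

        # Cronbach's alpha for a journals-by-items score matrix (invented data).
        import numpy as np

        def cronbach_alpha(scores):
            """scores: 2-D array with rows = rated journals, columns = items."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(30, 1))                   # 30 rated journals
        ratings = latent + 0.5 * rng.normal(size=(30, 14))  # 14 correlated items
        print(round(cronbach_alpha(ratings), 2))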

    International ranking systems for universities and institutions: a critical appraisal

    BACKGROUND: Ranking of universities and institutions has attracted wide attention recently. Several systems have been proposed that attempt to rank academic institutions worldwide. METHODS: We review the two most publicly visible ranking systems, the Shanghai Jiao Tong University 'Academic Ranking of World Universities' and the Times Higher Education Supplement 'World University Rankings', and also briefly review other ranking systems that use different criteria. We assess the construct validity for educational and research excellence and the measurement validity of each of the proposed ranking criteria, and try to identify generic challenges in international ranking of universities and institutions. RESULTS: None of the reviewed criteria for international ranking seems to have very good construct validity for both educational and research excellence, and most do not have very good construct validity even for just one of these two aspects of excellence. Measurement error for many items is also considerable or cannot be determined, owing to the lack of publication of the relevant data and methodology details. The concordance between the 2006 rankings by Shanghai and Times is modest at best, with only 133 universities shared in their top 200 lists. The examination of the existing international ranking systems suggests that generic challenges include adjustment for institutional size, definition of institutions, implications of average measurements of excellence versus measurements of extremes, adjustment for scientific field, time frame of measurement and allocation of credit for excellence. CONCLUSION: Naïve lists of international institutional rankings that do not address these fundamental challenges with transparent methods are misleading and should be abandoned. We make some suggestions on how focused and standardized evaluations of excellence could be improved and placed in proper context.
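
    The modest concordance reported in the Results (133 institutions shared between the two top-200 lists) can be expressed as a simple overlap fraction, optionally complemented by a rank correlation over the shared institutions. The Python sketch below shows both on invented short lists; the list contents and the choice of Kendall's tau are assumptions for illustration.

        # Concordance between two ranked lists: top-k overlap plus Kendall's
        # tau on the institutions present in both lists (invented data).
        from scipy.stats import kendalltau

        def top_k_overlap(list_a, list_b):
            """Fraction of institutions shared between two ranked lists."""
            shared = set(list_a) & set(list_b)
            return len(shared) / max(len(list_a), len(list_b))

        def shared_rank_correlation(list_a, list_b):
            """Kendall's tau over the institutions appearing in both lists."""
            in_b = set(list_b)
            shared = [u for u in list_a if u in in_b]
            ranks_a = [list_a.index(u) for u in shared]
            ranks_b = [list_b.index(u) for u in shared]
            tau, _ = kendalltau(ranks_a, ranks_b)
            return tau

        ranking_one = ["U1", "U2", "U3", "U4", "U5"]   # hypothetical top lists
        ranking_two = ["U2", "U1", "U6", "U3", "U7"]
        print(top_k_overlap(ranking_one, ranking_two))
        print(shared_rank_correlation(ranking_one, ranking_two))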

    Generalized h-index for Disclosing Latent Facts in Citation Networks

    What is the value of a scientist's work, and what is its impact on scientific thinking? How can we measure the prestige of a journal or of a conference? The evaluation of the scientific work of a scientist and the estimation of the quality of a journal or conference have long attracted significant interest, owing to the benefits of obtaining an unbiased and fair criterion. Although it appears to be simple, defining a quality metric is not an easy task. To overcome the disadvantages of the present metrics used for ranking scientists and journals, J.E. Hirsch proposed a pioneering metric, the now famous h-index. In this article, we demonstrate several inefficiencies of this index and develop a pair of generalizations and effective variants of it to deal with scientist ranking and with publication forum ranking. The new citation indices are able to disclose trendsetters in scientific research, as well as researchers that constantly shape their field with their influential work, no matter how old they are. We exhibit the effectiveness and the benefits of the new indices in unfolding the full potential of the h-index, with extensive experimental results obtained from DBLP, a widely known on-line digital library. (Comment: 19 pages, 17 tables, 27 figures.)
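
    The baseline being generalized here is the classic h-index. The Python sketch below computes it from a list of citation counts and adds one possible age-weighted variant of the kind the article explores; the specific weighting (dividing citations by paper age) is an assumption for illustration, not the paper's definition.

        # Classic h-index plus a simple age-weighted variant, illustrating the
        # kind of generalization discussed above; the weighting is an assumption.

        def h_index(citation_counts):
            """Largest h such that h papers have at least h citations each."""
            ranked = sorted(citation_counts, reverse=True)
            return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

        def age_weighted_h_index(papers, current_year):
            """Same computation on citation counts divided by paper age in years."""
            scores = [cites / (current_year - year + 1) for cites, year in papers]
            return h_index(scores)

        # Hypothetical papers: (citations, publication year).
        papers = [(25, 2001), (18, 2005), (6, 2007), (3, 2008), (1, 2008)]
        print(h_index([c for c, _ in papers]))       # classic h-index: 3
        print(age_weighted_h_index(papers, 2009))    # recency-weighted variant: 2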