
    The substantive and practical significance of citation impact differences between institutions: Guidelines for the analysis of percentiles using effect sizes and confidence intervals

    In our chapter we address the statistical analysis of percentiles: how should the citation impact of institutions be compared? In educational and psychological testing, percentiles are already widely used as a standard to evaluate an individual's test scores (intelligence tests, for example) by comparing them with the percentiles of a calibrated sample. Percentiles, or percentile rank classes, are also a very suitable method in bibliometrics for normalizing the citations of publications in terms of subject category and publication year, and, unlike mean-based indicators (relative citation rates), percentiles are scarcely affected by the skewed distributions of citations. The percentile of a publication indicates the citation impact it has achieved in comparison with similar publications in the same subject category and publication year. Analyses of percentiles, however, have not always been presented in the most effective and meaningful way. New APA guidelines (American Psychological Association, 2010) suggest a lesser emphasis on significance tests and a greater emphasis on the substantive and practical significance of findings. Drawing on work by Cumming (2012), we show how examinations of effect sizes (e.g., Cohen's d statistic) and confidence intervals can lead to a clear understanding of citation impact differences.
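
    A minimal Python sketch of the kind of analysis recommended above: Cohen's d and an approximate 95% confidence interval for the difference in mean percentile scores of two institutions. The institutions, sample sizes and percentile values are invented for illustration and are not taken from the chapter.

```python
# Sketch: effect size (Cohen's d) and an approximate 95% CI for the difference
# in mean citation percentiles of two hypothetical institutions. Data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
inst_a = rng.uniform(0, 100, 200)              # percentile scores, institution A (fake)
inst_b = rng.uniform(0, 100, 180) * 0.9 + 10   # institution B, shifted upwards (fake)

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

d = cohens_d(inst_b, inst_a)

# Approximate 95% CI for the mean difference (t distribution, pooled degrees of freedom).
diff = inst_b.mean() - inst_a.mean()
se = np.sqrt(inst_a.var(ddof=1) / len(inst_a) + inst_b.var(ddof=1) / len(inst_b))
dof = len(inst_a) + len(inst_b) - 2
ci = stats.t.interval(0.95, dof, loc=diff, scale=se)

print(f"Cohen's d = {d:.2f}, mean difference = {diff:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```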

    A Rejoinder on Energy versus Impact Indicators

    Citation distributions are so skewed that using the mean or any other central-tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated. Comment: Scientometrics, in press.
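
    A minimal sketch of an I3-style calculation, assuming the indicator is obtained by weighting each publication by its percentile rank within a reference set and summing over the unit's publications; the citation counts below are invented.

```python
# Sketch: an I3-style score that weights each publication by its percentile
# rank within a reference set of the same field and year. Data are invented.
import numpy as np
from scipy import stats

# Citation counts of a reference set (same subject category and year) and of
# the unit (e.g., an institution) being evaluated -- both made up.
reference_citations = np.array([0, 0, 1, 2, 2, 3, 5, 8, 13, 40, 55, 90])
unit_citations = np.array([2, 8, 55])

# Percentile rank of each of the unit's papers relative to the reference set.
percentiles = np.array([
    stats.percentileofscore(reference_citations, c, kind="weak")
    for c in unit_citations
])

i3_style_score = percentiles.sum()   # sum of percentile weights over the unit's papers
print(percentiles, i3_style_score)
```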

    Do citations and readership identify seminal publications?

    This work presents a new approach for analysing the ability of existing research metrics to identify research that has strongly influenced later developments. More specifically, we focus on the ability of citation counts and Mendeley reader counts to distinguish between publications regarded as seminal and publications regarded as literature reviews by field experts. The main motivation behind our research is to gain a better understanding of whether, and how well, existing research metrics relate to research quality. For this experiment we have created a new dataset, which we call TrueImpactDataset, containing two types of publications: seminal papers and literature reviews. Using the dataset, we conduct a set of experiments to study how citation and reader counts perform in distinguishing these publication types, following the intuition that causing a change in a field signifies research quality. Our research shows that citation counts work better than a random baseline (by a margin of 10%) in distinguishing important seminal research papers from literature reviews, while Mendeley reader counts do not work better than the baseline.
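
    The sketch below illustrates, on invented data rather than the TrueImpactDataset, how one can check whether a metric such as citation counts separates seminal papers from literature reviews better than a random baseline; `pairwise_accuracy` is a hypothetical helper, not part of the paper.

```python
# Sketch: how well a metric (citation or reader counts) separates papers labelled
# "seminal" (1) from "literature review" (0), compared with a random baseline.
# Labels and counts below are invented, not the TrueImpactDataset.
import numpy as np

rng = np.random.default_rng(42)
labels = np.array([1] * 50 + [0] * 50)                # 1 = seminal, 0 = review
citations = np.where(labels == 1,
                     rng.poisson(60, labels.size),    # seminal papers (fake counts)
                     rng.poisson(45, labels.size))    # literature reviews (fake counts)

def pairwise_accuracy(scores, y):
    """Probability that a random seminal/review pair is ranked correctly (an AUC-like measure)."""
    pos, neg = scores[y == 1], scores[y == 0]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

print("citation counts:", pairwise_accuracy(citations, labels))
print("random baseline:", pairwise_accuracy(rng.random(labels.size), labels))   # ~0.5
```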

    Benchmarking scientific performance by decomposing leadership of Cuban and Latin American institutions in Public Health

    This is a post-peer-review, pre-copyedit version of an article published in Scientometrics. The final authenticated version is available online at: http://dx.doi.org/10.1007/s11192-015-1831-z. Comparative benchmarking with bibliometric indicators can be an aid to decision-making in research management. This study aims to characterize the scientific performance in a domain (Public Health) of the institutions of a country (Cuba), taking as reference world output and regional output (other Latin American centers) during the period 2003–2012. A new approach is used here to assess to what extent the leadership of a specific institution can change its citation impact. Cuba was found to have a high level of specialization and scientific leadership that does not match the low international visibility of Cuban institutions. This leading output appears mainly in non-collaborative papers and in national journals; publication in English is very scarce and the rate of international collaboration is very low. The Instituto de Medicina Tropical Pedro Kouri stands out, alone, as a national reference. Meanwhile, at the regional level, Latin American institutions deserving mention for their high autonomy in normalized citation include the Universidad de Buenos Aires (ARG), Universidade Federal de Pelotas (BRA), Consejo Nacional de Investigaciones Científicas y Técnicas (ARG), Instituto Oswaldo Cruz (BRA) and the Centro de Pesquisas René Rachou (BRA). We identify a crucial aspect that can give rise to misinterpretations of the data: a high share of leadership cannot be considered positive for institutions when it is mainly associated with a high proportion of non-collaborative papers and a very low level of performance. Because leadership might be questionable in some cases, we propose future studies to ensure a better interpretation of the findings. This work was made possible through financing by the scholarship funds for international mobility between Andalusian and Ibero-American universities and the SCImago Group.

    Does the Committee Peer Review Select the Best Applicants for Funding? An Investigation of the Selection Process for Two European Molecular Biology Organization Programmes

    Does peer review fulfill its declared objective of identifying the best science and the best scientists? To answer this question, we analyzed the Long-Term Fellowship and the Young Investigator programmes of the European Molecular Biology Organization. Both programmes aim to identify and support the best postdoctoral fellows and young group leaders in the life sciences. We examined the association between the selection decisions and the scientific performance of the applicants. Our study involved publication and citation data for 668 applicants to the Long-Term Fellowship programme from the year 1998 (130 approved, 538 rejected) and 297 applicants to the Young Investigator programme (39 approved, 258 rejected) from the years 2001 and 2002. If quantity and impact of research publications are used as the criterion for scientific achievement, the results of (zero-truncated) negative binomial models show that the peer review process indeed selects scientists who subsequently perform at a higher level than the rejected applicants. We determined the extent of errors due to over-estimation (type I errors) and under-estimation (type II errors) of future scientific performance. Our statistical analyses indicate that between 26% and 48% of the decisions to award or reject an application show one of the two error types. Even though the selection committee did not correctly estimate the future performance of some applicants, the results show a statistically significant association between selection decisions and the applicants' scientific achievements when quantity and impact of research publications are used as the criterion.
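
    As a hedged illustration of the modelling approach mentioned above, the sketch below fits a plain (not zero-truncated) negative binomial regression of simulated post-application citation counts on the funding decision; the data and the simplification to the untruncated model are assumptions, not the study's actual analysis.

```python
# Sketch: negative binomial regression of post-application citation counts on the
# committee's decision (approved vs rejected). The paper used zero-truncated models;
# the plain statsmodels variant below is a simplification, and the data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
approved = rng.binomial(1, 0.2, n)     # 1 = funded, 0 = rejected (simulated)
p = 1.0 / (1.0 + np.exp(1.0 + 0.5 * approved))
citations = rng.negative_binomial(2, p, n)   # simulated citation counts

X = sm.add_constant(approved)
model = sm.NegativeBinomial(citations, X).fit(disp=False)
# A positive coefficient on the 'approved' column means funded applicants are cited more.
print(model.summary())
```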

    Survey of the dependence on temperature of the coercivity of garnet films

    The temperature dependence of the domain-wall coercive field of epitaxial magnetic garnet films has been investigated over the entire temperature range of the ferrimagnetic phase and has been found to be described by a set of parametric exponents. In successive temperature regions different slopes were observed, with breaking points whose position was found to be sample dependent. A survey based on literature data as well as on a large number of our own samples shows the general existence of this piecewise exponential dependence and the presence of the breaking points. This type of temperature dependence of the domain-wall coercive field was found in all samples of the large family of epitaxial garnets (about 30 specimens of more than ten chemical compositions) and also in another strongly anisotropic material (TbFeCo).
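
    A minimal sketch of the kind of piecewise exponential description reported above, assuming a law of the form H_c(T) = H_0 exp(-T/T_0) fitted separately below and above an assumed breaking point; the synthetic data only mimic the qualitative behaviour and are not measurements.

```python
# Sketch: fitting an exponential temperature dependence H_c(T) = H_0 * exp(-T / T_0)
# separately on two temperature regions on either side of a breaking point.
# All data below are synthetic and only mimic the qualitative behaviour.
import numpy as np
from scipy.optimize import curve_fit

def exp_law(T, H0, T0):
    return H0 * np.exp(-T / T0)

T = np.linspace(10, 400, 40)
T_break = 200.0                                   # assumed breaking point
Hc = np.where(T < T_break, exp_law(T, 50.0, 120.0), exp_law(T, 20.0, 300.0))
Hc *= 1 + 0.03 * np.random.default_rng(3).standard_normal(T.size)   # measurement noise

low, high = T < T_break, T >= T_break
p_low, _ = curve_fit(exp_law, T[low], Hc[low], p0=(40.0, 100.0))
p_high, _ = curve_fit(exp_law, T[high], Hc[high], p0=(40.0, 200.0))
print("T0 below / above the break:", p_low[1], p_high[1])   # different slopes per region
```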

    Which aspects of the open science agenda are most relevant to scientometric research and publishing? An opinion paper

    © 2021 The Authors. Published by MIT Press. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher's website: https://doi.org/10.1162/qss_e_00121. Open Science is an umbrella term that encompasses many recommendations for possible changes in research practices, management, and publishing, with the objective of increasing transparency and accessibility. It has become an important science policy issue that all disciplines should consider. Many Open Science recommendations may be valuable for the further development of research and publishing, but not all are relevant to all fields. This opinion paper considers the aspects of Open Science that are most relevant for scientometricians and discusses how they can be usefully applied. The work of R.G. was supported by the Flemish Government through its funding of the Flemish Centre for R&D Monitoring (ECOOM).

    The success-index: an alternative approach to the h-index for evaluating an individual's research output

    Among the most recent bibliometric indicators for normalizing differences among fields of science in terms of citation behaviour, Kosmulski (J Informetr 5(3):481-485, 2011) proposed the NSP (number of successful papers) index. According to the authors, NSP deserves much attention for its great simplicity and immediate meaning, equivalent to those of the h-index, while it has the disadvantage of being prone to manipulation and not very efficient in terms of statistical significance. In the first part of the paper we introduce the success-index, aimed at reducing the NSP-index's limitations, although it requires more computing effort. Next, we present a detailed analysis of the success-index from the point of view of its operational properties, and a comparison with those of the h-index. Particularly interesting is the examination of the success-index's scale of measurement, which is much richer than that of the h-index. This makes the success-index much more versatile for different types of analysis, e.g., (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different size, and (4) scientific journals.
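
    A minimal sketch of a success-index-style count next to the h-index, assuming the indicator counts papers whose citations reach a field- and year-specific comparison threshold; the citation counts and thresholds below are invented rather than derived from real reference sets.

```python
# Sketch: a success-index-style count (papers whose citations reach a per-paper
# comparison threshold) next to the h-index. All numbers below are invented; the
# real indicator derives each threshold from a reference set of similar papers.
import numpy as np

citations = np.array([25, 12, 9, 7, 5, 3, 1, 0])    # one researcher's papers (fake)
thresholds = np.array([10, 10, 8, 8, 6, 6, 4, 4])   # comparison terms per paper (fake)

success_index = int((citations >= thresholds).sum())

# h-index: the largest h such that h papers have at least h citations each.
ranked = np.sort(citations)[::-1]
h_index = int((ranked >= np.arange(1, ranked.size + 1)).sum())

print("success-index:", success_index, "h-index:", h_index)
```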

    Relationship among research collaboration, number of documents and number of citations. A case study in Spanish computer science production in 2000-2009.

    This paper analyzes the relationship among research collaboration, number of documents and number of citations in computer science research activity. It examines the number of documents and citations and how they vary with the number of authors. They are also analyzed (according to author-set cardinality) under different circumstances: when documents are written in different types of collaboration, when documents are published as different document types, when documents are published in different computer science subdisciplines, and, finally, when documents are published in journals with different impact factor quartiles. To investigate these relationships, the paper analyzes the publications listed in the Web of Science and produced by active Spanish university professors working in the computer science field between 2000 and 2009. Across all documents, we show that the highest percentage of documents is published by three authors, whereas single-authored documents account for the lowest percentage. In terms of citations, there is no positive association between author cardinality and citation impact. Statistical tests show that documents written by two authors receive more citations per document and per year than documents published by more authors. In contrast, the results do not show statistically significant differences between documents published by two authors and by one author. The research findings suggest that international collaboration results, on average, in publications with higher citation rates than national and institutional collaboration. We also find the expected differences in citation rates between journals and conferences, across computer science subdisciplines, and across journal quartiles. Finally, our impression is that the collaborative level (number of authors per document) will increase in the coming years, and documents published by three or four authors will be the trend in the computer science literature.
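
    In the spirit of the analysis described above, the sketch below groups invented publication records by number of authors and compares citations per document and per year; the records, the reference year 2010 and the column names are assumptions for illustration.

```python
# Sketch: grouping publications by number of authors and comparing citations per
# document and per year. The records below are invented for illustration.
import pandas as pd

papers = pd.DataFrame({
    "n_authors": [1, 2, 2, 3, 3, 3, 4, 5],
    "citations": [3, 14, 9, 6, 11, 2, 8, 5],
    "pub_year":  [2003, 2005, 2001, 2004, 2007, 2002, 2006, 2008],
})
papers["age"] = 2010 - papers["pub_year"]                  # years since publication
papers["cites_per_year"] = papers["citations"] / papers["age"]

summary = papers.groupby("n_authors").agg(
    documents=("citations", "size"),
    citations_per_doc=("citations", "mean"),
    citations_per_doc_year=("cites_per_year", "mean"),
)
print(summary)
```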

    Testing bibliometric indicators by their prediction of scientists' promotions

    We have developed a method to obtain robust quantitative bibliometric indicators for several thousand scientists. This allows us to study the dependence of bibliometric indicators (such as number of publications, number of citations, Hirsch index...) on the age, position, etc. of CNRS scientists. Our data suggest that the normalized h-index (h divided by career length) is not constant for scientists with the same productivity but different ages. We also compare the predictions of several bibliometric indicators with the promotions of about 600 CNRS researchers. Contrary to previous publications, our study encompasses most disciplines, and it shows that no single indicator is the best predictor for all disciplines. Overall, however, the Hirsch index h provides the least bad correlations, followed by the number of papers published. It is important to realize, however, that even h is able to recover only half of the actual promotions. The number of citations and the mean number of citations per paper are definitely not good predictors of promotion.
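
    A minimal sketch, on simulated rather than CNRS data, of normalizing the h-index by career length and checking how each indicator associates with promotion outcomes; the simulation parameters are arbitrary assumptions.

```python
# Sketch: normalizing the h-index by career length and checking how well each
# indicator tracks promotion. All values are simulated, not CNRS data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 600
career_years = rng.integers(3, 35, n)
h_index = rng.poisson(0.8 * career_years)           # fake h values growing with career length
h_normalized = h_index / career_years
promoted = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.15 * h_index - 2.0))), n)

# Rank-based association between each indicator and promotion (point estimates only).
for name, indicator in [("h", h_index), ("h / career length", h_normalized)]:
    rho, _ = stats.spearmanr(indicator, promoted)
    print(f"Spearman rho, {name} vs promotion: {rho:.2f}")
```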