1,083 research outputs found
Influential cited references in FEMS Microbiology Letters: lessons from Reference Publication Year Spectroscopy (RPYS)
The journal FEMS Microbiology Letters covers all aspects of microbiology, including virology. On which scientific shoulders do the papers published in this journal stand? Which are the classic papers used by the authors? We aim to answer these questions by applying Reference Publication Year Spectroscopy (RPYS) to all papers published in this journal between 1977 and 2017. In total, 16 837 publications with 410 586 cited references are analyzed. The studies published in FEMS Microbiology Letters mainly draw on methods developed to quantify or characterize biochemical substances such as proteins, nucleic acids, lipids, or carbohydrates, and on improvements of techniques suitable for studies of bacterial genetics. The techniques frequently used in FEMS Microbiology Letters studies for investigating the genetics of microorganisms were developed using samples prepared from microorganisms, whereas methods required for the investigation of proteins, carbohydrates, or lipids were mostly transferred to microbiology from other fields of the life sciences.
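The RPYS computation itself is simple to operationalize: count cited references by their publication year and look for peaks against the surrounding years. Below is a minimal Python sketch, assuming the cited-reference years have already been extracted from a database; the function name rpys_spectrum and the 5-year median window are illustrative choices, not necessarily the authors' exact implementation.

```python
from collections import Counter
from statistics import median

def rpys_spectrum(cited_years, window=5):
    """Reference Publication Year Spectroscopy: count cited references
    per reference publication year and subtract the median of the
    surrounding `window` years, so citation classics stand out as peaks."""
    counts = Counter(cited_years)
    half = window // 2
    spectrum = {}
    for y in range(min(counts), max(counts) + 1):
        neighbourhood = [counts.get(y + d, 0) for d in range(-half, half + 1)]
        spectrum[y] = counts.get(y, 0) - median(neighbourhood)
    return spectrum

# Toy usage: publication years of cited references found in a journal's papers.
refs = [1953, 1970, 1970, 1970, 1976, 1976, 1989]
for year, deviation in sorted(rpys_spectrum(refs).items()):
    print(year, deviation)
```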
The substantive and practical significance of citation impact differences between institutions: Guidelines for the analysis of percentiles using effect sizes and confidence intervals
In our chapter we address the statistical analysis of percentiles: How should
the citation impact of institutions be compared? In educational and
psychological testing, percentiles are already used widely as a standard to
evaluate an individual's test scores - intelligence tests for example - by
comparing them with the percentiles of a calibrated sample. Percentiles, or
percentile rank classes, are also a very suitable method for bibliometrics to
normalize citations of publications in terms of the subject category and the
publication year and, unlike the mean-based indicators (the relative citation
rates), percentiles are scarcely affected by skewed distributions of citations.
The percentile of a certain publication provides information about the citation
impact this publication has achieved in comparison to other similar
publications in the same subject category and publication year. Analyses of
percentiles, however, have not always been presented in the most effective and
meaningful way. New APA guidelines (American Psychological Association, 2010)
suggest a lesser emphasis on significance tests and a greater emphasis on the
substantive and practical significance of findings. Drawing on work by Cumming
(2012), we show how examinations of effect sizes (e.g. Cohen's d statistic) and
confidence intervals can lead to a clear understanding of citation impact
differences.
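As a concrete illustration of the statistics the chapter advocates, here is a minimal Python sketch computing Cohen's d (with pooled standard deviation) and an approximate 95% confidence interval for the difference of mean percentile scores of two institutions. The data and the normal-approximation interval are illustrative assumptions, not taken from the chapter.

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(a, b):
    """Cohen's d: standardized mean difference with pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                  / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def diff_ci95(a, b):
    """Approximate 95% CI for the difference of means (normal approximation)."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    d = mean(a) - mean(b)
    return d - 1.96 * se, d + 1.96 * se

# Hypothetical citation percentile scores (0-100) for two institutions' papers.
inst_a = [91, 75, 60, 88, 70, 95, 82]
inst_b = [55, 63, 49, 71, 58, 66, 60]
print(cohens_d(inst_a, inst_b), diff_ci95(inst_a, inst_b))
```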
Metrics to evaluate research performance in academic institutions: A critique of ERA 2010 as applied in forestry and the indirect H2 index as a possible alternative
Excellence in Research for Australia (ERA) is an attempt by the Australian
Research Council to rate Australian universities on a 5-point scale within 180
Fields of Research using metrics and peer evaluation by an evaluation
committee. Some of the bibliometric data contributing to this ranking suffer
statistical issues associated with skewed distributions. Other data are
standardised year-by-year, placing undue emphasis on the most recent
publications which may not yet have reliable citation patterns. The
bibliometric data offered to the evaluation committees are extensive, but lack
effective syntheses such as the h-index and its variants. The indirect H2 index
is objective, can be computed automatically and efficiently, is resistant to
manipulation, and is a good indicator of impact that could assist the ERA
evaluation committees and similar evaluations internationally.
A reverse engineering approach to the suppression of citation biases reveals universal properties of citation distributions
The large amount of information contained in bibliographic databases has
recently boosted the use of citations, and other indicators based on citation
numbers, as tools for the quantitative assessment of scientific research.
Citation counts are often interpreted as proxies for the scientific influence
of papers, journals, scholars, and institutions. However, a rigorous and
scientifically grounded methodology for a correct use of citation counts is
still missing. In particular, cross-disciplinary comparisons in terms of raw
citation counts systematically favor scientific disciplines with higher
citation and publication rates. Here we perform an exhaustive study of the
citation patterns of millions of papers, and derive a simple transformation of
citation counts able to suppress the disproportionate citation counts among
scientific domains. We find that the transformation is well described by a
power-law function, and that the parameter values of the transformation are
typical features of each scientific discipline. Universal properties of
citation patterns descend therefore from the fact that citation distributions
for papers in a specific field are all part of the same family of univariate
distributions.
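Here is a hedged sketch of the kind of field-specific power-law transformation the paper describes: rescaling raw counts with discipline-level parameters so that different fields follow a common distribution. The functional form c -> (c / beta) ** alpha and the parameter values are illustrative assumptions, not the fitted values from the paper.

```python
def powerlaw_rescale(citations, alpha, beta):
    """Field-specific power-law transformation of raw citation counts.
    alpha and beta are hypothetical discipline-level parameters of the kind
    the paper fits; after rescaling, counts from fields with very different
    citation rates are meant to become directly comparable."""
    return [(c / beta) ** alpha for c in citations]

# Hypothetical parameters for two fields with different citation rates.
biology = powerlaw_rescale([120, 40, 8], alpha=0.9, beta=30.0)
maths = powerlaw_rescale([12, 4, 1], alpha=1.1, beta=3.0)
print(biology, maths)
```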
Quality assurance in higher education - meta-evaluation of multi-stage evaluation procedures in Germany
Systematic procedures for quality assurance and improvement through evaluation have been in place in Western Europe since the mid-1980s and in Germany since the mid-1990s. Although studies in Europe and beyond show that multi-stage evaluation procedures, as the main instrument for evaluating teaching and learning in higher education institutions, have proved reliable and have gained acceptance, in Germany (as in other countries) the evaluation of teaching and learning through internal and external evaluations has long come under criticism. The results of our first comprehensive and representative investigation of procedures for the evaluation of teaching and learning in Germany show that former participants in the evaluations (reviewers and those reviewed) are, all in all, satisfied with the multi-stage procedure. They are convinced that the goals of quality assurance and improvement were achieved. Suggestions for improving the procedures target individual aspects, such as the composition of the review panel. Against this background, it makes sense to perform regular quality assessments of the procedures for quality assurance and improvement.
An index to quantify an individual's scientific research output that takes into account the effect of multiple coauthorship
I propose the index ħ ("hbar"), defined as the number of papers of an
individual that have a citation count larger than or equal to the ħ of all
coauthors of each paper, as a useful index to characterize the scientific
output of a researcher that takes into account the effect of multiple
coauthorship. The bar is higher for ħ.
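Because ħ is defined self-consistently over a coauthorship network (a paper counts toward an author's ħ only if its citations reach every coauthor's ħ), it can be computed by fixed-point iteration on a closed dataset. A minimal sketch under that assumption follows; the iteration scheme and the toy data are illustrative, not Hirsch's own procedure.

```python
def hbar_indices(papers, max_iter=50):
    """Self-consistent hbar for a small, closed coauthorship network.
    `papers` is a list of (citations, authors) pairs; a paper counts toward
    an author's hbar only if its citation count is at least the hbar of
    every coauthor on that paper (the author included)."""
    authors = {a for _, auths in papers for a in auths}
    hbar = {a: 0 for a in authors}
    for _ in range(max_iter):
        new = {
            a: sum(1 for cites, auths in papers
                   if a in auths and all(cites >= hbar[b] for b in auths))
            for a in authors
        }
        if new == hbar:  # fixed point reached
            break
        hbar = new
    return hbar

papers = [(30, {"A", "B"}), (12, {"A"}), (5, {"A", "C"}), (2, {"B", "C"})]
print(hbar_indices(papers))
```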
A Rejoinder on Energy versus Impact Indicators
Citation distributions are so skewed that using the mean or any other central
tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy,
Exergy, and Entropy or EEE), the Integrated Impact Indicator (I3) is based on
non-parametric statistics using the (100) percentiles of the distribution.
Observed values can be tested against expected ones; impact can be qualified at
the article level and then aggregated.
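The following is a minimal sketch of a percentile-based I3 computation, assuming each paper's percentile rank is taken against a field- and year-matched reference set and the ranks are then summed. The counting rule used for percentiles here is one common operationalization, not necessarily the authors' exact one.

```python
def percentile_ranks(counts, reference):
    """Percentile rank (0-100) of each citation count against a reference
    set of counts from the same field and publication year."""
    ref = sorted(reference)
    n = len(ref)
    return [100.0 * sum(1 for r in ref if r < c) / n for c in counts]

def i3(counts, reference):
    """Integrated Impact Indicator: the sum of the papers' percentile ranks,
    so both output volume and citation impact enter the score."""
    return sum(percentile_ranks(counts, reference))

field = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]   # hypothetical reference set
print(i3([5, 21, 2], field))                # -> 160.0
```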
The success-index: an alternative approach to the h-index for evaluating an individual's research output
Among the most recent bibliometric indicators for normalizing differences among fields of science in terms of citation behaviour, Kosmulski (J Informetr 5(3):481-485, 2011) proposed the NSP (number of successful papers) index. According to its author, NSP deserves much attention for its great simplicity and immediate meaning, equivalent to those of the h-index, while it has the disadvantage of being prone to manipulation and not very efficient in terms of statistical significance. In the first part of the paper, we introduce the success-index, aimed at reducing the NSP-index's limitations, although requiring more computing effort. Next, we present a detailed analysis of the success-index from the point of view of its operational properties and a comparison with those of the h-index. Particularly interesting is the examination of the success-index's scale of measurement, which is much richer than that of the h-index. This makes the success-index much more versatile for different types of analysis, e.g., (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different size, (4) scientific journals, etc.
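A minimal sketch of the success-index idea: count the papers whose citations reach a paper-specific comparison threshold, such as the mean citations of publications of the same field and year. The threshold values below are hypothetical, and the threshold definition is a simplification of the paper's full proposal.

```python
def success_index(citations, thresholds):
    """success-index: the number of papers whose citation count reaches a
    paper-specific comparison threshold (e.g. the average citations of
    publications of the same field and publication year)."""
    return sum(1 for c, t in zip(citations, thresholds) if c >= t)

cites = [14, 3, 9, 0, 25]
# Hypothetical field/year reference thresholds, one per paper.
thresholds = [10.2, 5.5, 8.0, 4.1, 30.0]
print(success_index(cites, thresholds))  # -> 2
```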
Benchmarking scientific performance by decomposing leadership of Cuban and Latin American institutions in Public Health
Comparative benchmarking with bibliometric indicators can be an aid in decision-making with regard to research management. This study aims to characterize scientific performance in a domain (Public Health) by the institutions of a country (Cuba), taking as reference world output and regional output (other Latin American centers) during the period 2003–2012. A new approach is used here to assess to what extent the leadership of a specific institution can change its citation impact. Cuba was found to have a high level of specialization and scientific leadership that does not match the low international visibility of Cuban institutions. This leading output appears mainly in non-collaborative papers in national journals; publication in English is very scarce and the rate of international collaboration is very low. The Instituto de Medicina Tropical Pedro Kourí stands out, alone, as a national reference. Meanwhile, at the regional level, Latin American institutions deserving mention for their high autonomy in normalized citation include the Universidad de Buenos Aires (ARG), Universidade Federal de Pelotas (BRA), Consejo Nacional de Investigaciones Científicas y Técnicas (ARG), Instituto Oswaldo Cruz (BRA) and the Centro de Pesquisas René Rachou (BRA). We identify a crucial aspect that can give rise to misinterpretations of data: a high share of leadership cannot be considered positive for institutions when it is mainly associated with a high proportion of non-collaborative papers and a very low level of performance. Because leadership might be questionable in some cases, we propose future studies to ensure a better interpretation of findings.
Proposals for evaluating the regularity of a scientist's research output
Evaluating the career of individual scientists according to their scientific output is a common bibliometric problem. Two aspects are classically taken into account: overall productivity and overall diffusion/impact, which can be measured by a plethora of indicators that consider publications and/or citations separately or synthesise these two quantities into a single number (e.g. the h-index). A secondary aspect, which is sometimes mentioned in the rules of competitive examinations for research positions/promotions, is the time regularity of a researcher's scientific output. Although it is sometimes invoked, a clear definition of regularity is still lacking. We define it as the ability to generate an active and stable research output over time, in terms of both publications/quantity and citations/diffusion. The goal of this paper is to introduce three analysis tools for performing qualitative/quantitative evaluations of the regularity of a scientist's output in a simple and organic way. These tools are, respectively, (1) the PY/CY diagram, (2) the publication/citation Ferrers diagram and (3) a simplified procedure for comparing the research output of several scientists according to their publication and citation temporal distributions (Borda's ranking). The description of these tools is supported by several examples.
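Of the three tools, Borda's ranking is the easiest to sketch: rank the scientists separately on each criterion and sum the rank points. The criterion names and scores below are hypothetical, and ties in scores are not given any special treatment in this simplification.

```python
def borda_ranking(scores_by_criterion):
    """Borda's ranking: on each criterion, award rank points (0 = worst,
    n-1 = best), then sum the points across criteria. Input maps
    criterion name -> {scientist: score}, where a higher score is better."""
    points = {}
    for scores in scores_by_criterion.values():
        ordered = sorted(scores, key=scores.get)       # worst score first
        for pts, who in enumerate(ordered):
            points[who] = points.get(who, 0) + pts
    return sorted(points.items(), key=lambda kv: -kv[1])

criteria = {
    "papers_per_year":    {"X": 4.0, "Y": 2.5, "Z": 3.1},
    "citations_per_year": {"X": 30,  "Y": 55,  "Z": 20},
}
print(borda_ranking(criteria))  # -> [('X', 3), ('Y', 2), ('Z', 1)]
```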
- …
