
    Ranking scientists and departments in a consistent manner

    The standard data that we use when computing bibliometric rankings of scientists are just their publication/citation records, i.e., so many papers with 0 citations, so many with 1 citation, so many with 2 citations, etc. The standard data for bibliometric rankings of departments have the same structure. It is therefore tempting (and many authors have given in to this temptation) to use the same method for computing rankings of scientists and rankings of departments. Depending on the method, this can yield quite surprising and unpleasant results. Indeed, with some methods, it may happen that the "best" department contains the "worst" scientists, and only them. This problem will not occur if the rankings satisfy a property called consistency, recently introduced in the literature. In this paper, we explore the consequences of consistency and we characterize two families of consistent rankings.
    Keywords: bibliometrics, ranking of scientists, ranking of departments
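To see how an inconsistent method can make the "best" department out of the "worst" scientists, consider the h-index (used here only as a familiar illustration; the numbers below are invented, not taken from the paper). Each scientist with three 3-citation papers individually beats each scientist with two 4-citation papers, yet merging the records into departments reverses the order:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

a, b = [3, 3, 3], [4, 4]   # scientist a beats scientist b: h=3 > h=2
c, d = [3, 3, 3], [4, 4]   # likewise scientist c beats scientist d
assert h_index(a) > h_index(b)
assert h_index(c) > h_index(d)

# Merge the records into two-person departments: the department of the two
# "worse" scientists now wins (h=4 > h=3), violating consistency.
assert h_index(b + d) > h_index(a + c)
```

A consistent ranking, in the paper's sense, is exactly one for which this kind of reversal under merging cannot happen.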

    The Distribution of the Asymptotic Number of Citations to Sets of Publications by a Researcher or From an Academic Department Are Consistent With a Discrete Lognormal Model

    How to quantify the impact of a researcher's or an institution's body of work is a matter of increasing importance to scientists, funding agencies, and hiring committees. The use of bibliometric indicators, such as the h-index or the Journal Impact Factor, has become widespread despite their known limitations. We argue that most existing bibliometric indicators are inconsistent, biased, and, worst of all, susceptible to manipulation. Here, we pursue a principled approach to the development of an indicator that quantifies the scientific impact of both individual researchers and research institutions, grounded in the functional form of the distribution of the asymptotic number of citations. We validate our approach using the publication records of 1,283 researchers from seven scientific and engineering disciplines and the chemistry departments at the 106 U.S. research institutions classified as "very high research activity". Our approach has three distinct advantages. First, it accurately captures the overall scientific impact of researchers at all career stages, as measured by asymptotic citation counts. Second, unlike other measures, our indicator is resistant to manipulation and rewards publication quality over quantity. Third, our approach captures the time evolution of the scientific impact of research institutions.
    Comment: 20 pages, 11 figures, 3 tables
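The core modeling assumption is that asymptotic citation counts follow a discrete lognormal distribution, i.e., the logarithm of a paper's eventual citation count is roughly normal. A minimal sketch of what that means in practice: generate synthetic counts from such a model and recover the two parameters with a crude log-moment fit. The synthetic data, the parameter values, and the +1 shift for zero-citation papers are all illustrative assumptions, not the authors' estimation procedure.

```python
import math
import random

# Synthetic "citation counts": exponentiate a normal draw and discretize.
random.seed(0)
counts = [int(round(math.exp(random.gauss(2.0, 1.0)))) for _ in range(5000)]

# Crude fit: moments of log(count + 1); the shift handles zero-citation papers.
logs = [math.log(c + 1) for c in counts]
mu_hat = sum(logs) / len(logs)
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in logs) / len(logs))
print(f"mu_hat={mu_hat:.2f}, sigma_hat={sigma_hat:.2f}")
```

The recovered (mu, sigma) pair is the kind of two-parameter summary of a researcher's or department's citation record that the abstract contrasts with single-number indicators like the h-index.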

    Evaluation of Researchers: A Life Cycle Analysis of German Academic Economists

    In this paper we argue that any meaningful bibliometric evaluation of researchers needs to take into account that research productivity follows distinct life cycles. Using an encompassing data set portraying the research behavior of German academic economists, we first show that research productivity crucially depends on career age and vintage. Based on the identified effects, we develop a simple formula that shows how a researcher's performance compares to that of his or her peers. This kind of information may serve as an input for performance-related remuneration and track-record-based allocation of research grants. We then go on to investigate the persistence of individual productivity. The persistence issue is of special importance in the academic labor market because of the irrevocable nature of tenure. Finally, we show how life cycle considerations can be used in evaluations of university departments in order to render the resulting rankings insensitive to the age structure of the evaluated faculties.
    Keywords: research productivity, performance evaluation, life cycles, rankings
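The abstract does not reproduce the paper's formula, but the general idea of age-adjusted benchmarking can be sketched simply: compare a researcher's output to the mean of peers at the same career age, so that a young and a senior economist are each measured against their own cohort. The cohort table, output units, and numbers below are all hypothetical.

```python
from statistics import mean

# Hypothetical benchmark: career age -> outputs of peers at that age
# (e.g., quality-weighted publications; values are invented).
peers_by_age = {
    5: [2.0, 3.0, 4.0],
    10: [5.0, 6.0, 7.0],
}

def relative_performance(output, career_age):
    """Ratio of a researcher's output to the mean of same-age peers.

    > 1.0 means above the cohort average; < 1.0 means below it.
    """
    benchmark = mean(peers_by_age[career_age])
    return output / benchmark

print(relative_performance(4.5, 5))   # 4.5 / 3.0 = 1.5
print(relative_performance(4.5, 10))  # 4.5 / 6.0 = 0.75
```

The same output of 4.5 rates above average at career age 5 but below average at career age 10 — which is exactly why rankings that ignore the life cycle penalize or flatter departments depending on their age structure.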

    Research evaluation and journal quality weights: Much ado about nothing?

    Research evaluations based on quality-weighted publication output are often criticized on account of the employed journal quality weights. This study shows that evaluations of entire research organizations are very robust with respect to the choice of readily available weighting schemes. We document this robustness by applying rather different weighting schemes to otherwise identical rankings. Our unit of analysis consists of German, Austrian and Swiss university departments in business administration and economics.
    Keywords: research evaluation, university management
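The robustness check described here can be made concrete: score each department under two different journal-weighting schemes and compare the induced rankings. A toy sketch in which the departments, journal tiers, and weights are all invented; it shows the kind of comparison the study performs, with the leading department stable even when an extreme weight change swaps lower positions.

```python
# Hypothetical data: department -> publication counts per journal tier.
pubs = {
    "D1": {"top": 4, "mid": 2},
    "D2": {"top": 1, "mid": 8},
    "D3": {"top": 2, "mid": 4},
}
scheme_a = {"top": 3.0, "mid": 1.0}  # moderate premium for top journals
scheme_b = {"top": 5.0, "mid": 1.0}  # steep premium for top journals

def score(dept, weights):
    """Quality-weighted publication output of one department."""
    return sum(weights[j] * n for j, n in pubs[dept].items())

def ranking(weights):
    """Departments ordered best-first under the given weighting scheme."""
    return sorted(pubs, key=lambda d: -score(d, weights))

print(ranking(scheme_a))  # ['D1', 'D2', 'D3']
print(ranking(scheme_b))  # ['D1', 'D3', 'D2'] -- leader stable, tail swaps
```

Aggregating over entire organizations averages out paper-level weighting disagreements, which is the intuition behind the study's robustness finding.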

    A fuzzy-based scoring rule for author ranking

    The measurement of research quality has attracted increasing interest, not only for scientific reasons but also because of the critical problem of ranking researchers when grant funding is scarce. The most commonly used approach is based on the so-called h-index, even though the current literature has debated its pros and cons at length. This paper, after a brief review of the h-index and of alternative models, focuses on the characterization and implementation of a modified scoring rule approach by means of a fuzzy inference system à la Sugeno.
    Keywords: research evaluation, bibliometrics, author ranking, h-index, scoring rules, fuzzy inference system
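A zero-order Sugeno system maps crisp inputs through fuzzy memberships to rule firing strengths, then outputs a weighted average of crisp rule consequents. The sketch below is a generic illustration of that mechanism, not the paper's actual rule base: the membership shapes, rule consequents, and input scales are all invented.

```python
def sugeno_score(papers, citations):
    """Toy zero-order Sugeno scoring of an author (illustrative rules only)."""
    # Fuzzy memberships: simple linear "few"/"many" shoulders (assumed shapes).
    p_many = min(1.0, papers / 50.0)
    p_few = 1.0 - p_many
    c_many = min(1.0, citations / 500.0)
    c_few = 1.0 - c_many

    # Rules: (firing strength via min t-norm, crisp consequent score).
    rules = [
        (min(p_few, c_few), 1.0),    # few papers, few citations  -> low score
        (min(p_many, c_few), 4.0),   # many papers, few citations
        (min(p_few, c_many), 6.0),   # few papers, many citations
        (min(p_many, c_many), 9.0),  # many papers, many citations -> high score
    ]

    # Zero-order Sugeno defuzzification: weighted average of consequents.
    den = sum(w for w, _ in rules)
    return sum(w * z for w, z in rules) / den if den else 0.0

print(sugeno_score(0, 0))     # 1.0 -- only the "few/few" rule fires
print(sugeno_score(50, 500))  # 9.0 -- only the "many/many" rule fires
print(sugeno_score(25, 250))  # 5.0 -- all four rules fire equally
```

Unlike the single-threshold h-index, the score varies smoothly between rule consequents as the inputs change, which is the appeal of the fuzzy approach for ranking.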

    Evidence of Competition in Research Activity among Economic Departments using Spatial Econometric Techniques

    Despite the prevalence of both competitive forces and patterns of collaboration within academic communities, studies on research productivity generally treat universities as independent entities. By exploring the research productivity of all academic economists employed at 81 universities and 17 economic research institutes in Austria, Germany, and German-speaking Switzerland, this study determines whether a research unit’s productivity depends on that of neighboring research units. The significant negative relationship that is found implies competition for priority of discovery among individual researchers, as well as the universities and research institutes that employ them. In addition, the empirical results support the hypotheses that collaboration and the existence of economies of scale increase research productivity.
    Keywords: research productivity, competition, collaboration, negative spatial autocorrelation, geo-referenced point data
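The "significant negative relationship" is the spatial-econometrics notion of negative spatial autocorrelation: neighboring units tend to have dissimilar productivity. The standard summary statistic is Moran's I, sketched below on invented data — four research units on a line whose productivity alternates high/low, the textbook negative-autocorrelation pattern (I near -1; positive values would mean neighbors cluster at similar levels).

```python
def morans_i(x, w):
    """Moran's I spatial autocorrelation of values x under weight matrix w."""
    n = len(x)
    xbar = sum(x) / n
    d = [xi - xbar for xi in x]  # deviations from the mean
    num = sum(w[i][j] * d[i] * d[j] for i in range(n) for j in range(n))
    total_weight = sum(sum(row) for row in w)
    return (n / total_weight) * num / sum(di * di for di in d)

# Four units on a line, alternating high/low productivity (invented numbers).
x = [10, 2, 10, 2]
# Binary adjacency weights: each unit neighbors the next one on the line.
w = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]
print(morans_i(x, w))  # ~ -1: perfectly alternating neighbors
```

In the study, a significantly negative statistic of this kind is what supports the interpretation of competition between neighboring departments, rather than spillover-driven clustering.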