41 research outputs found

    Metrics to evaluate research performance in academic institutions: A critique of ERA 2010 as applied in forestry and the indirect H2 index as a possible alternative

    Full text link
    Excellence in Research for Australia (ERA) is an attempt by the Australian Research Council to rate Australian universities on a 5-point scale within 180 Fields of Research, using metrics and peer evaluation by an evaluation committee. Some of the bibliometric data contributing to this ranking suffer from statistical issues associated with skewed distributions. Other data are standardised year by year, placing undue emphasis on the most recent publications, which may not yet have reliable citation patterns. The bibliometric data offered to the evaluation committees are extensive but lack effective syntheses such as the h-index and its variants. The indirect H2 index is objective, can be computed automatically and efficiently, is resistant to manipulation, and is a good indicator of impact; it could assist the ERA evaluation committees and similar evaluations internationally.
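    The indirect H2 index proposed here is a variant of Hirsch's h-index; its exact construction is given in the paper and not reproduced in this abstract. As a minimal sketch of the base measure the abstract refers to, the following computes the standard h-index from a list of citation counts (the counts are made up):

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that h of the
    papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Six papers with these (made-up) citation counts give h = 3:
# three papers have at least 3 citations each, but no four papers
# have at least 4.
print(h_index([12, 7, 3, 2, 1, 0]))  # -> 3
```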

    Bowling Together: Scientific Collaboration Networks of Demographers at European Population Conferences

    Get PDF
    Studies of the collaborative networks of demographers are relatively scarce. Similar studies in other social sciences provide insight into scholarly trends in those fields and into the characteristics of their successful scientists. Exploiting a unique database of metadata for papers presented at six European Population Conferences, this report explores factors explaining research collaboration among demographers. We find that (1) collaboration among demographers has increased over the past 10 years, although among co-authored papers the extent of collaboration across institutions has remained relatively unchanged over the period; (2) papers in core demographic subfields such as fertility, mortality, migration, and data and methods are more likely to involve multiple authors; and (3) all-female author teams are less likely to co-author with colleagues at different institutions. Potential explanations for these results are discussed alongside comparisons with similar studies of collaboration networks in other related social sciences.
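    As a minimal sketch of the cross-institution measure behind finding (1), the following computes the share of co-authored papers whose authors span more than one institution. The record fields are illustrative assumptions, not the conference database's actual schema:

```python
# Hypothetical paper records; the field names are illustrative only.
papers = [
    {"authors": ["A", "B"], "affiliation": {"A": "Uni X", "B": "Uni X"}},
    {"authors": ["A", "C"], "affiliation": {"A": "Uni X", "C": "Uni Y"}},
    {"authors": ["D"],      "affiliation": {"D": "Uni Z"}},
]

def cross_institution_share(papers):
    """Share of co-authored papers whose authors span >1 institution."""
    coauthored = [p for p in papers if len(p["authors"]) > 1]
    if not coauthored:
        return 0.0
    cross = sum(
        1 for p in coauthored
        if len({p["affiliation"][a] for a in p["authors"]}) > 1
    )
    return cross / len(coauthored)

# Of the two co-authored papers above, one spans two institutions.
print(cross_institution_share(papers))  # -> 0.5
```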

    The use of bibliometrics for assessing research: possibilities, limitations and adverse effects

    Get PDF
    Researchers are used to being evaluated: publications, hiring, tenure and funding decisions are all based on the evaluation of research. Traditionally, this evaluation relied on the judgement of peers, but, in the light of limited resources and the increased bureaucratization of science, peer review is increasingly being replaced or complemented by bibliometric methods. Central to the introduction of bibliometrics into research evaluation was the creation of the Science Citation Index (SCI) in the 1960s, a citation database initially developed for the retrieval of scientific information. Embedded in this database was the Impact Factor, first used as a tool for selecting the journals to cover in the SCI, which then became a synonym for journal quality and academic prestige. Over the last 10 years, this indicator became powerful enough to influence researchers’ publication patterns insofar as it became one of the most important criteria for selecting a publication venue. Regardless of its many flaws as a journal metric and its inadequacy as a predictor of citations at the paper level, it became the go-to indicator of research quality and was used and misused by authors, editors, publishers and research policy makers alike. The h-index, introduced as an indicator of both output and impact combined in one simple number, has experienced a similar fate, mainly owing to its simplicity and availability. Despite their massive use, these measures are too simple to capture the complexity and multiple dimensions of research output and impact. This chapter provides an overview of bibliometric methods, from the development of citation indexing as a tool for information retrieval to its application in research evaluation, and discusses their misuse and their effects on researchers’ scholarly communication behavior.
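    For concreteness, the two-year Journal Impact Factor discussed here is the number of citations a journal receives in year Y to items it published in years Y-1 and Y-2, divided by the number of citable items it published in those two years. A worked sketch with made-up numbers:

```python
def impact_factor(citations_in_year, citable_items):
    """Two-year Journal Impact Factor for year Y: citations received
    in Y to items from Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations_in_year / citable_items

# Made-up example: 120 + 130 citable items in the two preceding
# years, drawing 500 citations this year, give an IF of 2.0.
print(impact_factor(500, 120 + 130))  # -> 2.0
```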

    A multi-disciplinary perspective on emergent and future innovations in peer review [version 2; referees: 2 approved]

    Get PDF
    Peer review of research articles is a core part of our scholarly communication system. In spite of its importance, the status and purpose of peer review are often contested. What is its role in our modern digital research and communications infrastructure? Does it perform to the high standards with which it is generally regarded? Studies of peer review have shown that it is prone to bias and abuse in numerous dimensions, frequently unreliable, and can fail to detect even fraudulent research. With the advent of web technologies, we are now witnessing a phase of innovation and experimentation in our approaches to peer review. These developments prompted us to examine emerging models of peer review from a range of disciplines and venues, and to ask how they might address some of the issues with our current systems of peer review. We examine the functionality of a range of social Web platforms and compare these with the traits underlying a viable peer review system: quality control, quantified performance metrics as engagement incentives, and certification and reputation. Ideally, any new system should demonstrate that it outperforms existing models and reduces their biases as much as possible. We conclude that there is considerable scope for new peer review initiatives to be developed, each with its own potential issues and advantages. We also propose a novel hybrid platform model that could, at least partially, resolve many of the socio-technical issues associated with peer review, and potentially disrupt the entire scholarly communication system. Success for any such development relies on reaching a critical threshold of research community engagement with both the process and the platform, and therefore cannot be achieved without a significant change of incentives in research environments.
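    The traits named above (quality control, quantified engagement metrics, certification and reputation) can be made concrete with a toy data model. This is a purely illustrative sketch under assumed names and a made-up scoring rule, not the hybrid platform the authors propose:

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    reputation: float = 0.0  # certification/reputation trait

@dataclass
class Review:
    reviewer: Reviewer
    recommendation: str         # quality-control outcome, e.g. "revise"
    helpfulness_votes: int = 0  # quantified engagement metric

def credit_review(review, weight=1.0):
    # Illustrative rule (an assumption): reviewers accrue reputation
    # from reviews the community found helpful, which doubles as an
    # engagement incentive.
    review.reviewer.reputation += weight * review.helpfulness_votes

alice = Reviewer("alice")
credit_review(Review(alice, "revise", helpfulness_votes=3))
print(alice.reputation)  # -> 3.0
```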

    The effects of aging of scientists on their publication and citation patterns

    Get PDF
    The average age at which U.S. researchers receive their first grant from the NIH increased from 34.3 in 1970 to 41.7 in 2004. These data raise the crucial question of the effects of aging on the scientific creativity and productivity of researchers. Those who worry about the aging of scientists usually believe that the younger researchers are, the more creative and productive they will be. Using a large population of 13,680 university professors in Quebec, we show that, while scientific productivity rises sharply between ages 28 and 40, it increases at a slower pace between 41 and 50 and stabilizes afterward until retirement for the most active researchers. The average scientific impact per paper decreases linearly until age 50-55, but the average number of papers in highly cited journals and among highly cited papers rises continuously until retirement. Our results show, for the first time, the natural history of the scientific productivity of scientists over their entire career, and bring to light the fact that researchers over 55 still contribute significantly to the scientific community by producing high-impact papers.
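    A minimal sketch of the aggregation behind such productivity-vs-age curves: average yearly paper counts per age bin. The records below are hypothetical stand-ins; the study itself drew on career data for 13,680 Quebec professors:

```python
from collections import defaultdict

# Hypothetical (author_age, papers_that_year) observations.
records = [(30, 2), (30, 4), (45, 3), (45, 5), (60, 4), (60, 4)]

def mean_output_by_age_bin(records, bin_width=5):
    """Average yearly paper count per age bin (bin start -> mean)."""
    totals = defaultdict(lambda: [0, 0])  # bin start -> [sum, count]
    for age, n_papers in records:
        b = (age // bin_width) * bin_width
        totals[b][0] += n_papers
        totals[b][1] += 1
    return {b: s / n for b, (s, n) in sorted(totals.items())}

print(mean_output_by_age_bin(records))  # -> {30: 3.0, 45: 4.0, 60: 4.0}
```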

    When Replication is Prohibited

    No full text
    International audience