
    The use of bibliometrics for assessing research: possibilities, limitations and adverse effects

    Researchers are used to being evaluated: publications, hiring, tenure and funding decisions are all based on the evaluation of research. Traditionally, this evaluation relied on the judgement of peers, but in the light of limited resources and the increased bureaucratization of science, peer review is increasingly being replaced or complemented by bibliometric methods. Central to the introduction of bibliometrics in research evaluation was the creation of the Science Citation Index (SCI) in the 1960s, a citation database initially developed for the retrieval of scientific information. Embedded in this database was the Impact Factor, first used as a tool for selecting journals to cover in the SCI, which then became a synonym for journal quality and academic prestige. Over the last 10 years, this indicator became powerful enough to influence researchers’ publication patterns, insofar as it became one of the most important criteria for selecting a publication venue. Regardless of its many flaws as a journal metric and its inadequacy as a predictor of citations at the paper level, it became the go-to indicator of research quality and has been used and misused by authors, editors, publishers and research policy makers alike. The h-index, introduced as an indicator combining both output and impact in one simple number, has experienced a similar fate, mainly owing to its simplicity and availability. Despite their massive use, these measures are too simple to capture the complexity and multiple dimensions of research output and impact. This chapter provides an overview of bibliometric methods, from the development of citation indexing as a tool for information retrieval to its application in research evaluation, and discusses their misuse and effects on researchers’ scholarly communication behavior.
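    For reference, the h-index mentioned above is defined as the largest h such that h of an author's papers have each received at least h citations. A minimal sketch in Python, using hypothetical citation counts (not data from the chapter):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # still at least `rank` papers with >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for one author's papers
print(h_index([10, 8, 5, 4, 3, 2]))  # -> 4: four papers each cited at least 4 times
```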

    Identification of research communities in cited and uncited publications using a co-authorship network

    Patterns of co-authorship provide an effective means of probing the structures of research communities. In this paper, we use the CiteSpace social network tool and co-authorship data from the Web of Science to analyse two such types of community. The first type is based on the cited publications of a group of highly productive authors in a particular discipline, and the second on the uncited publications of the same authors. These pairs of communities were generated for three countries, the People’s Republic of China (PRC), the United Kingdom (UK) and the United States of America (USA), and for four disciplines (as denoted by Web of Science subject categories): Chemistry Organic, Engineering Environmental, Economics, and Management. In the case of the UK and the USA, the structures of the cited and uncited communities in each of the four disciplines were markedly different from each other; in the case of the PRC, conversely, the cited and uncited communities had broadly similar structures, characterised by large groups of connected authors. We suggest that this may arise from a greater degree of guest or honorary authorship in the PRC than in the UK or the USA.
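    The paper performs this analysis with CiteSpace on Web of Science data; purely as an illustration of the general idea, the following sketch (hypothetical author lists, and networkx rather than CiteSpace) builds a co-authorship graph and lists its connected components, which serve as a crude proxy for research communities:

```python
from itertools import combinations
import networkx as nx  # assumed available; the paper itself uses CiteSpace

# Hypothetical author lists, one per publication
papers = [
    ["Li", "Wang", "Zhang"],
    ["Wang", "Chen"],
    ["Smith", "Jones"],
    ["Jones", "Brown"],
]

G = nx.Graph()
for authors in papers:
    # Link every pair of co-authors on the same paper with an edge
    G.add_edges_from(combinations(authors, 2))

# Connected components approximate communities; a few large, densely
# connected groups would resemble the PRC pattern reported above
for community in nx.connected_components(G):
    print(sorted(community))
```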