
    Searching for converging research using field to field citations

    We define converging research as the emergence of an interdisciplinary research area from fields that did not show interdisciplinary connections before. This paper presents a process to search for converging research using journal subject categories as a proxy for fields and citations to measure interdisciplinary connections, as well as an application of this search. The search consists of two phases: a quantitative phase in which pairs of citing and cited fields are located that show a significant change in the number of citations, followed by a qualitative phase in which thematic focus is sought in publications associated with the located pairs. Applying this search to Web of Science publications from 1995 to 2005, we located 38 candidate converging pairs, 27 of which showed thematic focus; 20 of those also showed a similar focus in the reciprocal pair.
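    The quantitative phase lends itself to a short sketch. The Python fragment below is a minimal illustration, assuming a hypothetical citation record with `year`, `citing_field`, and `cited_field` attributes; the time windows, growth ratio, and minimum-count threshold are invented for illustration and are not the criteria used in the paper. The qualitative phase (checking for thematic focus) would remain a manual step.

    ```python
    from collections import Counter

    def count_pair_citations(citations, year_from, year_to):
        """Count citations per (citing_field, cited_field) pair in a year window."""
        counts = Counter()
        for c in citations:  # c is assumed to have .year, .citing_field, .cited_field
            if year_from <= c.year <= year_to:
                counts[(c.citing_field, c.cited_field)] += 1
        return counts

    def candidate_converging_pairs(citations, early=(1995, 1999), late=(2001, 2005),
                                   growth=5.0, min_late=100):
        """Quantitative phase: flag field pairs whose citation count rises sharply.

        A pair is a candidate if it has at least `min_late` citations in the
        late window and its count grew by more than `growth` times relative
        to the early window (illustrative thresholds only).
        """
        before = count_pair_citations(citations, *early)
        after = count_pair_citations(citations, *late)
        return [pair for pair, n_late in after.items()
                if n_late >= min_late and n_late > growth * max(before.get(pair, 0), 1)]
    ```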

    On the correlation between bibliometric indicators and peer review: reply to Opthof and Leydesdorff

    Opthof and Leydesdorff (Scientometrics, 2011) reanalyze data reported by Van Raan (Scientometrics 67(3):491–502, 2006) and conclude that there is no significant correlation between average citation scores measured using the CPP/FCSm indicator on the one hand and the quality judgment of peers on the other. We point out that Opthof and Leydesdorff draw their conclusions from a very limited amount of data, and we criticize the statistical methodology they use. Using a larger amount of data and a more appropriate statistical methodology, we do find a significant correlation between the CPP/FCSm indicator and peer judgment.
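    As a minimal illustration of the kind of analysis at issue: the sketch below computes a simplified CPP/FCSm value per research group (mean citations per paper divided by the mean citation rate of the corresponding fields) and correlates it with peer ratings via a Spearman rank correlation. The data are invented, and Spearman is shown only as one reasonable rank-based choice; it is not necessarily the methodology used in the reply.

    ```python
    import statistics
    from scipy.stats import spearmanr

    def cpp_fcsm(papers, field_baselines):
        """Simplified CPP/FCSm for one research group.

        `papers` is a list of (citation_count, field) tuples; `field_baselines`
        maps each field to its mean citation rate (hypothetical stand-ins for
        Web of Science data).
        """
        cpp = statistics.mean(n for n, _ in papers)  # citations per publication
        fcsm = statistics.mean(field_baselines[f] for _, f in papers)  # field mean
        return cpp / fcsm

    # Toy example: three groups with one peer rating each (invented numbers).
    baselines = {"physics": 6.0, "biology": 9.0}
    groups = [
        [(12, "physics"), (3, "physics")],
        [(20, "biology"), (15, "biology")],
        [(2, "physics"), (4, "biology")],
    ]
    peer_ratings = [4, 5, 2]

    indicator = [cpp_fcsm(g, baselines) for g in groups]
    rho, p = spearmanr(indicator, peer_ratings)
    print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
    ```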

    Peer review quality and transparency of the peer-review process in open access and subscription journals

    BACKGROUND: Recent controversies highlighting substandard peer review in Open Access (OA) and traditional (subscription) journals have increased the need for authors, funders, publishers, and institutions to assure the quality of peer review in academic journals. I propose that transparency of the peer-review process may be seen as an indicator of the quality of peer review, and I develop and validate a tool enabling different stakeholders to assess the transparency of the peer-review process.

    METHODS AND FINDINGS: Based on editorial guidelines and best practices, I developed a 14-item tool to rate the transparency of the peer-review process on the basis of journals' websites. In Study 1, a random sample of 231 authors of papers in 92 subscription journals in different fields rated the transparency of the journals that published their work. Authors' ratings of transparency were positively associated with the quality of the peer-review process but unrelated to journals' impact factors. In Study 2, 20 experts on OA publishing assessed the transparency of established (non-OA) journals, OA journals categorized as being published by potential predatory publishers, and journals from the Directory of Open Access Journals (DOAJ). Results show high reliability across items (α = .91) and sufficient reliability across raters, and the ratings differentiated the three types of journals well. In Study 3, academic librarians rated a random sample of 140 DOAJ journals and another 54 journals that had received a hoax paper written by Bohannon to test peer-review quality. Journals with higher transparency ratings were less likely to accept the flawed paper and showed higher impact as measured by the h5 index from Google Scholar.

    CONCLUSIONS: The tool to assess the transparency of the peer-review process at academic journals shows promising reliability and validity. The transparency of the peer-review process can be seen as an indicator of peer-review quality, allowing the tool to be used to predict academic quality in new journals.
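    The item-reliability figure reported above (α = .91) is Cronbach's alpha, which has a simple closed form: α = k/(k−1) · (1 − Σσ²_item / σ²_total). A minimal Python sketch, using an invented ratings matrix (the actual tool has 14 items):

    ```python
    import statistics

    def cronbach_alpha(ratings):
        """Cronbach's alpha for a ratings matrix (rows = journals rated,
        columns = checklist items). Measures internal consistency of the items."""
        k = len(ratings[0])  # number of items
        item_vars = [statistics.variance(col) for col in zip(*ratings)]
        total_var = statistics.variance([sum(row) for row in ratings])
        return k / (k - 1) * (1 - sum(item_vars) / total_var)

    # Toy matrix: four journals rated on three items (invented values).
    ratings = [
        [1, 1, 0],
        [3, 2, 3],
        [2, 2, 2],
        [0, 1, 1],
    ]
    print(f"alpha = {cronbach_alpha(ratings):.2f}")  # -> 0.89 for this toy data
    ```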

    The use of bibliometrics for assessing research: possibilities, limitations and adverse effects

    Researchers are used to being evaluated: publications, hiring, tenure, and funding decisions are all based on the evaluation of research. Traditionally, this evaluation relied on the judgement of peers, but, in the light of limited resources and the increased bureaucratization of science, peer review is increasingly being replaced or complemented by bibliometric methods. Central to the introduction of bibliometrics in research evaluation was the creation of the Science Citation Index (SCI) in the 1960s, a citation database initially developed for the retrieval of scientific information. Embedded in this database was the Impact Factor, first used as a tool for selecting the journals to cover in the SCI, which then became a synonym for journal quality and academic prestige. Over the last 10 years, this indicator has become powerful enough to influence researchers’ publication patterns insofar as it has become one of the most important criteria for selecting a publication venue. Regardless of its many flaws as a journal metric and its inadequacy as a predictor of citations at the paper level, it became the go-to indicator of research quality and was used and misused by authors, editors, publishers, and research policy makers alike. The h-index, introduced as an indicator of both output and impact combined in one simple number, has experienced a similar fate, mainly owing to its simplicity and availability. Despite their massive use, these measures are too simple to capture the complexity and multiple dimensions of research output and impact. This chapter provides an overview of bibliometric methods, from the development of citation indexing as a tool for information retrieval to its application in research evaluation, and discusses their misuse and their effects on researchers’ scholarly communication behavior.
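    The h-index mentioned above has a simple operational definition: an author has index h if h of their papers have at least h citations each. A minimal sketch, for illustration only:

    ```python
    def h_index(citation_counts):
        """Largest h such that at least h papers have h or more citations each
        (Hirsch's definition, combining output and impact in one number)."""
        h = 0
        for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five papers with these citation counts give h = 3:
    # three papers have at least three citations each.
    print(h_index([10, 7, 3, 1, 0]))  # -> 3
    ```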