
    Indicating interdisciplinarity: A multidimensional framework to characterize Interdisciplinary Knowledge Flow (IKF)

    This study contributes to the recent discussions on indicating interdisciplinarity, i.e., going beyond mere metrics of interdisciplinarity. We propose a multi-dimensional and contextual framework to improve the granularity and usability of the existing methodology for quantifying interdisciplinary knowledge flow (IKF), in which scientific disciplines import knowledge from and export knowledge to other disciplines. To characterize the knowledge exchange between disciplines, we recognize three dimensions under this framework, namely broadness, intensity, and heterogeneity. We show that each dimension covers a different aspect of IKF, especially between disciplines with the largest volume of IKF, and can assist in uncovering different types of interdisciplinarity. We apply this framework in two use cases, one at the level of disciplines and one at the level of journals, to show how it can offer a more holistic and detailed viewpoint on the interdisciplinarity of scientific entities than plain citation counts. We further compare our proposed framework, as an indicating process, with established indicators and discuss how such information tools on interdisciplinarity can assist science policy practices such as performance-based research funding systems and panel-based peer review processes.
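    The abstract does not spell out how the three dimensions are operationalized, so the sketch below is purely illustrative: it assumes a discipline-by-discipline citation matrix and uses plausible stand-ins (broadness as the share of other disciplines cited, intensity as the share of citations going outside the own discipline, heterogeneity as the Shannon entropy over citation partners) rather than the paper's actual indicators.

    ```python
    import numpy as np

    # Toy discipline-by-discipline citation matrix: C[i, j] is the number of
    # citations from publications in discipline i to publications in discipline j.
    C = np.array([[50, 10,  5,  0],
                  [ 8, 60,  2,  4],
                  [ 3,  1, 40, 20],
                  [ 0,  6, 25, 70]], dtype=float)

    def ikf_profile(C, i):
        """Assumed (not the paper's) operationalization of the three IKF
        dimensions for discipline i, based on its cross-disciplinary citations."""
        cross = C[i].copy()
        cross[i] = 0.0                                       # citations to other disciplines only
        broadness = np.count_nonzero(cross) / (len(C) - 1)   # share of other disciplines reached
        intensity = cross.sum() / C[i].sum()                 # share of citations going outside
        p = cross[cross > 0] / cross.sum()
        heterogeneity = float(-(p * np.log(p)).sum())        # Shannon entropy of the partner mix
        return broadness, intensity, heterogeneity

    for i in range(len(C)):
        b, s, h = ikf_profile(C, i)
        print(f"discipline {i}: broadness={b:.2f}, intensity={s:.2f}, heterogeneity={h:.2f}")
    ```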

    Identifying publications in questionable journals in the context of performance-based research funding

    In this article we discuss the five yearly screenings for publications in questionable journals that have been carried out in the context of the performance-based research funding model in Flanders, Belgium. The Flemish funding model was expanded from 2010 onwards with a comprehensive bibliographic database for research output in the social sciences and humanities. Along with an overview of the procedures followed during the screenings for articles in questionable journals submitted for inclusion in this database, we present a bibliographic analysis of the publications identified. First, we show how the yearly number of publications in questionable journals has evolved over the period 2003–2016. Second, we present a disciplinary classification of the identified journals. In the third part of the results section, three authorship characteristics are discussed: multi-authorship, the seniority (or experience level) of authors in general and of the first author in particular, and the relation of the disciplinary scope of the journal (cognitive classification) to the departmental affiliation of the authors (organizational classification). Our results regarding yearly rates of publications in questionable journals indicate that awareness of the risks of questionable journals does not lead to a turn away from open access in general. The number of publications in open access journals rises every year, while the number of publications in questionable journals decreases from 2012 onwards. We find further that both early career and more senior researchers publish in questionable journals. We show that the average proportion of senior authors contributing to publications in questionable journals is somewhat higher than that for publications in open access journals. In addition, this paper yields insight into the extent to which publications in questionable journals pose a threat to the public and political legitimacy of a performance-based research funding system in a Western European region. We include concrete suggestions for those tasked with maintaining bibliographic databases and screening for publications in questionable journals.

    Measuring the match between evaluators and evaluees: Cognitive distances between panel members and research groups at the journal level

    When research groups are evaluated by an expert panel, it is an open question how one can determine the match between panel and research groups. In this paper, we outline two quantitative approaches that determine the cognitive distance between evaluators and evaluees, based on the journals they have published in. We use example data from four research evaluations carried out between 2009 and 2014 at the University of Antwerp. While the barycenter approach is based on a journal map, the similarity-adapted publication vector (SAPV) approach is based on the full journal similarity matrix. Both approaches determine an entity's profile based on the journals in which it has published. Subsequently, we determine the Euclidean distance between the barycenter or SAPV profiles of two entities as an indicator of the cognitive distance between them. Using a bootstrapping approach, we determine confidence intervals for these distances. As such, the present article constitutes a refinement of a previous proposal that operates at the level of Web of Science subject categories.
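    A minimal sketch of the two approaches as described above, using toy data: a barycenter on a 2-D journal map, a similarity-adapted publication vector (SAPV) built from a full journal similarity matrix, Euclidean distances between the resulting profiles, and a bootstrap confidence interval obtained by resampling each entity's publication list. All journal coordinates, similarities, and publication lists are invented for illustration; this is not the authors' code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: 4 journals, a 2-D map position per journal, and a full
    # journal-by-journal similarity matrix (symmetric, 1.0 on the diagonal).
    journal_coords = np.array([[0.0, 0.0], [1.0, 0.2], [0.9, 1.1], [0.1, 1.0]])
    similarity = np.array([[1.0, 0.6, 0.2, 0.3],
                           [0.6, 1.0, 0.5, 0.2],
                           [0.2, 0.5, 1.0, 0.7],
                           [0.3, 0.2, 0.7, 1.0]])

    # Publication lists: each entry is the journal index of one publication.
    panel_member   = np.array([0, 0, 1, 1, 1, 2])
    research_group = np.array([1, 2, 2, 3, 3, 3, 3])

    def profile(pubs, n_journals=4):
        """Relative publication profile over the journals."""
        counts = np.bincount(pubs, minlength=n_journals).astype(float)
        return counts / counts.sum()

    def barycenter_distance(pubs_a, pubs_b):
        """Euclidean distance between weighted average positions on the journal map."""
        return np.linalg.norm(profile(pubs_a) @ journal_coords -
                              profile(pubs_b) @ journal_coords)

    def sapv_distance(pubs_a, pubs_b):
        """Euclidean distance between similarity-adapted publication vectors."""
        return np.linalg.norm(similarity @ profile(pubs_a) -
                              similarity @ profile(pubs_b))

    def bootstrap_ci(distance_fn, pubs_a, pubs_b, n_boot=2000):
        """Resample each publication list with replacement; return a 95% interval."""
        dists = [distance_fn(rng.choice(pubs_a, len(pubs_a)),
                             rng.choice(pubs_b, len(pubs_b)))
                 for _ in range(n_boot)]
        return np.percentile(dists, [2.5, 97.5])

    print("barycenter distance:", barycenter_distance(panel_member, research_group))
    print("  95% CI:", bootstrap_ci(barycenter_distance, panel_member, research_group))
    print("SAPV distance:", sapv_distance(panel_member, research_group))
    print("  95% CI:", bootstrap_ci(sapv_distance, panel_member, research_group))
    ```

    The barycenter step reduces each profile to a single point on the 2-D map, whereas the SAPV step keeps one coordinate per journal; that dimensionality difference is what the following comparison paper examines.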

    Corrigendum to “Is the expertise of evaluation panels congruent with the research interests of the research groups: a quantitative approach based on barycenters” [Journal of Informetrics 9(4) (2015) 704-721]

    In Rahman, Guns, Rousseau, and Engels (2015) we described several approaches to determine the cognitive distance between two units. One of these approaches was based on what we called barycenters in N dimensions. This note corrects this terminology and introduces the more adequate term ‘similarity-adapted publication vectors’.

    Cognitive distances between evaluators and evaluees in research evaluation: a comparison between three informetric methods at the journal and subject category aggregation level

    This article compares six informetric approaches to determine cognitive distances between the publications of panel members (PMs) and those of research groups in discipline-specific research evaluation. We used data collected in the framework of six completed research evaluations from the period 2009–2014 at the University of Antwerp as a test case. We distinguish between two levels of aggregation (Web of Science Subject Categories and journals) and three methods: while the barycenter method (2-dimensional) is based on global maps of science, the similarity-adapted publication vector (SAPV) method and the weighted cosine similarity (WCS) method (both in higher dimensions) use a full similarity matrix. In total, this leads to six different approaches, all of which are based on the publication profiles of research groups and PMs. We use Euclidean distances between barycenters and SAPVs, as well as values of WCS between PMs and research groups, as indicators of cognitive distance. We systematically compare how these six approaches are related. The results show that the level of aggregation has a minor influence on determining cognitive distances, but dimensionality (two versus a high number of dimensions) has a greater influence. The SAPV and WCS methods agree in most cases, at both levels of aggregation, on which PM has the closest cognitive distance to the group to be evaluated, whereas the barycenter approaches often differ. Comparing the results of the methods to the main assessor that was assigned to each research group, we find that the barycenter method usually scores better. However, the barycenter method is less discriminatory and suggests more potential evaluators, whereas SAPV and WCS are more precise.
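    The weighted cosine similarity (WCS) mentioned above can be read as a cosine similarity computed under the inner product induced by the journal similarity matrix. The sketch below assumes that reading, i.e. WCS(x, y) = x'My / sqrt((x'Mx)(y'My)) for publication profiles x and y and similarity matrix M; the matrix and profiles are toy values, not data from the study.

    ```python
    import numpy as np

    # Toy journal similarity matrix M and two raw publication profiles.
    M = np.array([[1.0, 0.6, 0.2, 0.3],
                  [0.6, 1.0, 0.5, 0.2],
                  [0.2, 0.5, 1.0, 0.7],
                  [0.3, 0.2, 0.7, 1.0]])
    x = np.array([2.0, 3.0, 1.0, 0.0])   # e.g. a panel member's journal counts
    y = np.array([0.0, 1.0, 2.0, 4.0])   # e.g. a research group's journal counts

    def wcs(x, y, M):
        """Cosine similarity under the inner product induced by M (assumed reading of WCS)."""
        return (x @ M @ y) / np.sqrt((x @ M @ x) * (y @ M @ y))

    print("plain cosine:", x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
    print("weighted cosine (WCS):", wcs(x, y, M))
    ```

    Under this formulation, two profiles that publish in different but highly similar journals can still obtain a high WCS, which is the point of weighting by M rather than using a plain cosine on the raw profiles.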

    Predatory Open Access journals: A review of past screenings within the Flemish performance based research funding system (2014 – 2018)

    From 2013–2014 onwards, our group (ECOOM - UAntwerpen) has been monitoring Predatory Open Access publication patterns in Flemish (Belgian) SSH scholarship. In light of the Flemish performance-based research funding system, these screening exercises are conducted to assist university review boards with the decision-making processes concerning what is and what is not to be considered a peer-reviewed periodical. Each year, the results of these monitoring exercises are published as a report and presented to the Authoritative Panel. In the introductory part of this essay, we present the general background against which these yearly screenings emerged. Second, we present the sources used and the methods deployed for the yearly screenings. Thereafter, we briefly present the yearly results these exercises yielded. In the third section, we present a more comprehensive analysis of the results. We conclude by reflecting on the past exercises and the findings presented in this report, and discuss some implications for colleagues and scholars manoeuvring through the contemporary journal landscape.

    How to identify peer-reviewed publications: Open-identity labels in scholarly book publishing

    This work was supported by the Ministry of Science and Higher Education in Poland (https://www.gov.pl/nauka/) within the DIALOG Programme, project ‘Research into Excellence Patterns in Science and Art’. Tim Engels and Raf Guns thank the Flemish government for its funding of the Center for R&D Monitoring (ECOOM). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. This article discusses the open-identity label, i.e., the practice of disclosing reviewers’ names in published scholarly books, which is common in Central and Eastern European countries. This study’s objective is to verify whether the open-identity label is a type of peer-review label (like those used in Finland and Flanders, i.e., the Flemish part of Belgium) and, as such, whether it can be used as a delineation criterion in the various systems used to evaluate scholarly publications. We conducted a two-phase sequential explanatory study. In the first phase, interviews with 20 of the 40 largest Polish publishers of scholarly books were conducted to investigate how Polish publishers control peer reviews and whether the open-identity label can be used to identify peer-reviewed books. In the second phase, two questionnaires were used to analyze perceptions of peer review and open-identity labelling among authors (n = 600) and reviewers (n = 875) of books published by these 20 publishers. The integrated results allowed us to verify publishers’ claims concerning their peer-review practices. Our findings reveal that publishers actually control peer reviews by providing assessment criteria to reviewers and sending reviews to authors. Publishers rarely ask for permission to disclose reviewers’ names, but it is obvious to reviewers that this practice of disclosing names is part of peer reviewing. This study also shows that only the names of reviewers who accepted manuscripts for publication are disclosed. Thus, most importantly, our analysis shows that the open-identity label that Polish publishers use is a type of peer-review label like those used in Flanders and Finland, and as such, it can be used to identify peer-reviewed scholarly books.

    Journal article publishing in the social sciences and humanities: a comparison of Web of Science coverage for five European countries

    This study compares publication pattern dynamics in the social sciences and humanities in five European countries. Three are Central and Eastern European countries that share a similar cultural and political heritage (the Czech Republic, Slovakia, and Poland). The other two are Flanders (Belgium) and Norway, representing Western Europe and the Nordic countries, respectively. We analysed 449,409 publications from 2013–2016 and found that, despite persisting differences between the two groups of countries across all disciplines, publication patterns in the Central and Eastern European countries are becoming more similar to those in their Western and Nordic counterparts. Articles from the Central and Eastern European countries are increasingly published in journals indexed in Web of Science, and also in journals with the highest citation impact. There are, however, clear differences between social science and humanities disciplines, which need to be considered in research evaluation and science policy.