
    How do referees integrate evaluation criteria into their overall judgment? Evidence from grant peer review

    Little is known about whether peer reviewers use the same evaluation criteria and how they integrate these criteria into their overall judgment. This study therefore proposed two assessment styles based on theoretical perspectives and normative positions. According to the case-by-case style, referees use many and differing criteria, weight the criteria on a case-by-case basis, and integrate them into their overall judgment in a complex, non-mechanical way. According to the uniform style, referees use a small fraction of the available criteria, apply the same criteria, weight them in the same way, and integrate them based on simple rules (i.e., fast-and-frugal heuristics). These two styles were examined using a unique dataset from a career funding scheme that contained a comparatively large number of evaluation criteria. A heuristic procedure (fast-and-frugal trees) and a complex procedure (logistic regression) were employed to describe how referees integrate the criteria into their overall judgment. The logistic regression predicted the referees' overall assessments with high accuracy and slightly more accurately than the fast-and-frugal trees. Overall, the results support the uniform style but also indicate that it needs to be revised as follows: referees use many criteria and integrate them using complex rules. Most importantly, however, the revised style could describe most, but not all, of the referees' judgments. Future studies should therefore examine how referees' judgments can be characterized in those cases where the uniform style failed, and the evaluation process of referees should be studied in more empirical and theoretical detail.
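    A minimal sketch of the two procedures named above, assuming simulated referee data with a binary overall judgment and three hypothetical criterion ratings; the criterion names, cutoffs, and data are placeholders for illustration, not the study's material:

    # Sketch: compare a compensatory rule (logistic regression) with a
    # fast-and-frugal tree for predicting a binary overall judgment.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 200
    # Hypothetical criterion ratings on a 1-6 scale.
    X = rng.integers(1, 7, size=(n, 3))  # e.g., track record, feasibility, originality
    logits = 0.9 * X[:, 0] + 0.7 * X[:, 1] + 0.5 * X[:, 2] - 7.0
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)  # overall judgment

    # Complex rule: weight and combine all criteria at once.
    logreg = LogisticRegression().fit(X, y)

    # Fast-and-frugal tree: check cues sequentially, with an exit at each node.
    def fft_predict(row, cutoff=4):
        if row[0] < cutoff:           # weak first cue -> reject immediately
            return 0
        if row[1] < cutoff:           # weak second cue -> reject
            return 0
        return int(row[2] >= cutoff)  # last cue decides

    print("logistic regression accuracy:", accuracy_score(y, logreg.predict(X)))
    print("fast-and-frugal tree accuracy:", accuracy_score(y, [fft_predict(r) for r in X]))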

    Criteria for assessing grant applications: A systematic review

    Criteria are an essential component of any procedure for assessing merit. Yet, little is known about the criteria peers use in assessing grant applications. In this systematic review, we therefore identify and synthesize studies that examine grant peer review criteria in an empirical and inductive manner. To facilitate the synthesis, we introduce a framework that classifies what is generally referred to as a 'criterion' into an evaluated entity (i.e. the object of evaluation) and an evaluation criterion (i.e. the dimension along which an entity is evaluated). In total, this synthesis includes 12 studies. Two-thirds of these studies examine criteria in the medical and health sciences, while studies in other fields are scarce. Few studies compare criteria across different fields, and none focus on criteria for interdisciplinary research. We conducted a qualitative content analysis of the 12 studies and thereby identified 15 evaluation criteria and 30 evaluated entities, as well as the relations between them. Based on a network analysis, we propose a conceptualization that groups the identified evaluation criteria and evaluated entities into aims, means, and outcomes. We compare our results to criteria found in studies on research quality and in the guidelines of funding agencies. Since peer review is often approached from a normative perspective, we discuss our findings in relation to two normative positions, the fairness doctrine and the ideal of impartiality. Our findings suggest that future studies on criteria in grant peer review should focus on the applicant, include data from non-Western countries, and examine fields other than the medical and health sciences.
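    A short sketch of the grouping step described above, assuming a toy bipartite network of evaluated entities and evaluation criteria; all node names and edges are placeholders, and greedy modularity communities merely stand in for the review's own network analysis:

    # Sketch: group evaluated entities and evaluation criteria via a bipartite network.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # (evaluated entity, evaluation criterion) pairs -- illustrative only.
    edges = [
        ("applicant", "competence"), ("applicant", "track record"),
        ("project", "originality"), ("project", "feasibility"),
        ("method", "feasibility"), ("outcome", "relevance"), ("outcome", "impact"),
    ]

    G = nx.Graph()
    G.add_nodes_from({e for e, _ in edges}, kind="entity")
    G.add_nodes_from({c for _, c in edges}, kind="criterion")
    G.add_edges_from(edges)

    # Print each detected group of related entities and criteria.
    for i, group in enumerate(greedy_modularity_communities(G), start=1):
        print(f"group {i}: {sorted(group)}")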

    Microsoft Academic is on the verge of becoming a bibliometric superpower

    Last year, the new Microsoft Academic service was launched. Sven E. Hug and Martin P. Brändle look at how it compares with more established competitors such as Google Scholar, Scopus, and Web of Science. While there are reservations about the availability of instructions for novice users, Microsoft Academic has impressive semantic search functionality, broad coverage, structured and rich metadata, and solid citation analysis features. Moreover, accessing raw data is relatively cheap. Given these benefits and its fast pace of development, Microsoft Academic is on the verge of becoming a bibliometric superpower.

    The coverage of Microsoft Academic: Analyzing the publication output of a university

    This is the first detailed study on the coverage of Microsoft Academic (MA). Based on the complete and verified publication list of a university, the coverage of MA was assessed and compared with two benchmark databases, Scopus and Web of Science (WoS), at the level of individual publications. Citation counts were analyzed, and issues related to data retrieval and data quality were examined. A Perl script was written to retrieve metadata from MA based on publication titles; the script is freely available on GitHub. We find that MA covers journal articles, working papers, and conference items to a substantial extent and indexes more document types than the benchmark databases (e.g., working papers, dissertations). MA clearly surpasses Scopus and WoS in covering book-related document types and conference items but falls slightly behind Scopus in journal articles. The coverage of MA is favorable for evaluative bibliometrics in most research fields, including economics/business, computer/information sciences, and mathematics. However, MA shows biases similar to Scopus and WoS with regard to the coverage of the humanities, non-English publications, and open-access publications. Rank correlations of citation counts are high between MA and the benchmark databases. We find that the publication year is correct for 89.5% of all publications and the number of authors is correct for 95.1% of the journal articles. Given the fast and ongoing development of MA, we conclude that MA is on the verge of becoming a bibliometric superpower. However, comprehensive studies on the quality of MA metadata are still lacking.
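    The study's Perl script is not reproduced here; the following Python sketch only illustrates the general idea of a title-based metadata lookup against the Academic Knowledge API, which has since been retired. The endpoint URL, attribute names, and key handling are assumptions/placeholders for illustration:

    # Sketch: retrieve MA metadata for a publication title (service now retired).
    import requests

    EVALUATE_URL = "https://westus.api.cognitive.microsoft.com/academic/v1.0/evaluate"  # assumed
    SUBSCRIPTION_KEY = "YOUR_KEY"  # placeholder

    def lookup_by_title(title: str) -> dict:
        """Return the first matching entity (year, citations, authors) for a title."""
        # The API matched normalized titles: lowercase, punctuation removed.
        normalized = "".join(ch for ch in title.lower() if ch.isalnum() or ch == " ")
        params = {
            "expr": f"Ti='{normalized}'",
            "attributes": "Ti,Y,CC,AA.AuN",  # title, year, citation count, author names
            "count": 1,
        }
        headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
        resp = requests.get(EVALUATE_URL, params=params, headers=headers, timeout=30)
        resp.raise_for_status()
        entities = resp.json().get("entities", [])
        return entities[0] if entities else {}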

    Citation Analysis with Microsoft Academic

    We explore if and how Microsoft Academic (MA) could be used for bibliometric analyses. First, we examine the Academic Knowledge API (AK API), an interface to access MA data, and compare it to Google Scholar (GS). Second, we perform a comparative citation analysis of researchers by normalizing data from MA and Scopus. We find that MA offers structured and rich metadata, which facilitates data retrieval, handling, and processing. In addition, the AK API allows retrieving frequency distributions of citations. We consider these features to be a major advantage of MA over GS. However, we identify four main limitations regarding the available metadata. First, MA does not provide the document type of a publication. Second, the 'fields of study' are dynamic, too specific, and their hierarchies are incoherent. Third, some publications are assigned to incorrect years. Fourth, the metadata of some publications did not include all authors. Nevertheless, we show that an average-based indicator (i.e. the journal normalized citation score; JNCS) as well as a distribution-based indicator (i.e. percentile rank classes; PR classes) can be calculated with relative ease using MA. Hence, normalization of citation counts is feasible with MA. The citation analyses in MA and Scopus yield consistent results: the JNCS and the PR classes are similar in both databases, and, as a consequence, the evaluation of the researchers' publication impact is congruent in MA and Scopus. Given the fast development of MA in the last year, we postulate that it has the potential to be used for full-fledged bibliometric analyses.
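    A minimal sketch of the two indicators named above, computed from toy data; the reference values and class boundaries are simplified assumptions, not the study's exact procedure:

    # Sketch: journal normalized citation score (JNCS) and percentile rank classes.
    from statistics import mean

    # (journal, year, citations) of a researcher's publications -- toy data.
    pubs = [("J1", 2015, 12), ("J1", 2015, 3), ("J2", 2016, 0), ("J2", 2016, 8)]
    # Mean citations of all papers in the same journal and year (reference values).
    journal_year_mean = {("J1", 2015): 6.0, ("J2", 2016): 4.0}

    # JNCS: average ratio of a paper's citations to its journal/year mean.
    jncs = mean(c / journal_year_mean[(j, y)] for j, y, c in pubs)

    def pr_class(percentile: float) -> str:
        """Map a citation percentile to a rank class (boundaries illustrative)."""
        for bound, label in [(50, "bottom 50%"), (75, "50-75%"), (90, "75-90%"), (99, "90-99%")]:
            if percentile <= bound:
                return label
        return "top 1%"

    print(f"JNCS = {jncs:.2f}")  # 1.12 for the toy data
    print(pr_class(83.0))        # '75-90%'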

    Goodbye, Microsoft Academic – hello, open research infrastructure?

    The announcement of the closure of Microsoft Academic later this year may have left the research community largely unmoved, although its demise has significant implications for those working with the service’s substantial database. Here, Aaron Tay, Alberto Martín-Martín, and Sven E. Hug discuss what set Microsoft Academic apart from its competitors and the potential consequences of Microsoft's withdrawal from scholarly metadata for the development of open research infrastructures.

    Four types of research in the humanities: Setting the stage for research quality criteria in the humanities

    This study presents humanities scholars' conceptions of research and subjective notions of quality in the three disciplines of German literature studies, English literature studies, and art history, captured using 21 Repertory Grid interviews. We identified three dimensions that structure the scholars' conceptions of research: quality, time, and success. Further, the results revealed four types of research in the humanities: positively connoted 'traditional' research (characterized as individual, discipline-oriented, and ground-breaking research), positively connoted 'modern' research (cooperative, interdisciplinary, and socially relevant), negatively connoted 'traditional' research (isolated, reproductive, and conservative), and negatively connoted 'modern' research (career-oriented, epigonal, and calculated). In addition, 15 quality criteria for research in the three disciplines were derived from the Repertory Grid interviews.

    Setting the stage for the assessment of research quality in the humanities. Consolidating the results of four empirical studies

    The assessment of research performance in the humanities is an intricate and much-debated topic. Many problems have yet to be solved, foremost among them the question of humanities scholars' acceptance of evaluation tools and procedures. This article presents the results of a project funded by the Rectors' Conference of the Swiss Universities, in which an approach to research evaluation in the humanities was developed that focuses on consensuality. We describe the results of four studies and synthesize them into the limitations and opportunities of research quality assessment in the humanities. The results indicate that while assessment by means of quantitative indicators has clear limitations, assessment by means of quality criteria offers opportunities to evaluate humanities research and make it visible. Indicators that are linked to humanities scholars' notions of quality can be used to support peers in the evaluation process (informed peer review).