12 research outputs found

    The selection of talent as a group process. A literature review on the social dynamics of decision making in grant panels

    Talent selection within science is increasingly performed by panels, e.g. by reviewing grant or fellowship applications. Many studies from the fields of sociology of science and science policy studies have been conducted to identify biases and predict outcomes of these processes, mainly focusing on characteristics of applicants, applications, and reviewers. However, as panel reviewing entails social interaction, group dynamics influence these processes. By adding insights from social psychology to current knowledge on panel reviews, we are better able to identify factors affecting talent selection and funding decisions in grant panels. By opening up this so-called black box, we aim to contribute to a better understanding of the dynamics of panel decision making. This knowledge is also relevant for the various stakeholders involved in grant allocation, such as applicants, reviewers, and policymakers, as it can be used to improve the transparency, fairness, and legitimation of talent selection processes.

Introduction

The academic market in both the United States and most European countries is a buyer's market, and has been so for quite some years, given the strong preferences of many new PhDs and postdocs for a job at the university. Researchers lower in the academic hierarchy increasingly hold temporary positions without prospect of permanent employment. A rationale behind project funding is that it strengthens competition between researchers and therefore promotes the quality of science: only the best succeed. The ability to acquire research grants is turning into a prominent criterion in processes of academic recruitment and performance evaluation (De Jonge Akademie 2010; Van Arensbergen, Hessels, and van der Meulen 2013). Career grants are not only a way to directly distribute financial resources amongst young researchers to conduct research; they also indirectly provide improved career opportunities, as grants are considered significant indicators of excellence or talent (Van Arensbergen, Van der Weijden, and Van den Besselaar 2014). This line of reasoning is based on the assumption that grants are awarded to the best applicants. Although this obviously is what funding agencies claim, several recent studies call this assumption into question.

The main method used to make these allocation decisions is a combination of individual peer review and panel review (peers and other experts reviewing in a group). Peer review has traditionally been considered the legitimate method to evaluate the scientific quality of scholarly contributions and is therefore deeply embedded in research culture. Peers are considered best suited to assess scholarly quality and to distinguish inferior from meritorious research by means of critical appraisal (Hartmann and Neidhardt 1990). Although peer review has been extensively studied, attempts to predict the outcomes of funding allocation processes show that it still largely is a black box. Reasons for installing panels mainly have to do with the size and breadth of the set of applications and with the weight of funding decisions. First, a panel of reviewers has more resources to draw on than one or two individual reviewers (information integration). Second, decisions made by a panel of experts (through consensus building) are considered more acceptable than individual decisions (Olbrecht and Bornmann 2010).

Focus of this review

The present literature study focuses on decision making as performed by panels, including individual peer review.
Most of the studies on peer review stem from sociology of science (SoS) and science policy studies (SPS). They mainly deal with how review outcomes are affected by the performance and characteristics of individual applicants, and by characteristics of reviewers. These studies are predominantly based on analyses of written documentation (e.g. submitted proposals, review reports, and reports of meetings), interviews (e.g. with reviewers and applicants), and bibliometric data. However, many allocation (and appointment) decisions are made in panels, which are not covered very well by the peer review literature. Panel review is not the same as peer review, as panelists are often not peers. Furthermore, it is embedded within group interaction and is therefore to be characterized as a social activity. For this reason, we combine SoS and SPS literature on peer review with literature on group decision making from social psychology (SP). The first part mainly focuses on how peer review affects review outcomes, whereas the latter part focuses on actual review processes. SP research predominantly deals with the central mechanisms involved in decision making processes and the context in which these are carried out. To a large extent, this literature is based on experimental research.

Methodology

The literature used in this study mainly comes from Web of Knowledge searches, supplemented with Google Scholar hits. We searched for literature using as main keywords 'peer review', 'grant allocation', 'group decision making', 'group interaction', and 'intragroup behaviour'. The searches resulted in a broad scope of literature in terms of type of research (e.g. interview studies, bibliometric analyses, historical case study analyses, (laboratory) experiments) and in terms of potential factors influencing panel reviews. Results were refined based on six exploratory observations of panel meetings at three Dutch universities and a national research council in 2010 and 2012, in which grant applications were reviewed and ranked in preselection and selection phases. We observed several issues related to group dynamics that seemed to influence panel processes. For example, panelists varied in their motivation and contribution to panel deliberation, and similar types of information (e.g. anecdotal or shared) were not always considered equally important. Factors identified in our observations and included in our SP literature review are social status and identity, group norms and cohesiveness, information distribution, motivation and interests, and procedural factors.

Next, we describe how characteristics of the people or proposals under review affect review outcomes. For this, we primarily draw upon SoS and SPS literature. Second, review processes as a social interaction between various panelists are explored in more detail. Characteristics of panels and dynamics inherent to group decision making are explained predominantly using SP literature. Finally, also based on SP literature, we look at influences of external factors related to the organizational context in which the review process is carried out. We provide a qualitative overview of the effects, but do not aim at estimating the expected effect sizes, as that would require a meta-analysis for each of the individual factors. Doing such meta-analyses would, however, be a useful next step.
Panel review of grant applications

Explicit quality-related criteria

Because funding organizations claim to fund only excellent research and the best researchers, one expects review outcomes to be determined by quality-related criteria such as past performance. More recently, the focus has shifted from past performance to post-performance analyses: do the selected applicants indeed prove to be better in the years after having received grants? Here, similar patterns of results are emerging: granted applicants have on average a better post-performance than rejected applicants (Bornmann, Wallon, et al.). A relevant issue here is whether choice of field is a 'non-quality-related factor', as there are more promising and less promising research topics and fields, and the selection of topics may be seen as a quality of the researcher at stake. An important variable that should be taken into account is the research field of reviewers: in case of a disciplinary match between applicant and reviewer, review scores were found to be significantly higher than when there is no match.

Status also plays an eminent role in evaluation processes. This relates to the academic status of applicants and the status of their department, university, or institute. Applicants with a higher academic and/or departmental status have better chances of securing grants than applicants with relatively lower status. A highly contested variable in the peer review literature is gender. Related to funding decisions, it was demonstrated that women receive relatively fewer grants than men. Decreasing gender disparities can be the effect of changed (council) policies, as suggested by several studies.

As we already saw with regard to research field and affiliation, review outcomes do not solely depend on characteristics of the candidates under review. Evaluation outcomes are determined by the interaction between characteristics of reviewers and the reviewed. With regard to panel review, there is another type of interaction significantly affecting review outcomes: interaction between panelists. Therefore, we will now take a closer look at panels and describe review processes as social interaction between panelists. We will describe various factors inherent to social interaction that influence decision making processes and are subsequently expected to affect review outcomes. In the following paragraphs, we will identify factors that need to be studied in more detail to determine how they affect the outcomes of grant allocation processes.

Peer review as social interaction

As processes of grant allocation generally involve quality assessment by panels, they can be considered to be social, emotional, and interactional processes (Lamont 2009). Panel decisions are the outcome of and are influenced by group interaction. Differences in, for example, the status and expertise of panelists can play an important role in this type of interaction. Furthermore, group interaction can make group members motivate each other and increase the amount of information that is collected and discussed, compared with individual decision making. On the other hand, group interaction can result in poorer decision making, because shared responsibility creates a situation in which everyone withdraws and no one really puts in effort, better known as social loafing (Levi 2007). It can also encourage members to focus primarily on reaching consensus, so that they are not really motivated to detect possible weaknesses in their decisions and to realistically appraise alternative decisions.
This social psychological phenomenon is better known as groupthink (Janis 1982). We will therefore look in more detail at panel review as a social interaction process. We will describe how specific characteristics related to the social nature of this process can affect panel decisions. Based on our observations, we will successively focus on the composition of the panel, group norms and cohesiveness, and information distribution, and finally we will look at the motivation and interests of panelists.

Panel composition

Several studies showed that the outcomes of reviewing decisions to a great extent depend on who the reviewers are and how the panel is composed (e.g. Lamont 2009; Van Arensbergen, Van der Weijden, and Van den Besselaar 2014). In general, scholars are asked to serve on grant panels based on their disciplinary expertise and research experience. Applicants may often attach to their proposal the names of reviewers they definitely do not want to be part of the panel. In some cases, applicants also have the opportunity to nominate people for panel membership. Reviewers nominated by applicants are found to systematically give higher scores to all proposals than reviewers who are appointed by the board or otherwise.

Another aspect of panel composition is the difference in expertise represented by panelists. The set of applications generally covers a broad range of topics, sometimes even from various disciplines. Consequently, experts from different disciplines have to be included in the panel to enable a fair and comprehensive evaluation of all proposals. But also within a disciplinary panel, people can be considered experts on different topics or research areas. It is important to pay attention to panel composition, as the composition sets the potential for interaction and conflict among its members. Overlap in competences is associated both with better cooperation and with open conflict between scientific experts.

Furthermore, within panels there may be differences in status. By this we mean the status as perceived and implicitly assigned to a panelist by the other panelists. Some people might be considered hotshots with a very good reputation and hence have a high status; others might be seen as newcomers or relatively insignificant in their field. These perceived status differences cause an unequal power distribution amongst group members, which subsequently disturbs communication within groups. In general, high-status members talk more and receive more attention from other members. Low-status members generally talk less or do not talk at all when their opinions deviate from those of high-status members. This can harm decision making processes, because not all true opinions are expressed and high-status people are not often contradicted. Communication plays an important role in processes of group decision making. For a group to perform well, it is desirable that group members trust each other and that there is open communication between them. This can be facilitated by good social relations within the group (Levi 2007). Finally, panel composition affects the way individual panelists identify themselves. Individuals do not have one fixed identity; depending on the social context they are in, different identities can be addressed. The interaction between characteristics of individuals and of the specific situation determines which particular identity is activated.
This process of social identity formation comprises two important activities, namely social comparison and self-categorization in terms of membership in particular groups.

Group norms and cohesiveness

Distribution of information

The main advantage of panel review compared with individual peer review is that more knowledge is available, as all individuals' knowledge is pooled together. During panel meetings, reviewers share their expertise and inform each other about their assessments. Generally, information can be classified in three different ways: shared versus unshared, preference-consistent versus preference-inconsistent, and instrumental versus noninstrumental. In terms of shared and unshared information, the general knowledge most reviewers have about applications can be considered shared information, whereas any additional knowledge someone has based on his or her specific expertise can be considered unshared information. An experiment using the hidden profile task showed that groups in which all information is shared make better decisions than groups in which some group members have unique information. Information distribution is also affected by the initial opinions or preferences of panelists. In panels characterized by divergent opinions, more information is put into the deliberation than in panels in which there is high agreement to start with. Furthermore, heterogeneity in opinions stimulates group members to spend more time on (information-steered) deliberation and results in better group decision outcomes.

Finally, we distinguish instrumental and noninstrumental information. Information that is relevant for and ought to impact decisions is called instrumental, whereas irrelevant information that should not affect decisions is called noninstrumental. According to Bastardi and Shafir (2000), people often give noninstrumental information instrumental value without being aware of this. To base their final selection decisions on thorough evaluations, review panels collect as much information as possible, both instrumental and noninstrumental. Newly obtained noninstrumental information is then also used to make decisions, as 'the very act of pursuing information may lead people to endow it with instrumental value' (p. 217). As the mere act of obtaining it adds weight to new information, regardless of its relevance, information that is known from the start might receive less attention than new information (Bastardi and Shafir 1998). This implies that, for example, anecdotal information about applicants mentioned by panelists rather coincidentally can influence review outcomes.

Motivation and interests of panelists

    Design and Development of Mediated Participation for Environmental Governance Transformation: Experiences with Community Art and Visual Problem Appraisal

    For environmental governance to be more effective and transformative, it needs to enhance the presence of experimental and innovative approaches for participation. This enhancement requires a transformation of environmental governance, as too often the (public) participation process is set up as a formal obligation in the development of a proposed intervention. In search of alternatives, and in support of this transformation, this article elaborates on spaces where participatory and deliberative governance processes have been deployed. Experiences with two mediated participation methodologies – community art and visual problem appraisal – allow a demonstration of their potential, relevance and attractiveness. Additionally, the article analyzes the challenges that result from the nature of these arts-based methodologies, from the confrontational aspects of voices overlooked in conventional approaches, and from the need to rethink professionals’ competences. Considering current environmental urgencies, mediated participation and social imaginaries still demonstrate capacities to open new avenues for action and reflection.

    Nested Sequential Monte Carlo Methods

    We propose nested sequential Monte Carlo (NSMC), a methodology to sample from sequences of probability distributions, even where the random variables are high-dimensional. NSMC generalises the SMC framework by requiring only approximate, properly weighted, samples from the SMC proposal distribution, while still resulting in a correct SMC algorithm. Furthermore, NSMC can in itself be used to produce such properly weighted samples. Consequently, one NSMC sampler can be used to construct an efficient high-dimensional proposal distribution for another NSMC sampler, and this nesting of the algorithm can be done to an arbitrary degree. This allows us to consider complex and high-dimensional models using SMC. We show results that motivate the efficacy of our approach on several filtering problems with dimensions on the order of 100 to 1,000.
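    The abstract describes NSMC only at a high level. As a rough illustration of one key ingredient – an inner sampler that produces a properly weighted draw whose normalising-constant estimate feeds into the outer SMC weights – here is a minimal toy sketch in Python/NumPy. The 1-D linear-Gaussian model, the parameter values, and the simple importance-sampling inner step are assumptions made purely for illustration; this is not the authors' algorithm, which nests full SMC samplers and targets high-dimensional models.

```python
# Minimal sketch of the nesting idea on a toy 1-D state-space model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian model: x_t = A x_{t-1} + v_t (v ~ N(0, Q)), y_t = x_t + e_t (e ~ N(0, R))
A, Q, R, T = 0.9, 1.0, 0.5, 50
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = A * x[t - 1] + rng.normal(scale=np.sqrt(Q))
    y[t] = x[t] + rng.normal(scale=np.sqrt(R))

def log_g(y_t, x_t):
    """Observation log-density log g(y_t | x_t)."""
    return -0.5 * ((y_t - x_t) ** 2 / R + np.log(2 * np.pi * R))

def inner_sampler(x_prev, y_t, M=20):
    """Inner importance sampler: draw M candidates from the transition, reweight by g,
    and return one candidate chosen proportionally to the weights together with an
    unbiased estimate of the normalising constant of the locally optimal proposal
    (proportional to f(x_t | x_prev) g(y_t | x_t)). The pair is 'properly weighted'."""
    cand = A * x_prev + rng.normal(scale=np.sqrt(Q), size=M)
    logw = log_g(y_t, cand)
    shifted = np.exp(logw - logw.max())
    z_hat = shifted.mean() * np.exp(logw.max())
    pick = rng.choice(M, p=shifted / shifted.sum())
    return cand[pick], z_hat

def nested_smc_filter(y, N=200):
    """Outer SMC (particle filter): each particle is propagated by the inner sampler,
    and the inner normalising-constant estimate becomes the outer particle weight."""
    particles = rng.normal(size=N)  # x_0 ~ N(0, 1)
    filtering_means = []
    for t in range(1, len(y)):
        new_particles = np.empty(N)
        weights = np.empty(N)
        for i in range(N):
            new_particles[i], weights[i] = inner_sampler(particles[i], y[t])
        weights /= weights.sum()
        filtering_means.append(np.sum(weights * new_particles))
        # Multinomial resampling keeps the particle set unweighted for the next step.
        particles = new_particles[rng.choice(N, size=N, p=weights)]
    return np.array(filtering_means)

estimates = nested_smc_filter(y)
rmse = np.sqrt(np.mean((estimates - x[1:]) ** 2))
print(f"RMSE of filtering means vs. latent states: {rmse:.3f}")
```

    In the paper's setting, the inner sampler would itself be an SMC algorithm (for example, sweeping over the dimensions of a high-dimensional state), and the same nesting can be repeated to an arbitrary depth.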