11 research outputs found

    A dual-process theory perspective to better understand judgments in assessment centers: The role of initial impressions for dimension ratings and validity

    Insight into assessors' initial impressions has the potential to advance knowledge on how assessors form dimension-based judgments and on possible biases in these ratings. Therefore, this study draws on dual-process theory to build and test a model that integrates assessors' dimension ratings (i.e., a systematic, slow, deliberate processing mode) with their initial impressions (i.e., an intuitive, fast, automatic processing mode). Data collection started with an AC in which assessors provided ratings of assessees, followed by an online survey in which assessees' supervisors rated their job performance. In addition, two other rater pools provided initial impressions of these assessees by evaluating 2-min video clips extracted from their AC performance. Initial impressions from both of these samples were positively related to assessors' dimension ratings, which supports assumptions from dual-process theory and might explain why assessors' dimension ratings are often undifferentiated. Initial impressions did not appear to open the door to biases and stereotypes based on appearance and perceptions of liking. Instead, assessors picked up information that assessees transmitted about their personality (i.e., Conscientiousness and Emotional Stability). Implications for further research on initial impressions and AC dimension ratings are discussed.

    Why Do Situational Interviews Predict Performance? Is it Saying How You Would Behave or Knowing How You Should Behave?

    Purpose: The present study examined two theoretical explanations for why situational interviews predict work-related performance, namely (a) that they are measures of interviewees’ behavioral intentions or (b) that they are measures of interviewees’ ability to correctly decipher situational demands. Design/Methodology/Approach: We tested these explanations with 101 students, who participated in a 2-day selection simulation. Findings: In line with the first explanation, there was considerable similarity between what participants said they would do and their actual behavior in corresponding work-related situations. However, the postulated underlying mechanism was not supported by the data. In line with the second explanation, participants’ ability to correctly decipher situational demands was related to performance in both the interview and the work-related situations. Furthermore, the relationship between the interview and performance in the work-related situations was partially explained by this ability to decipher situational demands. Implications: Assessing interviewees’ ability to identify criteria might be of additional value for making selection decisions, particularly for jobs where it is essential to assess situational demands. Originality/Value: The present study sought to open the ‘black box’ of situational interview validity by examining two explanations for it. The results provided only moderate support for the first explanation; the second explanation, however, was fully supported.

    Toward a Better Understanding of Assessment Centers: A Conceptual Review

    Full text link
    Assessment centers (ACs) are employed for selecting and developing employees and leaders. They are interpersonal at their core because they consist of interactive exercises. Adopting this perspective, this review focuses on the roles of the assessee, the assessor, and the AC design, as well as their interplay in the interpersonal situation of the AC. It addresses which conceptual perspectives have increased our understanding of ACs in this context and, building on this, reviews the relevant empirical findings. On this basis, the review contributes to an empirically driven understanding of the interpersonal nature of ACs and provides directions for practice and future research on this topic, as well as on technology in ACs and cross-cultural applications.

    Resume = Resume? The effects of blockchain, social media, and classical resumes on resume fraud and applicant reactions to resumes

    Resumes are a ubiquitous first hurdle in hiring processes. Applicants' resume fraud behavior and applicants' reactions to selection methods can therefore influence all subsequent selection stages. In addition to classical resumes, professional social media resumes and blockchain resumes are emerging as alternative resume formats. In two online studies, this paper investigates whether differing characteristics of classical, social media, and blockchain resumes affect applicant fraud behavior and reactions (e.g., perceived fairness) to the resume formats. We further investigate whether differing reactions consequently influence the perceived organizational attractiveness of the hiring organization using the respective resume format. In a between-subjects design, Study 1 examined potential applicants' resume fraud behavior and reactions toward the resume formats. Study 2 parallels Study 1 in a sample of actual human resource managers. In both studies, the resume format had negligible effects on expected fraud behavior, with participants expecting only slightly more fraud behavior in social media resumes. In both samples, the novel resume formats triggered less favorable reactions and led to lower organizational attractiveness, calling for caution when considering novel resume formats for hiring. Finally, exploratory findings revealed that the processes through which the novel resume formats negatively affected organizational attractiveness differed between applicants and human resource managers.

    Actions define a character: Assessment centers as behavior‐focused personality measures

    To expand our knowledge of personality assessment, this study connects research and theory related to two common selection methods: assessment centers (ACs) and personality inventories. We examine the validity of personality-based AC ratings within a multi-method framework. Drawing from the self-other knowledge asymmetry model (Vazire, 2010), we propose that AC ratings are suited to capture personality traits that are observable in social interactions, whereas other methods (i.e., self-ratings) are useful for assessing more internal traits. We obtained data from two personality-based ACs, self- and other-rated personality inventories, and supervisor ratings of job performance. Confirmatory factor analyses indicated that personality-based AC ratings reflected the Big Five traits. Consistent with the self-other knowledge asymmetry model, AC ratings of more observable personality traits (Extraversion, Agreeableness, and Intellect/Openness) were correlated with inventory-based measures of these traits. AC ratings demonstrated incremental validity in predicting job performance over inventory-based personality measures for some traits (including Agreeableness and Intellect/Openness), but self-ratings also demonstrated incremental validity over AC ratings (for Conscientiousness). This implies that different personality measures capture unique information, thereby complementing each other. Yet, AC effect sizes were modest, suggesting that running personality-based ACs is advisable only under specific circumstances.
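The incremental validity claim above rests on a hierarchical regression logic: does adding AC ratings to the model improve prediction of job performance beyond inventory scores alone (a positive change in R²)? A minimal sketch of that comparison, using simulated standardized scores (illustrative only, not the study's data or effect sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated standardized scores (hypothetical data for illustration)
inventory = rng.normal(size=n)                     # self-rated trait (e.g., Agreeableness)
ac_rating = 0.4 * inventory + rng.normal(size=n)   # AC rating of the same trait
performance = 0.3 * inventory + 0.3 * ac_rating + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_base = r_squared(inventory[:, None], performance)                      # inventory only
r2_full = r_squared(np.column_stack([inventory, ac_rating]), performance) # + AC rating
print(f"Delta R^2 from adding AC ratings: {r2_full - r2_base:.3f}")
```

A positive delta R² in such a comparison is the basic pattern behind "incremental validity"; the study's actual analyses are of course richer than this two-predictor sketch.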

    Do overall dimension ratings from assessment centres show external construct-related validity?

    There have been repeated calls for an external construct validation approach to advance our understanding of the construct-related validity of assessment centre dimension ratings beyond existing internal construct-related validity findings. Following an external construct validation approach, we examined whether linking assessment centre overall dimension ratings to ratings of the same dimensions from sources external to the assessment centre provides evidence for the construct-related validity of assessment centre ratings. We used data from one laboratory assessment centre sample and two field samples. External ratings of the same dimensions stemmed from assessees, assessees’ supervisors, and customers. Results converged across all three samples and showed that different dimension-same source correlations within the assessment centres were larger than same dimension-different source correlations. Moreover, confirmatory factor analyses revealed source factors but no dimension factors in the latent factor structure of overall dimension ratings from the assessment centre and from external sources. Hence, consistent results across the three samples provide no support for the claim that assessment centre overall dimension ratings and ratings of the same dimensions from other sources can be attributed to dimension factors. This questions arguments that assessment centre overall dimension ratings should have construct-related validity.
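The key comparison reported above — different dimension-same source correlations exceeding same dimension-different source correlations — follows classic multitrait-multimethod (MTMM) logic. A toy sketch of that pattern, simulating a strong source (method) factor and weak dimension (trait) factors; the dimension names and loadings are hypothetical, chosen only to reproduce the reported pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Latent factors: one halo per rating source, one factor per dimension
source_ac = rng.normal(size=n)    # halo within the assessment centre
source_ext = rng.normal(size=n)   # halo within the external source
dim_a = rng.normal(size=n)        # hypothetical dimension, e.g. "communication"
dim_b = rng.normal(size=n)        # hypothetical dimension, e.g. "planning"

# Observed ratings: dominated by source variance (0.8), little dimension variance (0.2)
ac_dim_a = 0.8 * source_ac + 0.2 * dim_a + 0.3 * rng.normal(size=n)
ac_dim_b = 0.8 * source_ac + 0.2 * dim_b + 0.3 * rng.normal(size=n)
ext_dim_a = 0.8 * source_ext + 0.2 * dim_a + 0.3 * rng.normal(size=n)

def r(x, y):
    """Pearson correlation between two score vectors."""
    return np.corrcoef(x, y)[0, 1]

same_source_diff_dim = r(ac_dim_a, ac_dim_b)   # heterotrait-monomethod
diff_source_same_dim = r(ac_dim_a, ext_dim_a)  # monotrait-heteromethod
print(same_source_diff_dim, diff_source_same_dim)
```

When source variance dominates, the heterotrait-monomethod correlation is large and the monotrait-heteromethod correlation is small — exactly the pattern the study interprets as evidence against dimension factors.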