Beyond traditional interviews: Psychometric analysis of asynchronous video interviews for personality and interview performance evaluation using machine learning
With the advent of new technology, traditional job interviews have been supplemented by asynchronous video interviews (AVIs). However, research on the psychometric properties of AVIs is limited. In this study, 710 participants completed a mock AVI, responding to eight personality questions (Extraversion, Conscientiousness). We collected self- and observer reports of personality, interview performance ratings, attractiveness ratings, and AVI meta-information (e.g., professional attire, audio quality). We then automatically extracted words, facial expressions, and voice characteristics from the videos and trained machine learning models to predict the personality traits and interview performance. Our algorithm explained substantially more variance in observer reports of Extraversion and Conscientiousness (average R² = 0.32) and in interview performance (R² = 0.44) than in self-reported Extraversion and Conscientiousness (average R² = 0.12). Consistent with Trait Activation Theory, the explained variance in personality traits increased when participants responded to trait-relevant rather than trait-irrelevant questions. The test-retest reliability of our algorithm was moderately stable over a seven-month period but fell below desired reliability standards for personnel selection. We examined potential sources of bias, including age, gender, and attractiveness, and found some instances of algorithmic bias (e.g., gender differences were often amplified in favor of women).
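
To make the modeling step concrete, the following is a minimal sketch of the kind of pipeline the abstract describes: features extracted per participant, a supervised regressor, and explained variance (R²) as the evaluation metric. The ridge regressor, ten-fold cross-validation, and synthetic data are illustrative assumptions, not the authors' actual method.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-in for features extracted from AVI recordings:
# columns would correspond to word, facial-expression, and voice descriptors.
X = rng.normal(size=(710, 50))
# Synthetic observer-rated Extraversion scores on a 1-5 scale.
y = 3.0 + 0.1 * X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=710)

# Fit a regularized linear model and estimate out-of-sample R^2,
# the explained-variance metric reported in the abstract.
model = Ridge(alpha=1.0)
y_pred = cross_val_predict(model, X, y, cv=10)
print(f"Cross-validated R^2: {r2_score(y, y_pred):.2f}")
```

Test-retest reliability, as examined in the study, would then amount to correlating the scores the trained model assigns to recordings of the same participants taken months apart.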
Can Large Language Models Assess Personality from Asynchronous Video Interviews? A Comprehensive Evaluation of Validity, Reliability, Fairness, and Rating Patterns
The advent of Artificial Intelligence (AI) technologies has precipitated the rise of asynchronous video interviews (AVIs) as an alternative to conventional job interviews. These one-way video interviews are conducted online and can be analyzed with AI algorithms to automate and speed up the selection procedure. In particular, the rapid advancement of Large Language Models (LLMs) has significantly lowered the cost and technical barriers to developing AI systems for automatic personality and interview performance evaluation. However, the generative and task-unspecific nature of LLMs may introduce risks and biases when evaluating humans based on their AVI responses. In this study, we conducted a comprehensive evaluation of the validity, reliability, fairness, and rating patterns of two widely used LLMs, GPT-3.5 and GPT-4, in assessing personality and interview performance from an AVI. We compared the LLMs' personality and interview performance ratings with ratings from a task-specific AI model and from human annotators, using simulated AVI responses from 685 participants. The results show that LLMs can achieve zero-shot validity similar to, or even better than, that of the task-specific AI model when predicting personality traits. The verbal explanations the LLMs generate for their personality predictions can be interpreted in terms of personality items designed according to psychological theories. However, the LLMs also exhibited uneven performance across traits, insufficient test-retest reliability, and certain biases. It is therefore necessary to exercise caution when applying LLMs in human-related scenarios, especially for consequential decisions such as employment.
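
For readers unfamiliar with the zero-shot setup the abstract evaluates, here is a minimal sketch of what such a rating call could look like using the OpenAI Python SDK. The prompt wording, the 1-5 scale, and the single-trait framing are illustrative assumptions, not the study's protocol.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = "..."  # one participant's AVI response transcript goes here

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,  # lower run-to-run variance; relevant to test-retest reliability
    messages=[
        {
            "role": "system",
            "content": (
                "You rate Big Five personality traits from interview "
                "transcripts on a 1-5 scale and briefly justify each rating."
            ),
        },
        {
            "role": "user",
            "content": f"Rate the speaker's Extraversion (1-5):\n\n{transcript}",
        },
    ],
)
print(response.choices[0].message.content)
```

Repeating such calls across participants, traits, and time points is what allows the validity, reliability, and fairness comparisons against task-specific models and human annotators described above.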