Asking the Right Questions: Increasing Fairness and Accuracy of Personality Assessments with Computerised Adaptive Testing

Abstract

Personality assessments are frequently used in real-life applications to predict important outcomes. For such assessments, the forced-choice (FC) response format has been shown to reduce response biases and distortions, and computerised adaptive testing (CAT) has been shown to improve measurement efficiency. This research developed FC CAT methodologies under the framework of the Thurstonian item response theory (TIRT) model. It is structured as a logical sequence of three areas of investigation, where the findings from each area inform key decisions in the next. First, the feasibility of FC CAT is tested empirically. Analysis of large historical samples supports item parameter invariance when an item appears in different FC blocks, with person score estimation remaining very stable despite minor violations. Remedies for minimising the risk of assumption violations are also developed. Second, the design of the FC CAT algorithm is optimised. Current CAT methodologies are reviewed and adapted for TIRT-based FC assessments, and intensive simulation studies condense the design options into a small number of practical recommendations. Third, the practicality and usefulness of FC CAT are examined. An adaptive FC assessment measuring the HEXACO model of personality is developed and trialled empirically. In conclusion, this research mapped out a blueprint for developing FC CATs that use the TIRT model, highlighting the benefits, limitations, and key directions for further research.
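
To make the TIRT-based adaptive testing idea concrete, the sketch below shows what a minimal forced-choice CAT loop could look like: pairwise (block) preference probabilities follow the Thurstonian IRT normal-ogive form, the next FC pair is chosen by maximum Fisher information at the current trait estimate, and traits are re-estimated by MAP after each response. The item bank, parameter values, and function names here are invented for illustration; this is an assumed, simplified reading of the approach, not the assessment or code developed in the thesis.

# Hypothetical sketch of a TIRT-based forced-choice CAT loop; the item bank and
# all parameter values are invented for illustration only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical FC item bank: each block is a pair of items measuring two traits.
n_pairs, n_traits = 200, 6                      # e.g. six HEXACO traits
trait_a = rng.integers(0, n_traits, n_pairs)    # trait measured by the first item
trait_b = rng.integers(0, n_traits, n_pairs)    # trait measured by the second item
lam_a = rng.uniform(0.6, 1.2, n_pairs)          # factor loadings
lam_b = rng.uniform(0.6, 1.2, n_pairs)
gamma = rng.normal(0.0, 0.5, n_pairs)           # pairwise thresholds
psi = np.sqrt(rng.uniform(0.3, 0.7, (n_pairs, 2)).sum(axis=1))  # sqrt of summed uniquenesses

def p_prefer(theta, j):
    # TIRT (normal-ogive) probability of preferring item a over item b in pair j.
    z = (-gamma[j] + lam_a[j] * theta[trait_a[j]] - lam_b[j] * theta[trait_b[j]]) / psi[j]
    return norm.cdf(z)

def pair_info(theta, j):
    # Trace of the Fisher information that pair j contributes at theta.
    p = np.clip(p_prefer(theta, j), 1e-6, 1 - 1e-6)
    z = norm.ppf(p)
    g = np.zeros(n_traits)                      # dz/dtheta for the two traits involved
    g[trait_a[j]] += lam_a[j] / psi[j]
    g[trait_b[j]] -= lam_b[j] / psi[j]
    return norm.pdf(z) ** 2 / (p * (1 - p)) * (g @ g)

def map_estimate(responses):
    # MAP trait estimate with a standard-normal prior over all traits.
    def neg_log_post(theta):
        ll = sum(np.log(np.clip(p_prefer(theta, j) if y else 1 - p_prefer(theta, j),
                                1e-9, None)) for j, y in responses)
        return -(ll - 0.5 * theta @ theta)
    return minimize(neg_log_post, np.zeros(n_traits), method="L-BFGS-B").x

# Adaptive loop: pick the most informative unused pair, record a response, re-estimate.
true_theta = rng.normal(size=n_traits)          # simulated respondent's true traits
theta_hat, responses, used = np.zeros(n_traits), [], set()
for _ in range(20):                             # fixed-length test for the sketch
    j = max((j for j in range(n_pairs) if j not in used),
            key=lambda j: pair_info(theta_hat, j))
    used.add(j)
    y = int(rng.random() < p_prefer(true_theta, j))   # simulated preference response
    responses.append((j, y))
    theta_hat = map_estimate(responses)

print("true :", np.round(true_theta, 2))
print("est  :", np.round(theta_hat, 2))

In practice the selection rule, scoring method, exposure control, and block composition are exactly the design options the simulation studies compare; the loop above only illustrates the general shape of such an algorithm.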
