60 research outputs found

    Optimizing the efficiency: adverse impact trade-off in personnel classification decisions

    Different subgroups display different mean scores on specific performance predictors, which gives rise to the quality-diversity dilemma in personnel selection. Because classification situations also arise in practice, these subgroup effect sizes produce adverse impact in classification decisions as well. The current method for estimating the classification efficiency of a set of predictors, given different subgroups and their characteristics, is extended to also yield the adverse impact ratio. In addition, the method is implemented in an algorithm that derives predictor weights yielding optimal trade-offs between efficiency and diversity.
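    The adverse impact ratio mentioned above is conventionally the selection rate of the minority subgroup divided by that of the majority subgroup; values below 0.80 flag potential adverse impact under the common "four-fifths" rule. A minimal sketch (the numbers are illustrative, not from the paper):

```python
def ai_ratio(minority_selected, minority_applicants,
             majority_selected, majority_applicants):
    """Selection-rate ratio of the minority to the majority subgroup."""
    minority_rate = minority_selected / minority_applicants
    majority_rate = majority_selected / majority_applicants
    return minority_rate / majority_rate

# Hypothetical example: 20 of 100 minority vs 40 of 100 majority selected.
print(ai_ratio(20, 100, 40, 100))  # -> 0.5, below the 0.80 threshold
```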

    Efficiency and adverse impact of general classification decisions

    Classification decisions concern situations in which a battery of predictors is used to assign individuals to a number of different trajectories. De Corte (2000) proposed a method to estimate the classification efficiency when the assignment of individuals to trajectories is based on least-squares criterion estimates. The present paper extends this method to the case where the applicants come from several subpopulations and the estimates are no longer restricted to regression weights. The extension is motivated by the fact that using criterion estimates other than regression-based ones to assign applicants to the different trajectories may result in classification decisions with substantially less adverse impact than classifications in which regression-based criterion estimates govern the allocation process (De Corte, Lievens & Sackett, 2007). An application of the new analytic method indicates that while classifications based on regression-weighted criterion estimates achieve optimal classification efficiency, they also yield substantial adverse impact because many of the most valid predictors, cognitive ability predictors in particular, show large effect sizes favoring the so-called majority applicants. Alternatively, general (non-regression-based) classification decisions open up a wide range of possible trade-offs between efficiency and diversity, in which concessions in classification efficiency are compensated by more advantageous levels of adverse impact. Practitioners may use the proposed method to alleviate the quandary between efficiency and adverse impact in a classification context.
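    The allocation step described above can be pictured as follows: each applicant receives a criterion estimate per trajectory (here simply a weighted predictor composite, a stand-in for the paper's estimates, with made-up weights) and is assigned to the trajectory with the highest estimate. This is an illustrative sketch, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.standard_normal((5, 3))      # 5 applicants x 3 predictors
weights = np.array([[0.6, 0.3, 0.1],      # hypothetical trajectory A weights
                    [0.2, 0.2, 0.6]])     # hypothetical trajectory B weights

estimates = scores @ weights.T            # criterion estimate per trajectory
assignment = estimates.argmax(axis=1)     # 0 = trajectory A, 1 = trajectory B
print(assignment)
```

Varying the weight vectors away from pure regression weights is what generates the alternative classifications whose efficiency and adverse impact are then compared.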

    Robustness, sensitivity and sampling variability of Pareto-optimal selection system solutions to address the quality-diversity trade-off

    When both selection quality and diversity are important goals, a selection system is Pareto-optimal (PO) if its implementation is expected to achieve an optimal balance between the levels attained on the two goals. This study addresses the critical issue of whether PO systems, as computed from calibration conditions, continue to perform well when applied to a wide variety of different validation selection situations. To that end, we introduce two new measures for gauging the achievement of these designs and conduct a large simulation study in which we manipulate 10 factors (related to the selection situation, sensitivity/robustness, and the selection system), culminating in a design with 3,888 cells and 24 selection systems. Results demonstrate that PO systems are superior to other, non-PO systems (including unit-weighted system designs), both on the achievement measures and in more often yielding a better quality/diversity trade-off. The study also identifies a number of conditions that favor the achievement of PO systems in realistic selection situations.
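    Pareto-optimality here means no other system is at least as good on both goals and strictly better on one. A minimal sketch, assuming each system is summarized as a (quality, diversity) pair with higher values better on both axes (the pairs are invented for illustration):

```python
def pareto_front(points):
    """Return the (quality, diversity) pairs not dominated by any other."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

systems = [(0.50, 0.90), (0.60, 0.70), (0.55, 0.60), (0.40, 0.95)]
print(pareto_front(systems))  # (0.55, 0.60) is dominated by (0.60, 0.70)
```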

    Validity and adverse impact potential of predictor composite formation

    Previous research on the validity and adverse impact (AI) of predictor composite formation focused on the merits of regression-based or ad hoc composites. We argue for a broader focus. Ad hoc composites are usually not Pareto-optimal, whereas the regression-based composite represents only one element of the total set of Pareto-optimal composites and can therefore provide only limited information on the potential of predictor composite formation for validity and AI reduction when both validity and AI are of concern. In that case, other Pareto-optimal composites may provide a better benchmark for judging the merits of predictor composite formation. We summarize a method for determining the set of Pareto-optimal composites and apply it to a representative collection of selection predictors. The application shows that the resulting assessment of the AI and validity of predictor composite formation can differ substantially from the one arrived at when only regression-based composites are considered.
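    The validity side of the trade-off follows from standard composite theory: for weights w, predictor validities v, and predictor intercorrelation matrix R, the composite validity is (w'v) / sqrt(w'Rw). A sketch under assumed values (the two predictors and all numbers are hypothetical, not from the paper):

```python
import numpy as np

def composite_validity(w, v, R):
    """Validity of the weighted composite given validities v and correlations R."""
    w, v = np.asarray(w, float), np.asarray(v, float)
    return float(w @ v / np.sqrt(w @ R @ w))

v = [0.50, 0.30]                  # assumed predictor validities
R = np.array([[1.0, 0.3],
              [0.3, 1.0]])        # assumed predictor intercorrelation
print(composite_validity([1, 1], v, R))  # unit-weighted composite, ~0.496
```

Scanning many weight vectors with this formula, alongside the resulting AI ratio, is one way to trace out the Pareto-optimal set the abstract refers to.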