6 research outputs found

    How to Score Situational Judgment Tests: A Theoretical Approach and Empirical Test

    The purpose of this dissertation is to examine how the method used to score a situational judgment test (SJT) affects the validity of the SJT, both in the presence of other predictors and as a single predictor of task performance. To this end, I compared the summed-score approach to scoring SJTs with item response theory (IRT) and multivariate item response theory (MIRT). Using two samples and three sets of analyses, I found that the method used to score SJTs influences the validity of the test and that IRT and MIRT show promise for increasing SJT validity. However, no individual scoring method produced the highest validity across all sets of analyses. In line with previous research, SJTs added incremental validity in the presence of GMA and personality, and, again, the method used to score the SJT affected the incremental validity. A relative weights analysis performed for each scoring method across all sets of analyses showed that, depending on the scoring method, SJT scores may account for more criterion variance than either GMA or personality. However, it is likely that the results were influenced by range restriction in the incumbent samples.
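The contrast between summed scores and IRT scoring can be sketched in a few lines. This is a minimal illustration with made-up 2PL item parameters (discriminations `a` and difficulties `b` are assumptions, not values from the dissertation): two respondents with identical summed scores can receive different trait estimates once item discrimination is taken into account.

```python
import numpy as np

# Hypothetical item parameters for a 5-item, dichotomously keyed SJT.
# a = discrimination, b = difficulty (illustrative values only).
a = np.array([1.2, 0.8, 1.5, 1.0, 0.6])
b = np.array([-0.5, 0.0, 0.3, -1.0, 0.8])

def summed_score(responses):
    """Classical scoring: add up the keyed responses."""
    return int(responses.sum())

def irt_theta(responses, grid=np.linspace(-4, 4, 801)):
    """2PL maximum-likelihood trait estimate via grid search."""
    # P(keyed response) under the 2PL model, grid points x items
    p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

# Two respondents with the same summed score but different patterns:
r1 = np.array([1, 1, 0, 1, 0])  # misses the most discriminating item (item 3)
r2 = np.array([1, 0, 1, 1, 0])  # answers the discriminating item correctly

print(summed_score(r1), summed_score(r2))  # identical summed scores
print(irt_theta(r1), irt_theta(r2))        # different IRT estimates
```

Because item 3 carries the largest discrimination, the second pattern yields a higher theta estimate even though both summed scores are equal, which is exactly the kind of information a summed score discards.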

    Balancing the Teeter Totter: A Dialectical View of Managing Neurodiverse Employees

    Effective management of autistic employees is germane to the successful integration of individuals on the spectrum into the workplace, but it is a question that management researchers are only beginning to broach. Unlike past research, we examine successful management of autistic employees without applying a priori leadership constructs traditionally found in the literature. Instead, we use a grounded approach to investigate how managers can effectively structure their day-to-day interactions with autistic employees. In doing so, we identify a dialectic between wanting to treat all employees as equal and understanding that different employees have very different needs. Based on this dialectic, we explore the managerial behaviors associated with each pole and propose four management types that lead to different outcomes for employees and organizations. Lastly, we build on aspects of identity negotiation to unpack how managers can balance the dialectic between accommodating different needs and wanting equal treatment.

    Job attitudes: A meta-analytic review and an agenda for future research.

    Given the importance and popularity of employee job attitudes in academia and practice (e.g., annual engagement surveys), it is crucial to explore and summarize previous developments in the literature to identify ways to advance the field. The current review takes a systematic approach to exploring the nomological network, including investigating redundancy, of seven common job attitudes. We present a portfolio of evidence relying on three primary studies and one meta-analytic study (total k = 6,631; total n = 3,309,205). Our results raise concerns about the measurement of select job attitudes. Further, job attitudes are moderately to strongly correlated with each other (most relations landing between ρ = .50 and .69) and have similar patterns of relationships with antecedents, correlates, and outcomes. Yet relative weights analyses illustrate that some attitudes have more validity in predicting key employee outcomes than others, which points to theoretically relevant utility concerns among specific job attitudes. This review contributes by synthesizing the literature and developing a future research agenda based on the current findings. Finally, this work offers a primer on job attitudes, with definitions, applicable theoretical frameworks, scales and items, and empirical relationships between key constructs.
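Relative weights analysis, which both this review and the dissertation above rely on, partitions a model's R² across correlated predictors. A minimal sketch of Johnson's (2000) method follows; the correlation matrix and validities are hypothetical stand-ins for meta-analytic values (e.g., three intercorrelated attitudes predicting one outcome), not figures from the review.

```python
import numpy as np

# Hypothetical meta-analytic correlations among three job attitudes
# and their correlations with one outcome (illustrative values only).
Rxx = np.array([[1.00, 0.60, 0.55],
                [0.60, 1.00, 0.65],
                [0.55, 0.65, 1.00]])
rxy = np.array([0.40, 0.35, 0.45])

def relative_weights(Rxx, rxy):
    """Johnson's relative weights computed from correlation matrices."""
    vals, vecs = np.linalg.eigh(Rxx)
    Lam = vecs @ np.diag(np.sqrt(vals)) @ vecs.T   # Rxx^(1/2)
    beta = np.linalg.solve(Lam, rxy)               # weights on orthogonalized predictors
    return (Lam**2) @ (beta**2)                    # raw relative weights

eps = relative_weights(Rxx, rxy)
r2 = rxy @ np.linalg.solve(Rxx, rxy)               # model R^2
print(eps)            # each predictor's share of explained variance
print(eps / r2)       # rescaled as a proportion of R^2
```

A useful sanity check on any implementation is that the raw weights sum exactly to the model R², which follows because each column of Rxx^(1/2) has unit sum of squares.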

    Same data, different conclusions: Radical dispersion in empirical results when independent analysts operationalize and test the same hypothesis

    In this crowdsourced initiative, independent analysts used the same dataset to test two hypotheses regarding the effects of scientists’ gender and professional status on verbosity during group meetings. Not only the analytic approach but also the operationalizations of key variables were left unconstrained and up to individual analysts. For instance, analysts could choose to operationalize status as job title, institutional ranking, citation counts, or some combination. To maximize transparency regarding the process by which analytic choices are made, the analysts used a platform we developed called DataExplained to justify both preferred and rejected analytic paths in real time. Analyses lacking sufficient detail, reproducible code, or with statistical errors were excluded, resulting in 29 analyses in the final sample. Researchers reported radically different analyses and dispersed empirical outcomes, in a number of cases obtaining significant effects in opposite directions for the same research question. A Boba multiverse analysis demonstrates that decisions about how to operationalize variables explain variability in outcomes above and beyond statistical choices (e.g., covariates). Subjective researcher decisions play a critical role in driving the reported empirical results, underscoring the need for open data, systematic robustness checks, and transparency regarding both analytic paths taken and not taken. Implications for organizations and leaders, whose decision making relies in part on scientific findings, consulting reports, and internal analyses by data scientists, are discussed.
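The core mechanism, one hypothesis dispersing across operationalizations, can be demonstrated on simulated data. In this sketch the dataset, the three candidate "status" measures, and the effect sizes are all fabricated for illustration; only the job-title measure is built to relate to verbosity, so the three standardized slopes diverge even though each tests the "same" hypothesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical meeting data: verbosity plus three candidate
# operationalizations of "professional status" (simulated, illustrative).
n = 200
job_rank  = rng.integers(1, 5, n).astype(float)        # 1=postdoc .. 4=full prof
citations = np.exp(rng.normal(4, 1, n))                # skewed citation counts
inst_rank = rng.integers(1, 100, n).astype(float)      # institution ranking
verbosity = 5 + 0.8 * job_rank + rng.normal(0, 3, n)   # words spoken (scaled)

def slope(x, y):
    """Standardized OLS slope (correlation) of y on a single predictor x."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    return float(xs @ ys / len(x))

# Same hypothesis, three operationalizations, three different answers.
for name, x in [("job title", job_rank),
                ("log citations", np.log(citations)),
                ("institutional rank", inst_rank)]:
    print(f"{name:20s} beta = {slope(x, verbosity):+.2f}")
```

Only the operationalization with a true underlying effect produces a clearly nonzero slope here; in real data, where every measure partially captures status, the dispersion is subtler and harder to adjudicate, which is the paper's point.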
