
    What is Method Variance and How Can We Cope With It? A Panel Discussion

    A panel of experts describes the nature of, and remedies for, method variance. To help the reader understand the nature of method variance, the authors describe their experiences with it on both the giving and receiving ends of the editorial review process, as well as their interpretation of other reviewers’ comments. They then describe methods of data analysis and research design that have been used for detecting and eliminating the effects of method variance. Most methods have some utility, but none prevent the researcher from making faulty inferences. The authors conclude with suggestions for resolving disputes about method variance.
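    As one illustration of the data-analytic remedies alluded to above (a minimal sketch of the marker-variable approach, not a procedure the panel itself prescribes), a focal correlation can be partialled on a theoretically irrelevant "marker" scale measured with the same method; the variable names below are hypothetical.

        import numpy as np

        def partial_corr(x, y, marker):
            """Correlation between x and y after partialling out a marker scale.

            The marker is assumed to share method variance with x and y but to be
            theoretically unrelated to both; a focal correlation that shrinks toward
            zero after partialling is consistent with method contamination.
            """
            r_xy = np.corrcoef(x, y)[0, 1]
            r_xm = np.corrcoef(x, marker)[0, 1]
            r_ym = np.corrcoef(y, marker)[0, 1]
            return (r_xy - r_xm * r_ym) / np.sqrt((1 - r_xm**2) * (1 - r_ym**2))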

    Nonlinear and Noncompensatory Processes in Performance Evaluation

    Anecdotal experience suggests that such common judgment tasks as performance evaluation typically evoke nonlinear and noncompensatory information processing strategies. Yet, the simple linear model is typically used to model judges’ policies. Two performance evaluation studies using the policy capturing (“paper people”) paradigm are reported here. In the first study, nursing supervisors evaluated profiles of registered nurse and licensed vocational nurse performance. In the second study, faculty members evaluated profiles of nontenured faculty scholarly productivity. Conclusions drawn from these studies were (a) most judges appeared to use nonlinear judgment strategies, (b) for many judges, the nonlinearity was compatible with a noncompensatory judgment strategy, and (c) regression methods are capable of detecting nonlinearity in a series of judgments, at least in the performance evaluation context. Implications of these results for work in performance appraisal are discussed.
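    A minimal sketch of the regression logic described above, under assumed data (the cue profiles and the noncompensatory ratings are simulated, not the article's): fit a linear model to one judge's ratings of cue profiles, then test whether quadratic and cue-interaction terms add predictive power.

        import numpy as np

        rng = np.random.default_rng(0)
        cues = rng.normal(size=(60, 2))      # 60 "paper people", 2 performance cues
        # np.minimum mimics a noncompensatory policy: a low cue cannot be offset by a high one
        ratings = np.minimum(cues[:, 0], cues[:, 1]) + rng.normal(scale=0.3, size=60)

        def r_squared(X, y):
            X = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

        linear = r_squared(cues, ratings)
        nonlinear = r_squared(np.column_stack([cues, cues**2, cues[:, 0] * cues[:, 1]]), ratings)
        print(f"linear R2 = {linear:.2f}; with nonlinear terms R2 = {nonlinear:.2f}")

    A large R-squared increment for the augmented model flags a nonlinear, and possibly noncompensatory, judgment strategy for that judge.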

    Regression and Discriminant Analysis for Analyzing Judgments


    Implications of Empirical Bayes Meta-analysis for Test Validation

    Empirical Bayes meta-analysis provides a useful framework for examining test validation. The fixed-effects case in which ρ has a single value corresponds to the inference that the situational specificity hypothesis can be rejected in a validity generalization study. A Bayesian analysis of such a case provides a simple and powerful test of ρ = 0; such a test has practical implications for significance testing in test validation. The random-effects case in which σ²ρ > 0 provides an explicit method with which to assess the relative importance of local validity studies and previous meta-analyses. Simulated data are used to illustrate both cases. Results of published meta-analyses are used to show that local validation becomes increasingly important as σ²ρ increases. The meaning of the term validity generalization is explored, and the problem of what can be inferred about test transportability in the random-effects case is described.
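    A minimal sketch of the random-effects logic, under standard empirical Bayes assumptions (the numbers are illustrative, not results from the article): the posterior for local validity precision-weights the local estimate against the meta-analytic prior, so the local study matters more as σ²ρ grows.

        def eb_posterior(r_local, v_local, rho_bar, var_rho):
            """Posterior mean and variance of local validity given a meta-analytic prior.

            r_local : correlation observed in the local validity study
            v_local : sampling variance of r_local, roughly (1 - r**2)**2 / (n - 1)
            rho_bar : mean validity from the prior meta-analysis
            var_rho : between-study variance (sigma-squared-rho) from the meta-analysis
            """
            if var_rho == 0:                 # fixed-effects case: the prior is a point mass
                return rho_bar, 0.0
            w_local, w_prior = 1.0 / v_local, 1.0 / var_rho
            mean = (w_local * r_local + w_prior * rho_bar) / (w_local + w_prior)
            return mean, 1.0 / (w_local + w_prior)

        # With large var_rho the posterior tracks the local study;
        # with var_rho near zero it collapses to the meta-analytic mean.
        mean, var = eb_posterior(r_local=0.20, v_local=0.01, rho_bar=0.35, var_rho=0.02)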

    Job Analysis, Personnel Selection, and the ADA

    The ADA will change the way employers screen and hire applicants. The notion of essential functions is central to hiring under the ADA. We explore the meaning of essential functions, including changes in perspective due to the ADA, how to conduct a job analysis that provides information for determining essential functions, and the role of essential functions in selection. We conclude by noting some challenges for job analysis under the ADA and directions for research.

    Treating Uncertainty in Meta-Analytic Results
