5 research outputs found

    Assessment of the Mate Retention Inventory-Short Form Using Item Response Theory

    The mate retention inventory (MRI) has been a valuable tool in the field of evolutionary psychology for the past 30 years. The goal of the current research is to subject the MRI to rigorous psychometric analysis using item response theory to answer three broad questions. Do the individual items of the MRI fit the scale well? Does the overall function of the MRI match what is predicted? Finally, do men and women respond similarly to the MRI? Using a graded response model, it was found that all but two of the items fit acceptable model patterns. Test information function analysis found that the scale acceptably captures individual differences among participants with a high degree of mate retention, but it captures little information from participants with a low degree of mate retention. Finally, differential item functioning analysis reveals that the MRI is better at assessing male than female participants, indicating that the scale may not be the best indicator of female behavior in a relationship. Overall, we conclude that the MRI is a good scale, especially for assessing male behavior, but it could be improved for assessing female behavior and for individuals lower in overall mate retention behavior. It is suggested that this paper be used as a framework for how modern psychometric techniques can be applied in order to create more robust and valid measures in the field of evolutionary psychology.
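    The test information function result described above can be illustrated with a small sketch: under a graded response model, each item's Fisher information depends on its discrimination and category thresholds, and the test information is the sum over items. The code below uses invented item parameters (not the actual MRI estimates) to show how a scale whose thresholds sit high on the latent trait is precise for high-trait respondents and imprecise for low-trait ones.

    ```python
    import numpy as np

    def grm_item_information(theta, a, bs):
        """Fisher information of one graded-response-model item.

        theta : grid of latent-trait values
        a     : discrimination parameter
        bs    : ordered category thresholds (K-1 values for K categories)
        """
        # cumulative boundary probabilities P*_k = sigmoid(a * (theta - b_k)),
        # with the conventions P*_0 = 1 and P*_K = 0
        star = [np.ones_like(theta)]
        for b in bs:
            star.append(1.0 / (1.0 + np.exp(-a * (theta - b))))
        star.append(np.zeros_like(theta))

        info = np.zeros_like(theta)
        for k in range(len(star) - 1):
            p_k = star[k] - star[k + 1]              # probability of category k
            d_k = a * star[k] * (1 - star[k])        # derivative of P*_k w.r.t. theta
            d_k1 = a * star[k + 1] * (1 - star[k + 1])
            info += (d_k - d_k1) ** 2 / np.maximum(p_k, 1e-12)
        return info

    theta = np.linspace(-3, 3, 121)
    # hypothetical item parameters: thresholds concentrated at high theta,
    # mimicking a scale tuned to high mate-retention respondents
    items = [(1.8, [0.5, 1.2, 2.0]),
             (1.5, [0.8, 1.5, 2.3]),
             (2.0, [0.3, 1.0, 1.9])]
    tif = sum(grm_item_information(theta, a, bs) for a, bs in items)

    # the information curve peaks well above theta = 0, so measurement
    # precision is concentrated among high-trait respondents
    print(theta[np.argmax(tif)])
    ```

    With these assumed parameters the information peak falls on the positive side of the trait continuum, which is the pattern the abstract reports for the MRI.
    
    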

    A Psychometric Assessment of OCB: Clarifying the Distinction Between OCB and CWB and Developing a Revised OCB Measure

    © 2019, Springer Science+Business Media, LLC, part of Springer Nature. This study was performed to (1) assess the appropriateness of using negatively worded items in organizational citizenship behavior (OCB) scales, (2) psychometrically demonstrate the construct distinctness of OCB and counterproductive work behavior (CWB), and (3) report on a revised, short-form OCB scale. Leveraging classical test theory (CTT) and item response theory (IRT), we demonstrate that the negatively worded items from a popular OCB scale (Williams and Anderson 1991) do not measure OCB, but rather a unique construct (CWB). CTT analyses (factor analyses) indicate that the negatively worded items load onto a unique factor when the scale is analyzed on its own and load onto a CWB factor when the scale is analyzed with a CWB scale. Additionally, IRT analyses indicate that the negatively worded items exhibit lower discrimination parameters and higher levels of local independence than the positively worded items, and similar discrimination parameters and levels of local independence as the CWB items. In turn, IRT analyses were used to identify the best items from the OCB scale to create a revised, short-form scale. The short-form scale showed comparable or improved convergent and discriminant validity and internal consistency reliability, as well as similar patterns of psychometric information yielded from IRT analyses, compared to the original scale. In short, the revised measure better aligns with conceptual definitions of OCB, demonstrates acceptable psychometric characteristics, and, given its reduced length, is of more practical value to researchers wishing to assess this construct within different types of research designs (e.g., longitudinal, multi-source).
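    The short-form construction step described in this abstract, selecting the best items by their IRT information, can be sketched as follows. The parameters below are invented for illustration (they are not the published Williams and Anderson item estimates); under a two-parameter logistic model, item information is a²·p·(1−p), so items with low discrimination contribute little and are natural candidates to drop.

    ```python
    import numpy as np

    def item_information_2pl(theta, a, b):
        """Fisher information of a 2PL item: a^2 * p * (1 - p)."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a ** 2 * p * (1 - p)

    theta = np.linspace(-3, 3, 121)

    # hypothetical (discrimination, difficulty) pairs for ten
    # positively worded items of a full-length scale
    params = [(1.9, -0.4), (0.6, 0.1), (1.4, 0.3), (0.8, -1.2), (2.1, 0.0),
              (0.5, 1.5), (1.7, -0.8), (1.1, 0.6), (0.7, -0.3), (1.6, 0.2)]

    # average information across the trait grid for each item
    avg_info = [item_information_2pl(theta, a, b).mean() for a, b in params]

    # retain the five most informative items as the short form
    short_form = sorted(range(len(params)),
                        key=lambda i: avg_info[i], reverse=True)[:5]
    print(sorted(short_form))
    ```

    The retained set is dominated by the high-discrimination items, mirroring the abstract's finding that the weakly discriminating (here, hypothetically, the negatively worded) items are the ones a psychometrically informed short form leaves out.
    
    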

    An Introduction to Psychological Statistics

    This work has been superseded by Introduction to Statistics in the Psychological Sciences, available from https://irl.umsl.edu/oer/25/. - We are constantly bombarded by information, and finding a way to filter that information in an objective way is crucial to surviving this onslaught with your sanity intact. This is what statistics, and the logic we use in it, enables us to do. Through the lens of statistics, we learn to find the signal hidden in the noise when it is there and to know when an apparent trend or pattern is really just randomness. The study of statistics involves math and relies upon calculations of numbers. But it also relies heavily on how the numbers are chosen and how the statistics are interpreted. This work was created as part of the University of Missouri’s Affordable and Open Access Educational Resources Initiative (https://www.umsystem.edu/ums/aa/oer). The contents of this work have been adapted from the following Open Access Resources: Online Statistics Education: A Multimedia Course of Study (http://onlinestatbook.com/). Project Leader: David M. Lane, Rice University. Changes to the original works were made by Dr. Garett C. Foster in the Department of Psychological Sciences to tailor the text to fit the needs of the introductory statistics course for psychology majors at the University of Missouri – St. Louis. Materials from the original sources have been combined, reorganized, and added to by the current author, and any conceptual, mathematical, or typographical errors are the responsibility of the current author.

    Faking is a FACT: Examining the Susceptibility of Intermediate Items to Misrepresentation

    As personality assessment continues to become more common in business settings, the need to understand and address faking and misrepresentation of personality in selection processes becomes extremely important. Recent advances in ideal point item response theory offer a new and more nuanced way to create personality inventories and to investigate the psychology of faking. The present study uses a within-subjects experiment to investigate how intermediate items, specifically those of the FACT taxonomy, behave under honest and faked response conditions. The effects of faking on item characteristics and respondent scores are assessed, and a technique for identifying faked responses is demonstrated.