73 research outputs found
Book Review: Test Scoring, edited by David Thissen and Howard Wainer. Mahwah, NJ: Lawrence Erlbaum, 2001, 422 pp., $99.95 (hardcover). ISBN 0-8058-3766-3
Generating items during testing: Psychometric issues and models
item generation, test design, IRT models, cognitive models, ability estimation, item uncertainty
Item Response Modeling of Forced-Choice Questionnaires
Multidimensional forced-choice formats can significantly reduce the impact of numerous response biases typically associated with rating scales. However, if scored with classical methodology, these questionnaires produce ipsative data, which lead to distorted scale relationships and make comparisons between individuals problematic. This research demonstrates how item response theory (IRT) modeling may be applied to overcome these problems. A multidimensional IRT model based on Thurstone’s framework for comparative data is introduced, which is suitable for use with any forced-choice questionnaire composed of items fitting the dominance response model, with any number of measured traits and any block size (i.e., pairs, triplets, quads, etc.). Thurstonian IRT models are normal ogive models with structured factor loadings, structured uniquenesses, and structured local dependencies. These models can be straightforwardly estimated using the structural equation modeling (SEM) software Mplus. A number of simulation studies are performed to investigate how latent traits are recovered under various forced-choice designs and to provide guidelines for optimal questionnaire design. An empirical application illustrates how the model may be applied in practice. It is concluded that when the recommended design guidelines are met, scores estimated from forced-choice questionnaires with the proposed methodology reproduce the latent traits well.
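The normal ogive form of such a pairwise comparison can be sketched as follows. This is a minimal illustration of the general shape of a Thurstonian IRT response function, not the paper's exact parameterization: the names `lam_i`, `gamma_ik`, and `psi_i`, and the assumption of uncorrelated uniquenesses, are illustrative assumptions.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF (the 'normal ogive')."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def pair_preference_prob(lam_i, eta_a, lam_k, eta_b, gamma_ik, psi_i, psi_k):
    """Probability of preferring item i (loading lam_i on trait eta_a)
    over item k (loading lam_k on trait eta_b) in a forced-choice pair.

    gamma_ik is the pair threshold; psi_i and psi_k are the item
    uniquenesses (here assumed uncorrelated, so their variances add).
    """
    latent_diff = lam_i * eta_a - lam_k * eta_b - gamma_ik
    return normal_cdf(latent_diff / sqrt(psi_i + psi_k))
```

With equal loadings, equal trait levels, and a zero threshold, the model is indifferent between the two items (probability 0.5); raising one trait level shifts preference toward the item loading on it.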
Measuring Procedural Knowledge in Problem Solving Environments with Item Response Theory
In this paper, a new data-driven model for measuring procedural knowledge is described. The model is based on Item Response Theory. The main idea behind this new model is to establish an analogy between testing and the problem solving environment. For this purpose, we model each problem (or exercise) solution path as a directed graph in which nodes are states of the problem and edges are transitions between states (i.e., the actions performed by the student). We can match this model to testing by viewing each node as a question and each outgoing edge as a choice within that question.
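The state-graph-to-test-item analogy described above can be sketched as follows. This is a minimal sketch under assumed structure: the example states (`start`, `isolate_x`, etc.) and the helper `as_test_items` are hypothetical, not taken from the paper.

```python
# A solution space as a directed graph: each key is a problem state,
# each value lists the transitions (student actions) available there.
solution_graph = {
    "start":     ["isolate_x", "expand"],
    "expand":    ["isolate_x"],
    "isolate_x": ["solved"],
    "solved":    [],  # terminal state: no further actions
}

def as_test_items(graph):
    """Recast each non-terminal state as a test 'question' whose
    response options are the outgoing edges (the student's choices)."""
    return {
        state: {"choices": actions}
        for state, actions in graph.items()
        if actions  # terminal states have no choices, hence no item
    }
```

Under this mapping, a student's traversal of the graph becomes a response pattern over the derived items, which is what allows IRT machinery to be applied to problem solving logs.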