Measurement specialists routinely assume that examinee responses to test items are independent of one another. However, previous research has shown that many contemporary tests contain item dependencies, and failing to account for these dependencies leads to misleading estimates of item, test, and ability parameters. In this study, we (a) review methods for detecting local item dependence (LID), (b) discuss the use of testlets to account for LID in context-dependent item sets, (c) apply LID detection methods and testlet-based item calibrations to data from a large-scale, high-stakes admissions test, and (d) evaluate the results with respect to test score reliability and examinee proficiency estimation. The results suggest that the presence of LID affects estimation of examinee proficiency. The practical effects of LID on passage-based tests are discussed, as are issues regarding how to calibrate context-dependent item sets using item response theory.