Stability of Item Parameters in Equating Items

Abstract

Thesis (Master's)--University of Washington, 2012.

This thesis investigates item factors that may cause item parameter instability. A primary concern for standardized tests is the accuracy of test score interpretation and the appropriateness of test score use across multiple test forms. Equating is therefore essential for any testing program: it is a statistical process used to adjust scores on test forms and establish comparability between alternate forms. Equating rests on the assumption of item parameter invariance, namely that the statistical properties of the common items remain stable across forms. Content and context effects on item parameter estimates appear most likely to violate this assumption. Context effects, such as item type, position, adjacency to different kinds of items, wording, appearance, and arrangement, as well as content effects, such as instructional and curricular emphasis, have been found to influence item parameter estimates. The data for this study came from the state-level Washington Assessment of Student Learning (WASL) tenth-grade mathematics exams administered from 1999 to 2004. Item characteristics were labeled first; the test forms were then equated and suspect items identified. Two methods, the robust Z statistic and the signed area between item characteristic curves, were used to detect items exhibiting item parameter drift. The thesis presents the results of these analyses, describes patterns in the features of unstable items, and offers suggestions for future item development and for the selection of anchor items.
