Getting More From Your Maze: Examining Differences in Distractors

Abstract

The present study examined the technical adequacy of maze-selection tasks constructed in 2 different ways: typical versus novel. We selected distractors for each measure systematically, based on rules related to the content of the passage and the part of speech of the correct choice. Participants included 262 middle school students who were randomly assigned to 1 of the 2 maze formats. Mazes were scored using both correct and correct-minus-incorrect scores. Students also completed 3 criterion reading tests: the Scholastic Reading Inventory, the AIMSweb R-Maze, and a high-stakes state assessment (the Missouri Assessment Program). Alternate-forms reliability was similar across maze formats; with regard to scoring procedure, however, reliability coefficients were consistently higher for correct than for correct-minus-incorrect scores. Validity coefficients were also similar across formats with 1 exception: Correlations with the Missouri Assessment Program were stronger for typical maze scores than for novel maze scores.