An Argument-Based Approach to Early Literacy Curriculum-Based Measure Validation Within Multi-Tiered Systems of Support in Reading: Does Instructional Effectiveness Matter?
Early literacy curriculum-based measures (CBMs) are widely used as universal screeners within multi-tiered systems of support in reading (MTSS-R) for (1) evaluating the overall effectiveness of the reading system and (2) assigning students to supplemental and intensive interventions. Evidence supporting CBM validity for these purposes has relied primarily on diagnostic accuracy statistics obtained from evaluations of CBMs’ discriminative (i.e., sensitivity and specificity) and predictive (i.e., likelihood ratios, posttest probabilities) abilities across various lag times and instructional contexts. Within medical diagnostic accuracy studies, the treatment paradox has been identified as a potential source of bias that may systematically alter diagnostic accuracy statistics when there is substantial lag time between administration of the screener and the outcome measure, particularly for conditions that lie on a continuum, such as reading difficulties. However, the impact of the treatment paradox on the diagnostic accuracy statistics of early literacy screeners in the context of MTSS-R is unknown. The current study examines the degree to which the treatment paradox, in the form of reading instruction, alters the diagnostic accuracy of a nonsense word fluency screener across different lag times. Concurrent and predictive validity coefficients and diagnostic accuracy statistics are examined within the context of a randomized controlled trial for meaningful differences across time points, lag times, and levels of instructional effectiveness across two different outcome measures.
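For readers less familiar with the diagnostic accuracy statistics named above, the following is a minimal illustrative sketch of how sensitivity, specificity, likelihood ratios, and posttest probabilities are computed from a 2 × 2 screening table; all counts are hypothetical and are not drawn from the study's data.

```python
# Illustrative sketch only: hypothetical 2x2 screening counts, not study data.
# Rows: screener decision (flagged at risk / not flagged);
# columns: later outcome (reading difficulty / no difficulty).
true_positives = 40    # flagged at risk, later showed reading difficulty
false_negatives = 10   # not flagged, later showed reading difficulty
false_positives = 30   # flagged at risk, no later difficulty
true_negatives = 120   # not flagged, no later difficulty

sensitivity = true_positives / (true_positives + false_negatives)   # P(flagged | difficulty)
specificity = true_negatives / (true_negatives + false_positives)   # P(not flagged | no difficulty)
positive_lr = sensitivity / (1 - specificity)                        # LR+
negative_lr = (1 - sensitivity) / specificity                        # LR-

# Posttest probability of difficulty given a positive screen,
# obtained by converting the pretest (base-rate) probability to odds,
# multiplying by LR+, and converting back to a probability.
total = true_positives + false_negatives + false_positives + true_negatives
pretest_prob = (true_positives + false_negatives) / total
pretest_odds = pretest_prob / (1 - pretest_prob)
posttest_odds = pretest_odds * positive_lr
posttest_prob = posttest_odds / (1 + posttest_odds)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"LR+={positive_lr:.2f}, LR-={negative_lr:.2f}, "
      f"posttest P(difficulty | positive screen)={posttest_prob:.2f}")
```

With these hypothetical counts, sensitivity and specificity are each .80, LR+ is 4.0, and the posttest probability of a reading difficulty following a positive screen is about .57, compared with a pretest (base-rate) probability of .25.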