Limitations in the Use of Achievement Tests as Measures of Educators' Productivity
Test-based accountability rests on the assumption that holding educators accountable for test scores will provide the incentives teachers need to improve student performance. Evidence shows, however, that simple test-based accountability can generate perverse incentives and seriously inflated scores. This paper discusses the logic of achievement tests, issues that arise in using them as proxy indicators of educational quality, and the mechanisms underlying score inflation. It ends with suggestions, some speculative, for improving the incentives teachers face by modifying systems of student assessment and combining them with numerous other measures, many of which are more subjective than test scores.
Research News And Comment: State Comparisons Using NAEP: Large Costs, Disappointing Benefits
Evaluating value-added models for teacher accountability
By Daniel McCaffrey et al. RAND monograph MG-158. Includes bibliographical references.
An Evaluation of the Robustness of the National Assessment of Educational Progress Trend Estimates for Racial-Ethnic Subgroups
Predicting Freshman Grade Point Average From College Admissions Test Scores and State High School Test Scores
The current focus on assessing “college and career readiness” raises an empirical question: How do high school tests compare with college admissions tests in predicting performance in college? We explored this using data from the City University of New York and public colleges in Kentucky. These two systems differ in the choice of college admissions test, the stakes for students on the high school test, and demographics. We predicted freshman grade point average (FGPA) from high school GPA and both college admissions and high school tests in mathematics and English. In both systems, the choice of tests had only trivial effects on the aggregate prediction of FGPA, and adding either test to an equation that already included the other likewise improved prediction only trivially. Although the findings suggest that the choice of test might advantage or disadvantage different students, it had no substantial effect on the over- and underprediction of FGPA for students classified by race-ethnicity or poverty.
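The incremental-validity comparison described in this abstract can be illustrated with a small sketch: fit an OLS regression of FGPA on high school GPA and one test, then add a second, highly correlated test and compare R². All variable names and the synthetic data below are hypothetical illustrations, not the study's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical synthetic data: the two tests are strongly correlated,
# mimicking an admissions test and a state high school test.
hs_gpa = rng.normal(3.0, 0.5, n)
admissions_test = hs_gpa + rng.normal(0, 0.4, n)
state_test = admissions_test + rng.normal(0, 0.2, n)  # nearly redundant
fgpa = 0.8 * hs_gpa + 0.3 * admissions_test + rng.normal(0, 0.4, n)

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

base = r_squared(np.column_stack([hs_gpa, admissions_test]), fgpa)
full = r_squared(np.column_stack([hs_gpa, admissions_test, state_test]), fgpa)
print(f"R^2 with GPA + admissions test: {base:.3f}")
print(f"R^2 after adding state test:    {full:.3f}")
```

Because the second test carries almost no information beyond the first, the R² gain from adding it is negligible, which is the pattern the abstract reports.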