
    Examining Spillover Effects from Teach For America Corps Members in Miami-Dade County Public Schools

    Despite a large body of evidence documenting the effectiveness of Teach For America (TFA) corps members at raising the math test scores of their students, little is known about the program's impact at the school level. TFA's recent placement strategy in the Miami-Dade County Public Schools (M-DCPS), where large numbers of TFA corps members are placed as clusters into a targeted set of disadvantaged schools, provides an opportunity to evaluate the impact of the TFA program on broader school performance. This study examines whether the influx of TFA corps members led to a spillover effect on other teachers' performance. We find that many of the schools chosen to participate in the cluster strategy experienced large subsequent gains in math achievement. These gains were driven in part by the composition effect of having larger numbers of effective TFA corps members. However, we find no evidence that the clustering strategy produced a spillover effect on school-wide performance. In other words, our estimates suggest that the extra student gains for TFA corps members under the clustering strategy would be equivalent to the gains that would result from an alternative placement strategy in which corps members were evenly distributed across schools.

    Hierarchical Subquery Evaluation for Active Learning on a Graph

    To train good supervised and semi-supervised object classifiers, it is critical that we not waste the time of the human experts who provide the training labels. Existing active learning strategies can have uneven performance, being efficient on some datasets but wasteful on others, or inconsistent even between runs on the same dataset. We propose perplexity-based graph construction and a new hierarchical subquery evaluation algorithm to combat this variability and to release the potential of Expected Error Reduction. Under some specific circumstances, Expected Error Reduction has been one of the strongest-performing informativeness criteria for active learning. Until now, it has also been prohibitively costly to compute for sizeable datasets. We demonstrate our highly practical algorithm, comparing it to other active learning measures on classification datasets that vary in sparsity, dimensionality, and size. Our algorithm is consistent over multiple runs and achieves high accuracy, while querying the human expert for labels at a frequency that matches their desired time budget.
    Comment: CVPR 201
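    The Expected Error Reduction criterion mentioned in the abstract can be sketched as follows: for each unlabeled candidate, hypothetically assign it each possible label (weighted by the current model's predicted probability of that label), re-estimate the model, and measure the expected classification error over the remaining pool; the candidate that most reduces expected error is queried next. The sketch below is a minimal illustration, not the paper's graph-based method: `predict_proba` is a hypothetical stand-in classifier (soft nearest-neighbour weighting) chosen only to keep the example self-contained.

    ```python
    import numpy as np

    def predict_proba(X_lab, y_lab, X, n_classes, tau=1.0):
        """Soft nearest-neighbour class probabilities.

        Hypothetical stand-in for the paper's graph-based classifier,
        used only so the Expected Error Reduction loop below is runnable.
        """
        # Pairwise distances from each query point to each labeled point.
        d = np.linalg.norm(X[:, None, :] - X_lab[None, :, :], axis=2)
        w = np.exp(-d / tau)  # similarity weights
        p = np.zeros((len(X), n_classes))
        for c in range(n_classes):
            p[:, c] = w[:, y_lab == c].sum(axis=1)
        return p / p.sum(axis=1, keepdims=True)

    def expected_error_reduction(X_lab, y_lab, X_pool, n_classes):
        """Return the pool index whose labelling minimises expected future error."""
        p_pool = predict_proba(X_lab, y_lab, X_pool, n_classes)
        best_idx, best_risk = None, np.inf
        for i in range(len(X_pool)):
            risk = 0.0
            for c in range(n_classes):
                # Hypothetically add (x_i, label c) to the labeled set...
                X_new = np.vstack([X_lab, X_pool[i:i + 1]])
                y_new = np.append(y_lab, c)
                p_new = predict_proba(X_new, y_new, X_pool, n_classes)
                # ...and accumulate the expected 0/1 risk over the pool,
                # weighted by the current probability of that label.
                risk += p_pool[i, c] * (1.0 - p_new.max(axis=1)).sum()
            if risk < best_risk:
                best_idx, best_risk = i, risk
        return best_idx
    ```

    The quadratic cost visible here (every candidate times every label requires a full re-estimation over the pool) is exactly what makes naive Expected Error Reduction prohibitive at scale, and what the paper's hierarchical subquery evaluation is designed to tame.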