16 research outputs found

    Maximizing Small Group Reading Instruction

    In this article, the authors revisit the common practice of small-group reading instruction. They challenge the idea of grouping readers based on text levels and instead review research on supplemental intervention groups suggesting that targeted skill practice is a more optimal use of small-group time. They then present the ABCs—a focus on assessment, basics & books, and clarity in communication—as the central principles that should guide small-group reading instruction.

    Improving literacy in key stage 2


    Quality of tier 1 instruction in an integrated multi-tiered system of support: a mixed methods study

    To effectively and efficiently address both the academic and behavioral needs of all students, integrated Multi-Tiered Systems of Support (iMTSS) is an initiative gaining strength in elementary schools across the U.S. Tier 1 instruction within an iMTSS should be evidence-based and differentiated to provide high-quality educational opportunities to all students. One established approach to providing accessible and differentiated instruction is Universal Design for Learning (UDL), an instructional planning framework that can be embedded within a tiered prevention system. A mixed methods study was conducted to learn about the state of concurrent implementation of iMTSS and UDL within Tier 1 instruction in elementary schools. Participating schools were found to be either implementing the two initiatives concurrently in Tier 1 or not implementing UDL at all. Follow-up inquiry found additional qualitative characteristics that differentiated these two groups, and barriers to implementation were identified.

    Predictability of Curriculum-based Reading Measures for Statewide Test Performance

    National legislation has led to an increasing need for school districts to demonstrate student reading progress using performance on statewide achievement tests as indicators of growth. This study added to previous research on the effectiveness of curriculum-based measurement (CBM) in predicting success on statewide reading achievement tests and in determining whether a student is at risk for poor performance on those tests. The current study analyzed the relationship between a CBM tool for assessing reading progress, the Dynamic Indicators of Basic Early Literacy Skills (DIBELS), and a statewide reading assessment, the Pennsylvania System of School Assessment (PSSA). The study compared the predictive efficiency of three components of the DIBELS, Oral Reading Fluency (ORF), Daze, and Total, for student performance on the PSSA. The study analyzed scores of 75 participants across and within Grade 4 and Grade 5. No significant differences were found between ORF, Daze, and Total scores or between fall and spring DIBELS administrations. Results indicate that ORF, Daze, and Total are similar predictors of statewide test performance and that the DIBELS Total category is not more predictive than the individual DIBELS measures. Results also suggest that DIBELS is a valuable tool for school districts to monitor student reading progress.

    IMPROVING OUTCOMES FOR OLDER STRUGGLING READERS IN SPECIAL EDUCATION RESOURCE

    This research aimed to increase the reading achievement of sixth-grade special education students receiving interventions in a resource setting. This problem of practice was created in response to the theory of change developed by a Network Improvement Community (NIC), rooted in the lack of professional development offered by the school district for special education teachers in teaching students foundational reading skills. The research questions for this evaluation were: 1) What percent of students made progress in each reporting category (word study, grammar, and comprehension) in the Lexia PowerUp literacy program? 2) To what extent did the teachers feel they implemented the Lexia PowerUp literacy program with fidelity? 3) To what extent did the implementation of Lexia PowerUp literacy increase student performance on the STAAR test? Specifically, what was the percentage increase for approaches and meets grade level? This research project also applied improvement science principles and mixed methods using an embedded experimental design. Throughout the project, the researcher implemented interventions and evaluated outcomes as part of a plan-do-study-act (PDSA) inquiry cycle. The PDSA cycle was conducted in two phases with pre- and post-measures, in addition to collecting quantitative and qualitative data. The quantitative data were collected through progress measures in each strand of instruction in the Lexia PowerUp literacy program for each campus and by reviewing sixth-grade State of Texas Assessments of Academic Readiness (STAAR) data in the areas of approaches, meets, and percent of students who made progress on the reading assessment. Qualitative data were collected by administering a teacher survey to determine the fidelity of implementing the intervention program. Findings indicated that students made progress in all three areas of the intervention program, and there was overall incremental growth on the STAAR reading assessment. Teachers self-reported implementation with fidelity, but there were minimal responses to the survey. Additionally, the school district limited the researcher's access to available data.

    Investigating the Relationship Between Perceptions of a “Good Reader” and Reading Performance Among Elementary and Middle School Students: An Exploration Study

    The purpose of the study was to examine the relationships between student perceptions of a “good reader” and their reading performance. The study employed a causal-comparative and correlational design. Participants, elementary and middle school (grades 1-8) students (N = 100) attending an after-school program in the Southeastern United States, were administered the Student Perceptions of a Good Reader Scale (SPGRS), which includes two subscales: Perceptions-Decoding Efficiency (PerDE) and Perceptions-Comprehension (PerC). Additional measures included Measures of Academic Progress (MAP) Growth Reading (MAP Growth, 2020) to determine reading comprehension and a curriculum-based measure of oral reading fluency (ORF), which determines words read correctly per minute (WCPM). Results from this study expand the research base in several ways. The 16-item quantitative SPGRS was developed and validated to assess student perceptions of a good reader: reliability statistics yielded a Cronbach’s alpha of .83 for the overall scale, .79 for the PerDE subscale, and .77 for the PerC subscale, and principal components analysis findings support the two separate subscales (i.e., all factor loadings above .35). Results indicate significant differences in perceptions between “skilled” (above the 25th percentile) and “unskilled” (at or below the 25th percentile) readers, as determined by their MAP reading comprehension scores based on 2015 national norms (Thum & Hauser, 2015). Participants’ scores on the PerC subscale were higher than on the PerDE subscale for both groups, indicating that skilled and unskilled readers alike perceive behaviors related to reading comprehension as more important than behaviors related to efficiently decoding words in defining a good reader. Regression analyses reveal that both types of perceptions (decoding efficiency and comprehension) are significantly related to reading comprehension for upper elementary and middle school students. However, participants’ reading proficiency (as defined by both ORF and reading comprehension) did not significantly predict their perceptions of a good reader. Despite some reading experts’ concerns that an emphasis on reading fluency, particularly in elementary and middle school, may negatively impact children’s views of reading, children in this sample viewed behaviors related to reading comprehension as more indicative of a good reader.
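The reliability coefficient reported above, Cronbach's alpha, follows a standard formula. As a minimal sketch — not the study's actual analysis, and with entirely hypothetical item data — alpha can be computed from item-level scores like this:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    Each inner list holds one item's scores across all respondents.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
    """
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4-item, 5-respondent data for illustration only.
items = [
    [3, 4, 5, 2, 4],
    [3, 5, 5, 2, 3],
    [4, 4, 5, 1, 4],
    [3, 5, 4, 2, 4],
]
alpha = cronbach_alpha(items)
```

Higher alpha indicates that items vary together (internal consistency); values around .80, like those reported for the SPGRS, are conventionally considered good for research scales.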

    An Argument-Based Approach to Early Literacy Curriculum-Based Measure Validation Within Multi-Tiered Systems of Support in Reading: Does Instructional Effectiveness Matter?

    Early literacy curriculum-based measures (CBMs) are widely used as universal screeners within multi-tiered systems of support in reading (MTSS-R) for (1) evaluating the overall effectiveness of the reading system and (2) assigning students to supplemental and intensive interventions. Evidence supporting CBM validity for these purposes has primarily relied on diagnostic accuracy statistics obtained from evaluations of CBMs’ discriminative (i.e., sensitivity and specificity) and predictive (i.e., likelihood ratios, posttest probabilities) ability across various lag times and instructional contexts. The treatment paradox has been identified as a potential source of bias that may systematically alter diagnostic accuracy statistics when there is substantial lag time between administrations of the screener and the outcome measure in medical diagnostic accuracy studies, particularly for conditions that lie on a continuum, such as reading difficulties. However, the impact of the treatment paradox on early literacy screener diagnostic accuracy statistics in the context of MTSS-R is unknown. The current study examines the degree to which the treatment paradox, in the form of reading instruction, alters the diagnostic accuracy of a nonsense word fluency screener across different lag times. Concurrent and predictive validity coefficients and diagnostic accuracy statistics are examined within the context of a randomized controlled trial for meaningful differences across time points, lag times, and levels of instructional effectiveness across two different outcome measures.
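The diagnostic accuracy statistics named above (sensitivity, specificity, likelihood ratios, posttest probabilities) are all derived from a 2x2 screening table. As a sketch with hypothetical counts and base rate — not figures from the study — they relate as follows:

```python
# Hypothetical 2x2 screening table for illustration only.
true_pos, false_neg = 40, 10   # at-risk readers the screener flagged / missed
false_pos, true_neg = 20, 130  # not-at-risk readers flagged / correctly cleared

# Discriminative ability.
sensitivity = true_pos / (true_pos + false_neg)   # proportion of at-risk flagged
specificity = true_neg / (true_neg + false_pos)   # proportion of not-at-risk cleared

# Predictive ability: likelihood ratios.
lr_pos = sensitivity / (1 - specificity)          # how much a flag raises the odds
lr_neg = (1 - sensitivity) / specificity          # how much a clear lowers the odds

# Posttest probability: update an assumed base rate with the positive LR.
base_rate = 0.25                                  # assumed prevalence of difficulty
pre_odds = base_rate / (1 - base_rate)
post_odds = pre_odds * lr_pos
post_prob = post_odds / (1 + post_odds)           # probability of difficulty given a flag
```

The treatment paradox discussed in the abstract would show up here as intervention between screener and outcome shifting some true positives into apparent false positives, deflating these statistics.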

    TUTORING AND VISION: EXAMINING TWO INTERVENTIONS TO IMPROVE SECONDARY STUDENT LITERACY

    Literacy, one of the most foundational skills supporting learning progress, grows in importance as students enter adolescence, with secondary school courses and materials increasing in complexity. However, literacy skills are informed by many factors that predate a student's arrival in the middle school classroom, including disparities in access to quality instruction and socioeconomic factors such as physical environments, psychosocial health, and physical health. These multidimensional determinants of learning require interventions that target different factors in order to meaningfully improve student learning. Given the importance of program evaluation for continuous improvement of evidence-based interventions, this dissertation examines the impact of two different interventions intended to improve student reading. In the first two sets of research questions, this dissertation explores the impact of a reading tutoring program on middle school students' reading achievement, as well as the reading self-efficacy perceptions of striving readers after participation in the program. While the impact on reading achievement was inconclusive, students in the program reported strong reading self-efficacy perceptions, comparable to those of average readers. In the final set of research questions, the impact of a school-based vision program providing eyeglasses to students is considered, exploring its effect on eyeglasses use and the relationship between treatment, eyeglasses use, and reading achievement. While no significant impact on eyeglasses use was found, an exploratory analysis noted evidence of an effect of eyeglasses use on reading achievement. This dissertation points to the importance of evaluating real-world interventions, allowing for continuous improvement to better ensure effective and replicable support for student reading skills.

    Evidence-Based Interventions and Early Literacy

    Predictors of early literacy have been studied extensively in opaque orthographies. However, it is debated whether those findings transfer to transparent orthographies such as Spanish. In particular, there is controversy over whether phonological awareness (PA) might be a weaker or more time-limited predictor. Our study aimed to analyze the contribution of different phonological awareness tasks in 104 Spanish-speaking, typically developing readers (age = 6 years, 4 months) from a vulnerable socioeconomic background, after controlling for the effects of oral language (OL) and orthographic knowledge (Ort). We also investigated whether these predictors held one year after the start of literacy instruction. Using hierarchical regression models, we found that OL and PA contribute to initial word-recognition literacy but cease to be predictors after one year of schooling. In addition, rhyming tasks did not predict literacy, even at the initial stage.

    Weaving the literature on integration, pedagogy and assessment: insights for curriculum and classrooms. Report 2.

    Readers should bear the following in mind:
    ● This is the second of two reports commissioned by the National Council for Curriculum and Assessment to inform the ongoing development of the Primary Curriculum Framework. Report 1 addresses conceptualisations of curriculum integration. This second report addresses the literature on pedagogy and assessment. Annex 2 contains the relevant methodological information for this report.
    ● A timeline for the development of this report can be seen on p. 2 of Report 1.
    ● This report is one of several commissioned by NCCA in 2022. We encourage readers to consult the reports on specific curriculum areas available on the NCCA website (e.g. Nohilly et al., 2023). We do not attempt to detail pedagogical or assessment advice for specific disciplines/subjects in this report.
    ● This report draws extensively on meta-analytic reviews. The guidance below is intended to help readers interpret the effect sizes reported in such reviews.

    Understanding Effect Sizes
    To establish the efficacy of a particular practice, it is common to use experimental research designs. This usually involves one randomised group of children being taught using the practice of interest (e.g. Group 1 = a new teaching strategy) and a comparison group (Group 2 = traditional teaching strategy). The performance of each group is measured and an average score is calculated. The scores of the two groups are then statistically compared to see if there is a meaningful difference. The effect size (ES) indicates the scale of this difference (if it exists). Effect sizes can be calculated in many ways, e.g. Cohen’s d or Hedges’ g, and can also be aggregated for the purpose of meta-analytic reviews. In educational research, an effect size of less than 0.05 is considered small, 0.05 to less than 0.20 is medium, and 0.20 or greater is large (Kraft, 2020). Different benchmarks exist, but in general, the larger the effect size, the greater the impact on student learning.
    In this report, we use the original authors’ descriptors, e.g. if an author classified the effect of their intervention as ‘medium’, we report it as such.
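As a minimal illustration of the two effect size indices named above, Cohen's d and Hedges' g can be computed from two groups' scores as follows (the scores below are hypothetical, for illustration only):

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: the difference between two group means divided by
    the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    s1, s2 = statistics.stdev(group1), statistics.stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

def hedges_g(group1, group2):
    """Hedges' g: Cohen's d multiplied by a small-sample bias correction."""
    n = len(group1) + len(group2)
    correction = 1 - 3 / (4 * n - 9)
    return correction * cohens_d(group1, group2)

# Hypothetical test scores: Group 1 = new teaching strategy,
# Group 2 = traditional teaching strategy.
new_strategy = [78, 85, 90, 74, 88, 81]
traditional = [70, 76, 84, 69, 80, 75]
d = cohens_d(new_strategy, traditional)
g = hedges_g(new_strategy, traditional)
```

With small samples like these, g is slightly smaller than d; in large meta-analytic samples the correction is negligible, which is why the two indices are often reported interchangeably.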