
    Developing an Explicit Instruction Special Education Teacher Observation Rubric

    In this study, we developed an Explicit Instruction special education teacher observation rubric that details the elements of explicit instruction, and tested its psychometric properties using many-faceted Rasch measurement (MFRM). Video observations of classroom instruction from 30 special education teachers across three states were collected. External raters (n = 15) were trained to observe and evaluate instruction using the rubric, and assigned scores of "implemented," "partially implemented," or "not implemented" for each of the items. Analyses showed that the item, teacher, lesson, and rater facets achieved high psychometric quality for the instrument. Implications for research and practice are discussed.
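    Several of the abstracts in this listing rely on many-facet Rasch measurement. As a point of reference, the rating-scale form of the facets model for an item, teacher, lesson, and rater design is commonly written as follows; the notation here is the conventional one (after Linacre's facets model), not taken from the studies themselves:

    ```latex
    \ln\!\left(\frac{P_{ntljk}}{P_{ntlj(k-1)}}\right) = B_t - D_n - L_l - C_j - F_k
    ```

    where $B_t$ is the ability of teacher $t$, $D_n$ the difficulty of item $n$, $L_l$ the difficulty of lesson $l$, $C_j$ the severity of rater $j$, and $F_k$ the threshold between rating categories $k-1$ and $k$ (here, the step from "not implemented" to "partially implemented" or from "partially implemented" to "implemented").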

    Developing a Comprehensive Decoding Instruction Observation Protocol for Special Education Teachers

    This study describes the development of a special education teacher observation protocol detailing the elements of effective decoding instruction. The psychometric properties of the protocol were investigated through many-facet Rasch measurement (MFRM). Video observations of classroom decoding instruction from 20 special education teachers across three states were collected. Twelve external raters were trained to observe and evaluate instruction using the protocol and assigned scores of "implemented," "partially implemented," or "not implemented" for each of the items. Analyses showed that the item, teacher, lesson, and rater facets achieved high levels of reliability. Teacher performance was consistent with what is reported in the literature. Implications for practice are discussed.

    Developing a Comprehension Instruction Observation Rubric

    In this study, we developed a Reading for Meaning special education teacher observation rubric that details the elements of evidence-based comprehension instruction and tested its psychometric properties using many-faceted Rasch measurement (MFRM). Video observations of classroom instruction from 10 special education teachers across three states during the 2015–16 school year were collected. External raters (n = 4) were trained to observe and evaluate instruction using the rubric, and assigned scores of "implemented," "partially implemented," or "not implemented" for each of the items. Analyses showed that the item, teacher, lesson, and rater facets achieved high psychometric quality for the instrument. Teacher performance was consistent with what has been reported in the literature. Implications for research and practice are discussed.

    Validity of a Special Education Teacher Observation System

    This manuscript describes the comprehensive validation work undertaken to develop the Recognizing Effective Special Education Teachers (RESET) observation system, which was designed to provide evaluations of special education teachers' ability to effectively implement evidence-based practices and to provide specific, actionable feedback to teachers on how to improve instruction. Following the guidance for developing effective educator evaluation systems, we employed the Evidence-Centered Design framework, articulated the claims and inferences to be made with RESET, and conducted a series of studies to collect evidence to evaluate its validity. Our efforts and results to date are described, and implications for practice and further research are discussed.

    Using Evidence‐Centered Design to Create a Special Educator Observation System

    The evidence‐centered design framework was used to create a special education teacher observation system, Recognizing Effective Special Education Teachers. Extensive reviews of research informed the domain analysis and modeling stages, and led to the conceptual framework in which effective special education teaching is operationalized as the ability to effectively implement evidence‐based practices for students with disabilities. In the assessment implementation stage, four raters evaluated 40 videos and provided evidence to support the scores assigned to teacher performances. An inductive approach was used to analyze the data and to create empirically derived, item‐level performance descriptors. In the assessment delivery stage, four different raters evaluated the same videos using the fully developed rubric. Many‐facet Rasch measurement analyses showed that the item, teacher, lesson, and rater facets achieved high psychometric quality. This process can be applied to other content areas to develop teacher observation systems that provide accurate evaluations and feedback to improve instructional practice.

    Does Special Educator Effectiveness Vary Depending on the Observation Instrument Used?

    In this study, we compared the results of 27 special education teachers' evaluations using two different observation instruments: the Framework for Teaching (FFT) and the Explicit Instruction observation protocol of the Recognizing Effective Special Education Teachers (RESET) observation system. Results indicate differences in the rank-ordering of teachers depending on which instrument was used. Overall scores on RESET were higher on average than those on FFT. Item-level analyses showed that, across 125 correlations, 73 were statistically significant and low to moderate in magnitude, and 52 were nonsignificant. Implications for research and practice are discussed.
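    The rank-ordering differences reported here are naturally quantified with a rank correlation. The sketch below uses invented scores for six teachers on the two instruments (all values are hypothetical, not the study's data) and computes Spearman's rho as the Pearson correlation of the ranks; a rho well below 1 means the two instruments order the same teachers differently:

    ```python
    import numpy as np

    # Hypothetical overall scores for the same six teachers on each instrument.
    fft = np.array([3.1, 2.8, 3.5, 2.2, 3.0, 2.6])
    reset = np.array([2.4, 2.9, 2.6, 2.0, 3.2, 2.8])

    def rank(a):
        # Simple 0-based ranking; no tie handling needed for these values.
        order = a.argsort()
        r = np.empty_like(order)
        r[order] = np.arange(len(a))
        return r

    # Spearman's rho = Pearson correlation of the rank vectors.
    rho = np.corrcoef(rank(fft), rank(reset))[0, 1]
    ```

    With these illustrative numbers the rank-orderings agree only weakly, mirroring the kind of instrument-dependent ordering the abstract describes.
    
    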

    Variance and Reliability in Special Educator Observation Rubrics

    This study describes the development and initial psychometric evaluation of the Recognizing Effective Special Education Teachers (RESET) observation instrument. The study uses generalizability theory to compare two versions of a rubric, one with general descriptors of performance levels and one with item-specific descriptors of performance levels, for evaluating special education teacher implementation of explicit instruction. Eight raters (four for each version of the rubric) viewed and scored videos of explicit instruction in intervention settings. The data from each rubric were analyzed with a four-facet, crossed, mixed-model design to estimate the variance components and reliability indices. Results show fewer unwanted sources of variance and higher reliability indices for the rubric with item-specific descriptors of performance levels. Contributions to the fields of intervention and teacher evaluation are discussed.
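    The generalizability-theory comparison can be illustrated with the relative G coefficient for a crossed teacher × rater × item design: universe-score (teacher) variance divided by itself plus error variance from the teacher-by-facet interactions, each averaged over the number of raters and items sampled. The variance components below are invented to mimic the reported pattern (less rater-related variance under item-specific descriptors), not taken from the study:

    ```python
    def g_coefficient(var_teacher, var_tr, var_ti, var_residual, n_raters, n_items):
        """Relative G coefficient for a teacher x rater x item design.

        var_tr, var_ti, var_residual are the teacher-by-rater, teacher-by-item,
        and residual (three-way plus error) variance components.
        """
        error = (var_tr / n_raters
                 + var_ti / n_items
                 + var_residual / (n_raters * n_items))
        return var_teacher / (var_teacher + error)

    # Hypothetical variance components for the two rubric versions.
    g_general = g_coefficient(0.50, 0.20, 0.10, 0.40, n_raters=4, n_items=25)
    g_specific = g_coefficient(0.50, 0.05, 0.10, 0.25, n_raters=4, n_items=25)
    ```

    Under these assumed components the item-specific rubric yields the higher G coefficient, which is the qualitative result the abstract reports.
    
    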

    Evaluating an Explicit Instruction Teacher Observation Protocol Through a Validity Argument Approach

    In this study, we examined the scoring and generalizability assumptions of an explicit instruction (EI) special education teacher observation protocol using many-faceted Rasch measurement (MFRM). Video observations of classroom instruction from 48 special education teachers across four states were collected. External raters (n = 20) were trained to observe and evaluate instruction using the protocol. The results of this study suggest that the scoring rule is appropriate, in that the three-point scale allows for a meaningful way to differentiate various levels of quality of implementation of EI across teachers. Raters consistently assigned higher scores to easier items and lower scores to more difficult items. Additionally, the MFRM results for the rater facet suggest that raters can consistently apply the scoring criteria, and that there is limited rater bias impacting the scores. Implications for research and practice are discussed.

    The Relationship of Special Education Teacher Performance on Observation Instruments with Student Outcomes

    In this study, we examined the relationship of special education teachers' performance on the Recognizing Effective Special Education Teachers (RESET) Explicit Instruction observation protocol with student growth on academic measures. Special education teachers provided video-recorded observations of three instructional lessons along with data from standardized, curriculum-based academic measures at the beginning, middle, and end of the school year for the students in the instructional group. Teachers' lessons were evaluated by external, trained raters. Data were analyzed using many-faceted Rasch measurement (MFRM), correlation, and multiple regression. Teacher performance on the overall protocol did not account for statistically significant variance in student growth beyond that of students' beginning-of-the-year academic performance. Teacher performance on an abbreviated protocol composed of items that had average or higher item difficulties on the MFRM analysis accounted for an additional 4.5% of variance beyond that of beginning-of-the-year student performance. Implications for further research are discussed.
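    The incremental-variance analysis described here is a hierarchical regression: fit a baseline model with beginning-of-year performance, add the teacher observation score, and report the gain in R². The sketch below simulates data for this two-step comparison; the variable names, sample size, and effect sizes are invented for illustration and are not the study's:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 80

    boy = rng.normal(size=n)    # beginning-of-year student performance (simulated)
    obs = rng.normal(size=n)    # teacher score on the observation protocol (simulated)
    growth = 0.6 * boy + 0.2 * obs + rng.normal(scale=0.8, size=n)

    def r_squared(X, y):
        # Ordinary least squares with an intercept; R^2 = 1 - RSS/TSS.
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - resid.var() / y.var()

    r2_base = r_squared(boy.reshape(-1, 1), growth)           # step 1: BOY only
    r2_full = r_squared(np.column_stack([boy, obs]), growth)  # step 2: add observation score
    delta_r2 = r2_full - r2_base  # incremental variance explained by teacher performance
    ```

    Because the models are nested and both include an intercept, `delta_r2` is never negative in-sample; the substantive question, as in the abstract, is whether the increment is statistically and practically meaningful.
    
    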