
    TCAP Assessment in Correlation with and as Compared by STAR Assessment

    The purpose of the study was twofold. The first purpose was to determine whether a correlation existed between the Standardized Test for the Assessment of Reading (STAR), created and distributed by Renaissance, and the Tennessee Comprehensive Assessment Program (TCAP) Achievement Test in math and reading for grades 3, 4, and 5. The second purpose was to evaluate the relationship between the percentile category of the STAR test and the TCAP test. The factor variable, identified as the percentile category, included three levels: Urgent Intervention, Intervention, and At/Beyond Benchmark. The dependent variable was the TCAP score. The study included 3rd-grade, 4th-grade, and 5th-grade students during the 2016-2017 school year who had taken the STAR reading and STAR math assessments and the TCAP reading and TCAP math assessments. Based on the findings of this study, a strong correlational relationship does exist between the STAR and TCAP assessments, and that strong correlation was consistent across math and reading in 3rd, 4th, and 5th grades. Because the ANOVA was significant, post hoc multiple comparisons were conducted to evaluate pairwise differences among the means of the three groups. Overall, the At/Beyond Benchmark group scored significantly higher than both the Urgent Intervention group and the Intervention group in math and reading for 3rd, 4th, and 5th grade. There was no significant difference between the Urgent Intervention group and the Intervention group, with the exception of 5th-grade math.
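
    The analysis pattern described in this abstract -- a bivariate correlation between STAR and TCAP scores, a one-way ANOVA across the three percentile categories, and post hoc pairwise comparisons -- can be sketched as follows. This is a minimal illustration with synthetic scores, not the study's data; the cut points and variable names are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Synthetic scores for one grade/subject; values are illustrative only
    star = rng.normal(600, 80, size=300)
    tcap = 0.8 * star + rng.normal(0, 40, size=300)  # induce a strong correlation

    # Pearson correlation between STAR and TCAP
    r, p = stats.pearsonr(star, tcap)
    print(f"r = {r:.2f}, p = {p:.4f}")

    # One-way ANOVA across the three STAR percentile categories
    urgent = tcap[star < 520]                    # Urgent Intervention (hypothetical cut)
    interv = tcap[(star >= 520) & (star < 600)]  # Intervention
    bench = tcap[star >= 600]                    # At/Beyond Benchmark
    F, p_anova = stats.f_oneway(urgent, interv, bench)
    print(f"F = {F:.2f}, p = {p_anova:.4f}")

    # Post hoc pairwise comparisons (Tukey HSD, SciPy >= 1.8) if significant
    if p_anova < 0.05:
        print(stats.tukey_hsd(urgent, interv, bench))
    ```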

    Psychometrics in Practice at RCEC

    A broad range of topics is dealt with in this volume: from combining the psychometric generalizability and item response theories to ideas for an integrated formative use of data-driven decision making, assessment for learning, and diagnostic testing. A number of chapters pay attention to computerized (adaptive) and classification testing. Other chapters treat the quality of testing in a general sense, while for topics like maintaining standards or the testing of writing ability, the quality of testing is dealt with more specifically. All authors are connected to RCEC as researchers. Each presents one of their current research topics, providing some insight into the focus of RCEC. The topics were selected and edited so that the book should be of special interest to educational researchers, psychometricians, and practitioners in educational assessment.

    The Road Ahead for State Assessments

    The adoption of the Common Core State Standards offers an opportunity to make significant improvements to the large-scale statewide student assessments that exist today, and the two US DOE-funded assessment consortia -- the Partnership for the Assessment of Readiness for College and Careers (PARCC) and the SMARTER Balanced Assessment Consortium (SBAC) -- are making big strides forward. But to take full advantage of this opportunity, the states must focus squarely on making assessments both fair and accurate.

    A new report commissioned by the Rennie Center for Education Research & Policy and Policy Analysis for California Education (PACE), The Road Ahead for State Assessments, offers a blueprint for strengthening assessment policy, pointing out how new technologies are opening up new possibilities for fairer, more accurate evaluations of what students know and are able to do. Not all of the promises can yet be delivered, but the report provides a clear set of assessment-policy recommendations.

    The Road Ahead for State Assessments includes three papers on assessment policy. The first, by Mark Reckase of Michigan State University, provides an overview of computer adaptive assessment. Computer adaptive assessment is an established technology that offers detailed information on where students are on a learning continuum rather than a summary judgment about whether or not they have reached an arbitrary standard of "proficiency" or "readiness." Computer adaptivity will support the fair and accurate assessment of English learners (ELs) and lead to a serious engagement with the multiple dimensions of "readiness" for college and careers.

    The second and third papers give specific attention to two areas in which current assessments are known to be inadequate: assessments in science and assessments for English learners. In science, paper-and-pencil, multiple-choice tests provide only weak and superficial information about students' knowledge and skills -- most specifically about their abilities to think scientifically and actually do science. In their paper, Chris Dede and Jody Clarke-Midura of Harvard University illustrate the potential for richer, more authentic assessments of students' scientific understanding with a case study of a virtual performance assessment now under development at Harvard. With regard to English learners, administering tests in English to students who are learning the language, or to speakers of non-standard dialects, inevitably confounds students' content knowledge with their fluency in Standard English, to the detriment of many students. In his paper, Robert Linquanti of WestEd reviews key problems in the assessment of ELs and identifies the essential features of an assessment system equipped to provide fair and accurate measures of their academic performance.

    The report's contributors offer deeply informed recommendations for assessment policy, but three are especially urgent. First, build a system that ensures continued development and increased reliance on computer adaptive testing. Computer adaptive assessment provides the essential foundation for a system that can produce fair and accurate measurement of English learners' knowledge and of all students' knowledge and skills in science and other subjects. Developing computer adaptive assessments is a necessary intermediate step toward a system that makes assessment more authentic by tightly linking its tasks and instructional activities and ultimately embedding assessment in instruction. It is vital for both consortia to keep these goals in mind, even in light of current technological and resource constraints.

    Second, integrate the development of new assessments with assessments of English language proficiency (ELP). The next generation of ELP assessments should take into consideration an English learner's specific level of proficiency in English. They will need to be based on ELP standards that sufficiently specify the target academic language competencies that English learners need to progress in, and gain mastery of, the Common Core Standards. One of the report's authors, Robert Linquanti, states: "Acknowledging and overcoming the challenges involved in fairly and accurately assessing ELs is integral and not peripheral to the task of developing an assessment system that serves all students well. Treating the assessment of ELs as a separate problem -- or, worse yet, as one that can be left for later -- calls into question the basic legitimacy of assessment systems that drive high-stakes decisions about students, teachers, and schools."

    Third, include virtual performance assessments as part of comprehensive state assessment systems. Virtual performance assessments have considerable promise for measuring students' inquiry and problem-solving skills in science and in other subject areas, because authentic assessment can be closely tied to or even embedded in instruction. The simulation of authentic practices in settings similar to the real world opens the way to assessment of students' deeper learning and their mastery of 21st-century skills across the curriculum.

    We are just setting out on the road toward assessments that ensure fair and accurate measurement of performance for all students and support sustained improvements in teaching and learning. Developing assessments that realize these goals will take time, resources, and long-term policy commitment. PARCC and SBAC are taking the essential first steps down a long road, and new technologies have begun to illuminate what's possible. This report seeks to keep policymakers' attention focused on the road ahead, to ensure that the choices they make now move us further toward the goal of college and career success for all students. This publication was released at an event on May 16, 2011.
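
    As a concrete illustration of the mechanics behind computer adaptive assessment, the sketch below implements the core loop of a simple adaptive test under a Rasch (one-parameter logistic) model: estimate the examinee's ability, then administer the unused item that is most informative at that estimate. This is a generic textbook-style sketch, not the design of PARCC or SBAC; the item bank, difficulties, and grid-search ability estimator are illustrative assumptions.

    ```python
    import numpy as np

    def p_correct(theta, b):
        # Rasch model: probability of a correct response at ability theta, difficulty b
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def item_information(theta, b):
        # Fisher information of a Rasch item; peaks where theta is near b
        p = p_correct(theta, b)
        return p * (1.0 - p)

    def estimate_theta(responses, bs, grid=np.linspace(-4, 4, 161)):
        # Grid-search maximum-likelihood ability estimate from responses so far
        ll = np.zeros_like(grid)
        for r, b in zip(responses, bs):
            p = p_correct(grid, b)
            ll += r * np.log(p) + (1 - r) * np.log(1 - p)
        return grid[np.argmax(ll)]

    # Hypothetical item bank: difficulties on the same logit scale as ability
    bank = np.array([-2.0, -1.2, -0.5, 0.0, 0.4, 1.0, 1.8, 2.5])
    rng = np.random.default_rng(1)
    true_theta = 0.7                      # simulated examinee
    theta, used, resp = 0.0, [], []

    for _ in range(5):                    # administer five items adaptively
        info = [item_information(theta, b) if i not in used else -np.inf
                for i, b in enumerate(bank)]
        i = int(np.argmax(info))          # most informative unused item
        used.append(i)
        resp.append(int(rng.random() < p_correct(true_theta, bank[i])))
        theta = estimate_theta(resp, bank[used])
        print(f"item {i} (b = {bank[i]:+.1f}), response {resp[-1]}, theta = {theta:+.2f}")
    ```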

    Abstracting common errors in the learning of time intervals via cognitive diagnostic assessment / Phei-Ling Tan, Liew-Kee Kor and Chap-Sam Lim

    This study employed cognitive diagnostic assessment (CDA) to identify the common errors in the learning of time intervals based on pupils' knowledge states. CDA is a feasible testing tool that can indicate where a test taker may be prone to making errors. In this study, a cognitive diagnostic model with six attributes and 12 test items was created to evaluate pupils' performance in a diagnostic test on "duration of two inclusive dates". A total of 269 primary six pupils from 11 elementary schools participated in the study. The diagnostic test scores were analyzed using an Artificial Neural Network, which generated 12 knowledge states (KS). Results show that "100000" was the leading KS. The common errors associated with this KS, in hierarchical order of prominence, were: (i) excluding the starting date as a day in the duration; (ii) errors in regrouping; (iii) incorrectly computing the sum of the two given dates; and (iv) incorrectly expressing the time measurement in months and days. These identified common errors provide a valuable basis for remedial teaching of the topic "Time". They also allow mathematics teachers to identify the inadequacy of an earlier teaching strategy and to develop an improved approach to help struggling learners shore up their basic skills.
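
    A knowledge state such as "100000" is a binary mastery profile over the model's six attributes: each bit marks one attribute as mastered (1) or not mastered (0), so the leading state corresponds to pupils who have mastered only the first attribute. A minimal decoding sketch follows; the attribute labels are placeholders, since the abstract does not name the six attributes.

    ```python
    # Placeholder labels; the study's actual six attributes are not listed
    # in the abstract, so generic names are used here.
    ATTRIBUTES = ["A1", "A2", "A3", "A4", "A5", "A6"]

    def decode_knowledge_state(ks: str) -> dict[str, bool]:
        """Map a binary knowledge-state string to per-attribute mastery."""
        assert len(ks) == len(ATTRIBUTES), "one bit per attribute"
        return {attr: bit == "1" for attr, bit in zip(ATTRIBUTES, ks)}

    # The leading knowledge state reported in the study
    print(decode_knowledge_state("100000"))
    # {'A1': True, 'A2': False, 'A3': False, 'A4': False, 'A5': False, 'A6': False}
    ```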

    Improving Remedial Middle School Standardized Test Scores

    The purpose of this applied study was to address the problem of low standardized test scores in a remedial class at a middle school in southern Virginia and to formulate a solution to the problem. The central research question that data collection attempted to answer was: How can the problem of low standardized test scores in a remedial math class be solved in a middle school in southern Virginia? Data were collected in three ways. First, interviews of teachers and administrators of the remedial math class, called Math Lab, were conducted. These interviews were transcribed and coded, with the codes collected into themes and then displayed visually. Second, an online discussion board was conducted with current and former teachers of Math Lab, school administrators, and classroom math teachers. Third, surveys of teachers and administrators with knowledge of Math Lab and how it impacted students were completed. The quantitative surveys were analyzed using descriptive statistics. After reviewing all data sources, a solution was created that included designing a curriculum for Math Lab, requiring communication between Math Lab teachers and general classroom math teachers, and providing professional development for the Math Lab teacher on teaching remedial classes.

    The Association between the Use of Accelerated Math and Students' Math Achievement.

    The purpose of this study was to explore the relationship between time spent on a computer-managed integrated learning system entitled Accelerated Math, used alongside traditional mathematics instruction, and achievement as measured by standardized achievement tests of elementary school students. The variables of ability level, special education, grade, socioeconomic status, gender, classroom teacher, school attended, and degree of implementation were also considered. The population consisted of 542 students who were sixth, seventh, and eighth graders during the 2003-2004 school year and took the TerraNova each year. Data were gathered covering the three-year period beginning in 2001 and ending in 2004. A t test for independent samples, analysis of variance (ANOVA), and analysis of covariance (ANCOVA) were used to identify the relationships between variables. The researcher's investigation of the relationship between Accelerated Math use and mathematics achievement might assist educators in planning for the use of technology as a supplement to traditional instruction. The information gathered from this research might be beneficial to other school systems seeking information on the relationship between a computer-managed integrated learning system and math achievement. The findings in this study were mixed. The use of Accelerated Math was associated with no effects or negative effects depending on the degree of implementation. The findings indicated that there were measurable differences in the performance of students who received Accelerated Math compared to students who did not; students who did not receive Accelerated Math had higher overall scores than students participating in the intervention. The study indicated that gender, special education, and ability groups did not have a significant interaction with the intervention (participation in Accelerated Math). The research revealed a significant interaction between socioeconomic status and the intervention for proficiency scores. The study also revealed significant interactions between the intervention and school, teacher, and grade; for each of these three independent variables, the interaction was significant for both proficiency and value-added scores. In addition, the research revealed that the degree of implementation was a significant factor in students' achievement.
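
    The ANCOVA reported here -- comparing achievement between students who did and did not receive Accelerated Math while adjusting for a covariate -- follows a standard pattern. Below is a minimal sketch with synthetic data; the column names and effect sizes are hypothetical, not taken from the study.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(2)
    n = 200

    # Synthetic data: prior achievement (covariate), group, posttest score
    prior = rng.normal(650, 40, size=n)
    group = rng.choice(["accel_math", "control"], size=n)
    post = prior + np.where(group == "control", 8.0, 0.0) + rng.normal(0, 15, size=n)
    df = pd.DataFrame({"prior": prior, "group": group, "post": post})

    # ANCOVA: group effect on the posttest, adjusting for prior achievement
    model = smf.ols("post ~ C(group) + prior", data=df).fit()
    print(anova_lm(model, typ=2))   # Type II sums of squares
    print(model.params)             # covariate-adjusted group difference
    ```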

    Using electronic technology in the dynamic testing of young primary school children: Predicting school achievement

    This study aimed to combine the use of electronic technology and dynamic testing to overcome the limitations of conventional static testing and adapt more closely to children's individual needs. We investigated the effects of a newly developed computerized series completion test using a dynamic testing approach and its relation to school achievement. The study utilized a pre-test-training-post-test control-group design in which 164 children from grade 2 participated. To evaluate the additional effects of dynamic testing beyond the effects of (repeated) static testing of inductive reasoning on a tablet, half of the children were trained using a graduated prompts method, while the other half only practiced solving the series completion task items. The results showed that training with graduated prompts is effective in increasing the likelihood that children can solve series completion problems accurately. Furthermore, the number of prompts children needed during training significantly predicted their performance on mathematics and technical reading tests. Teachers' judgments regarding their pupils' overall school performance and potential for learning, however, did not correlate significantly with the dynamic post-test score of the series completion test, which seems to indicate that dynamic testing provides teachers with new information about the learning progress of individual pupils.
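
    The predictive relationship reported here -- the number of graduated prompts a child needed during training predicting later mathematics and reading performance -- amounts to a simple regression. A minimal sketch with simulated values and hypothetical variable names:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n = 164

    # Simulated values: children who need fewer prompts tend to score higher
    prompts = rng.integers(0, 12, size=n)             # prompts needed during training
    math = 80 - 2.0 * prompts + rng.normal(0, 6, n)   # hypothetical math test score

    # Simple linear regression: do prompt counts predict achievement?
    res = stats.linregress(prompts, math)
    print(f"slope = {res.slope:.2f}, r = {res.rvalue:.2f}, p = {res.pvalue:.4f}")
    ```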

    Self-regulated Learning (SRL) Microanalysis for Mathematical Problem Solving: a Comparison of a SRL Event Measure, Questionnaires, and a Teacher Rating Scale

    The current dissertation examined the validity of a context-specific assessment tool, called self-regulated learning (SRL) microanalysis, for measuring SRL during mathematical problem solving. SRL microanalysis is a structured interview that entails assessing respondents' regulatory processes as they engage with a task of interest. Participants consisted of 83 eighth-grade students attending a large urban school district in the Midwestern USA. Students were administered the SRL microanalytic interview while completing a set of mathematical word problems to provide a measure of their real-time thoughts and regulatory behaviors. The SRL microanalytic interview targeted the SRL processes of goal setting, strategic planning, strategy use, metacognitive monitoring, attributions, and adaptive inferences. In addition, students completed two questionnaires measuring SRL strategy use and one questionnaire measuring self-esteem. Each participant's mathematics teacher completed a teacher rating scale of SRL. Mathematical skill was assessed with three measures: a three-item measure of mathematical problem-solving skill completed during the SRL microanalytic interview, a fifteen-item posttest of mathematical problem-solving skill completed two weeks after the interview, and a standardized test of mathematics skill. The primary objectives of this dissertation were to compare the newly developed SRL microanalytic interview to more traditional measures of SRL, including two self-report questionnaires measuring adaptive and maladaptive SRL and a teacher rating scale of SRL; to examine whether SRL microanalysis would diverge from a theoretically unrelated construct such as self-esteem; and, above all, to examine the relative predictive validity of SRL microanalysis and SRL questionnaires. Predictive validity was compared across three related but distinct mathematics outcomes: a short set of mathematical problem-solving items, a more comprehensive posttest of mathematical problem-solving skill, and performance on a standardized mathematics test. The results revealed that SRL microanalysis did not relate to the self-report questionnaires measuring adaptive or maladaptive SRL or to teacher ratings of SRL. The SRL microanalytic interview diverged from the theoretically unrelated measure of self-esteem. Finally, after controlling for prior achievement and the SRL questionnaires, the SRL microanalytic interview explained a significant amount of unique variance in all three mathematics outcomes. Furthermore, the SRL microanalytic protocol emerged as a superior predictor of all three mathematics outcomes compared to the SRL questionnaires.
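
    The "unique variance" finding corresponds to a hierarchical regression: fit a baseline model with prior achievement and the questionnaire scores, add the microanalytic score, and test the change in R-squared. A minimal sketch with synthetic data and hypothetical variable names:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 83

    # Synthetic predictors (hypothetical names): prior achievement, an SRL
    # questionnaire score, and the SRL microanalytic score; the outcome is
    # a mathematics posttest score.
    prior = rng.normal(0, 1, n)
    quest = rng.normal(0, 1, n)
    micro = rng.normal(0, 1, n)
    posttest = 0.5 * prior + 0.1 * quest + 0.4 * micro + rng.normal(0, 1, n)

    # Step 1: baseline model (prior achievement + questionnaire)
    m1 = sm.OLS(posttest, sm.add_constant(np.column_stack([prior, quest]))).fit()

    # Step 2: add the microanalytic score
    m2 = sm.OLS(posttest, sm.add_constant(np.column_stack([prior, quest, micro]))).fit()

    # Change in R-squared and an F test for the added predictor
    print(f"delta R^2 = {m2.rsquared - m1.rsquared:.3f}")
    f_stat, p_value, df_diff = m2.compare_f_test(m1)
    print(f"F change = {f_stat:.2f}, p = {p_value:.4f}")
    ```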

    A Comparative Study of the Effects of Computer-Assisted Instruction on the Reading Achievement of First Graders

    With reading proficiency by the end of third grade as a common goal, many school districts are exploring options to enhance early reading instruction. The purpose of this study was to investigate whether the supplemental, computer-assisted reading program i-Ready would significantly affect first-grade students' reading achievement. Participants (n = 159) were first graders at two elementary schools: treatment (n = 82) and comparison (n = 77). An independent samples t-test was used to compare the mid-year reading achievement scores of the treatment and comparison groups and found no statistically significant difference between groups. Following 10 weeks of twice-weekly 45-minute sessions of i-Ready reading instruction for the treatment group, an independent samples t-test showed that no statistically significant differences in reading achievement existed between the treatment and comparison groups. Several possibilities for this finding are discussed.
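
    The study's core comparison is an independent samples t-test between the treatment and comparison groups at each time point. A minimal sketch with simulated scores (group sizes taken from the abstract; the score scale is hypothetical):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Simulated reading scores; the null result in the abstract corresponds
    # to both groups being drawn from the same distribution.
    treatment = rng.normal(200, 25, size=82)    # i-Ready group (n = 82)
    comparison = rng.normal(200, 25, size=77)   # comparison group (n = 77)

    t, p = stats.ttest_ind(treatment, comparison)
    print(f"t = {t:.2f}, p = {p:.3f}")  # p > .05 -> no significant difference
    ```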