
    Testing the use of grammar: Beyond grammatical accuracy

    Publication made available by Wydawnictwo Uniwersytetu Łódzkiego (Łódź University Press) under the project "Scientific Excellence as the Key to Excellence in Education". The project is financed by the European Social Fund under the Operational Programme Knowledge Education Development; agreement no. POWER.03.05.00-00-Z092/17-00

    The Road Ahead for State Assessments

    The adoption of the Common Core State Standards offers an opportunity to make significant improvements to the large-scale statewide student assessments that exist today, and the two US DOE-funded assessment consortia -- the Partnership for the Assessment of Readiness for College and Careers (PARCC) and the SMARTER Balanced Assessment Consortium (SBAC) -- are making big strides forward. But to take full advantage of this opportunity, the states must focus squarely on making assessments both fair and accurate. A new report commissioned by the Rennie Center for Education Research & Policy and Policy Analysis for California Education (PACE), The Road Ahead for State Assessments, offers a blueprint for strengthening assessment policy, pointing out how new technologies are opening up new possibilities for fairer, more accurate evaluations of what students know and are able to do. Not all of the promises can yet be delivered, but the report provides a clear set of assessment-policy recommendations.
    The Road Ahead for State Assessments includes three papers on assessment policy. The first, by Mark Reckase of Michigan State University, provides an overview of computer adaptive assessment. Computer adaptive assessment is an established technology that offers detailed information on where students are on a learning continuum rather than a summary judgment about whether or not they have reached an arbitrary standard of "proficiency" or "readiness." Computer adaptivity will support the fair and accurate assessment of English learners (ELs) and lead to a serious engagement with the multiple dimensions of "readiness" for college and careers. The second and third papers give specific attention to two areas in which we know that current assessments are inadequate: assessments in science and assessments for English learners. In science, paper-and-pencil, multiple-choice tests provide only weak and superficial information about students' knowledge and skills -- most specifically about their abilities to think scientifically and actually do science. In their paper, Chris Dede and Jody Clarke-Midura of Harvard University illustrate the potential for richer, more authentic assessments of students' scientific understanding with a case study of a virtual performance assessment now under development at Harvard. With regard to English learners, administering tests in English to students who are learning the language, or to speakers of non-standard dialects, inevitably confounds students' content knowledge with their fluency in Standard English, to the detriment of many students. In his paper, Robert Linquanti of WestEd reviews key problems in the assessment of ELs and identifies the essential features of an assessment system equipped to provide fair and accurate measures of their academic performance.
    The report's contributors offer deeply informed recommendations for assessment policy, but three are especially urgent. First, build a system that ensures continued development of and increased reliance on computer adaptive testing. Computer adaptive assessment provides the essential foundation for a system that can produce fair and accurate measurement of English learners' knowledge and of all students' knowledge and skills in science and other subjects. Developing computer adaptive assessments is a necessary intermediate step toward a system that makes assessment more authentic by tightly linking its tasks to instructional activities and ultimately embedding assessment in instruction. It is vital for both consortia to keep these goals in mind, even in light of current technological and resource constraints.
    Second, integrate the development of new assessments with assessments of English language proficiency (ELP). The next generation of ELP assessments should take into consideration an English learner's specific level of proficiency in English. They will need to be based on ELP standards that sufficiently specify the target academic language competencies that English learners need to progress in, and gain mastery of, the Common Core Standards. One of the report's authors, Robert Linquanti, states: "Acknowledging and overcoming the challenges involved in fairly and accurately assessing ELs is integral and not peripheral to the task of developing an assessment system that serves all students well. Treating the assessment of ELs as a separate problem -- or, worse yet, as one that can be left for later -- calls into question the basic legitimacy of assessment systems that drive high-stakes decisions about students, teachers, and schools."
    Third, include virtual performance assessments as part of comprehensive state assessment systems. Virtual performance assessments have considerable promise for measuring students' inquiry and problem-solving skills in science and in other subject areas, because authentic assessment can be closely tied to or even embedded in instruction. The simulation of authentic practices in settings similar to the real world opens the way to assessment of students' deeper learning and their mastery of 21st-century skills across the curriculum.
    We are just setting out on the road toward assessments that ensure fair and accurate measurement of performance for all students and support sustained improvements in teaching and learning. Developing assessments that realize these goals will take time, resources, and long-term policy commitment. PARCC and SBAC are taking the essential first steps down a long road, and new technologies have begun to illuminate what's possible. This report seeks to keep policymakers' attention focused on the road ahead, to ensure that the choices they make now move us further toward the goal of college and career success for all students. This publication was released at an event on May 16, 2011.
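As a minimal illustration of the adaptive mechanism the report discusses (not taken from the report itself), a computer adaptive test repeatedly administers the item whose difficulty is closest to the current ability estimate and updates that estimate after each response. The item bank, difficulties, and fixed-step update rule below are illustrative assumptions; an operational CAT would use a maximum-likelihood or Bayesian update instead:

```python
import math

def rasch_p(ability, difficulty):
    """Rasch model: probability of a correct response at a given ability."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def next_item(ability, bank, used):
    """Pick the unused item whose difficulty is closest to the ability
    estimate (this maximizes Fisher information under the Rasch model)."""
    return min((i for i in range(len(bank)) if i not in used),
               key=lambda i: abs(bank[i] - ability))

def run_cat(bank, answer, n_items=5, step=0.5):
    """Toy adaptive test: fixed-step ability update after each response
    (a stand-in for the maximum-likelihood update a real CAT would use)."""
    ability, used = 0.0, set()
    for _ in range(n_items):
        i = next_item(ability, bank, used)
        used.add(i)
        ability += step if answer(bank[i]) else -step
    return ability

# Hypothetical item bank (difficulties on a logit scale) and a simulated
# examinee who answers correctly whenever the item difficulty is below 1.0.
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
estimate = run_cat(bank, answer=lambda d: d < 1.0)
print(f"ability estimate: {estimate:+.1f}")
```

Because each item is chosen near the examinee's current estimate, the test homes in on a position on the learning continuum rather than issuing a single pass/fail judgment against a fixed cut score.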

    State of the art review: language testing and assessment (part two)

    In Part 1 of this two-part review article (Alderson & Banerjee, 2001), we first addressed issues of washback, ethics, politics and standards. After a discussion of trends in testing on a national level and in testing for specific purposes, we surveyed developments in computer-based testing and then finally examined self-assessment, alternative assessment and the assessment of young learners. In this second part, we begin by discussing recent theories of construct validity and the theories of language use that help define the constructs that we wish to measure through language tests. The main sections of the second part concentrate on summarising recent research into the constructs themselves, in turn addressing reading, listening, grammatical and lexical abilities, speaking and writing. Finally, we discuss a number of outstanding issues in the field.

    Improving performance in quantum mechanics with explicit incentives to correct mistakes

    An earlier investigation found that the performance of advanced students in a quantum mechanics course did not automatically improve from midterm to final exam on identical problems, even when they were provided the correct solutions and their own graded exams. Here, we describe a study, which extended over four years, in which upper-level undergraduate students in a quantum physics course were given four identical problems on both the midterm exam and the final exam. Approximately half of the students were given explicit incentives to correct their mistakes on the midterm exam. In particular, they could get back up to 50% of the points lost on each midterm exam problem. The solutions to the midterm exam problems were provided to all students in both groups, but those who corrected their mistakes were given the solutions only after they submitted their corrections to the instructor. The performance on the same problems on the final exam suggests that students who were given incentives to correct their mistakes significantly outperformed those who were not. The incentive to correct mistakes had a greater impact on the final exam performance of students who had not performed well on the midterm exam. Comment: accepted for publication in Physical Review Physics Education Research in 2016, 20 pages. PACS: 01.40Fk, 01.40.gb, 01.40G-. Keywords: physics education research, learning from mistakes, pedagogy, quantum mechanics, teaching, learning

    Active-Learning Methods to Improve Student Performance and Scientific Interest in a Large Introductory Course

    Teaching methods that are often recommended to improve the learning environment in college science courses include cooperative learning, adding inquiry-based activities to traditional lectures, and engaging students in projects or investigations. Two questions often surround these efforts: 1) can these methods be used in large classes, and 2) how do we know that they are increasing student learning? This study, from the University of Massachusetts, describes how education researchers have transformed the environment of a large-enrollment oceanography course (600 students) by modifying lectures to include cooperative learning via interactive in-class exercises and directed discussion. Assessments were redesigned as "two-stage" exams with a significant collaborative component. Results of student surveys, course evaluations, and exam performance demonstrate that learning of the subject under these conditions has improved. Student achievement shows measurable and statistically significant increases in information recall, analytical skills, and quantitative reasoning. There is evidence from both student surveys and student interview comments that for the majority of students, the course increased their interest in science -- a difficult effect to achieve with this population. Educational levels: Graduate or professional

    Investigating correlation between reading strategies and reading achievement across learning styles

    The study aimed to analyze the correlation between metacognitive reading strategies and reading achievement, the correlation between cognitive reading strategies and reading achievement, and the effect of the metacognitive and cognitive strategies learners used across their learning styles. The study used a correlational design. The population numbered 315, from which the researcher chose 113 senior high school EFL students at MA Nurul Jadid. Questionnaires and a reading comprehension test were used to collect data: two questionnaires measured the reading strategies the students used and the students' learning styles. SPSS v20 was used to analyze the questionnaire data, and descriptive statistics were applied to calculate the mean and standard deviation of 40 individual reading strategies. The results were as follows: metacognitive and cognitive strategies were used at high and medium levels when students took the tests; the metacognitive strategy correlated significantly with reading achievement, with a correlation coefficient greater than the critical value, while the cognitive strategy did not correlate significantly with reading achievement. Overall, reading strategies significantly affected students' reading achievement.
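The decision rule in this abstract (declare a correlation significant when the observed coefficient exceeds the critical value) can be sketched as follows. The scores and the critical value are illustrative assumptions, not the study's data; for the study's n = 113 (df = 111), the two-tailed 5% critical value of r is roughly 0.185:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical scores: mean metacognitive-strategy use (1-5 scale) and
# reading comprehension test score for ten students.
strategy = [3.2, 2.8, 4.1, 3.5, 2.9, 3.8, 4.0, 3.1, 2.5, 3.6]
reading = [72, 65, 88, 75, 70, 82, 85, 68, 60, 78]

r = pearson_r(strategy, reading)
# Approximate two-tailed 5% critical value of r for df = 111 (the study's
# sample size); the rule the abstract describes is simply |r| > r_critical.
R_CRITICAL = 0.185
print(f"r = {r:.3f}, significant: {abs(r) > R_CRITICAL}")
```

In practice SPSS reports the coefficient together with a p-value, but comparing |r| against a tabulated critical value for the sample's degrees of freedom is an equivalent test at the chosen alpha level.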