
    Authentic student inquiry: the mismatch between the intended curriculum and the student-experienced curriculum

    As a means of achieving scientific literacy goals in society, the last two decades have witnessed international science curriculum redevelopment that increasingly advocates a 'new look' inquiry-based approach to learning. This paper reports on the nature of the student-experienced curriculum where secondary school students are learning under a national curriculum intended to promote students' knowledge and capabilities in authentic scientific inquiry, that is, inquiry that properly reflects the practice of members of scientific communities. Using a multiple case study approach, this study found that layers of curriculum interpretation from several 'sites of influence' both outside and inside the schools have a strong bearing on the curriculum enacted by teachers and actually experienced by students, and that the enacted curriculum runs counter to the aims of the national curriculum policy. Over-emphasis on fair testing limits students' exposure to the full range of methods that scientists use in practice, and standards-based assessment using planning templates, exemplar assessment schedules and restricted opportunities for full investigations in different contexts tends to reduce student learning about experimental design to an exercise in 'following the rules'. These classroom realities have implications for students' understanding of the nature of authentic scientific inquiry and support claims that school science is still far removed from real science.

    The Road Ahead for State Assessments

    The adoption of the Common Core State Standards offers an opportunity to make significant improvements to the large-scale statewide student assessments that exist today, and the two US DOE-funded assessment consortia -- the Partnership for the Assessment of Readiness for College and Careers (PARCC) and the SMARTER Balanced Assessment Consortium (SBAC) -- are making big strides forward. But to take full advantage of this opportunity the states must focus squarely on making assessments both fair and accurate. A new report commissioned by the Rennie Center for Education Research & Policy and Policy Analysis for California Education (PACE), The Road Ahead for State Assessments, offers a blueprint for strengthening assessment policy, pointing out how new technologies are opening up new possibilities for fairer, more accurate evaluations of what students know and are able to do. Not all of the promises can yet be delivered, but the report provides a clear set of assessment-policy recommendations.

    The Road Ahead for State Assessments includes three papers on assessment policy. The first, by Mark Reckase of Michigan State University, provides an overview of computer adaptive assessment. Computer adaptive assessment is an established technology that offers detailed information on where students are on a learning continuum rather than a summary judgment about whether or not they have reached an arbitrary standard of "proficiency" or "readiness." Computer adaptivity will support the fair and accurate assessment of English learners (ELs) and lead to a serious engagement with the multiple dimensions of "readiness" for college and careers.

    The second and third papers give specific attention to two areas in which we know that current assessments are inadequate: assessments in science and assessments for English learners. In science, paper-and-pencil, multiple-choice tests provide only weak and superficial information about students' knowledge and skills -- most specifically about their abilities to think scientifically and actually do science. In their paper, Chris Dede and Jody Clarke-Midura of Harvard University illustrate the potential for richer, more authentic assessments of students' scientific understanding with a case study of a virtual performance assessment now under development at Harvard. With regard to English learners, administering tests in English to students who are learning the language, or to speakers of non-standard dialects, inevitably confounds students' content knowledge with their fluency in Standard English, to the detriment of many students. In his paper, Robert Linquanti of WestEd reviews key problems in the assessment of ELs, and identifies the essential features of an assessment system equipped to provide fair and accurate measures of their academic performance.

    The report's contributors offer deeply informed recommendations for assessment policy, but three are especially urgent.

    First, build a system that ensures continued development and increased reliance on computer adaptive testing. Computer adaptive assessment provides the essential foundation for a system that can produce fair and accurate measurement of English learners' knowledge and of all students' knowledge and skills in science and other subjects. Developing computer adaptive assessments is a necessary intermediate step toward a system that makes assessment more authentic by tightly linking its tasks and instructional activities and ultimately embedding assessment in instruction. It is vital for both consortia to keep these goals in mind, even in light of current technological and resource constraints.

    Second, integrate the development of new assessments with assessments of English language proficiency (ELP). The next generation of ELP assessments should take into consideration an English learner's specific level of proficiency in English. They will need to be based on ELP standards that sufficiently specify the target academic language competencies that English learners need to progress in and gain mastery of the Common Core Standards. One of the report's authors, Robert Linquanti, states: "Acknowledging and overcoming the challenges involved in fairly and accurately assessing ELs is integral and not peripheral to the task of developing an assessment system that serves all students well. Treating the assessment of ELs as a separate problem -- or, worse yet, as one that can be left for later -- calls into question the basic legitimacy of assessment systems that drive high-stakes decisions about students, teachers, and schools."

    Third, include virtual performance assessments as part of comprehensive state assessment systems. Virtual performance assessments have considerable promise for measuring students' inquiry and problem-solving skills in science and in other subject areas, because authentic assessment can be closely tied to or even embedded in instruction. The simulation of authentic practices in settings similar to the real world opens the way to assessment of students' deeper learning and their mastery of 21st century skills across the curriculum.

    We are just setting out on the road toward assessments that ensure fair and accurate measurement of performance for all students, and that support sustained improvements in teaching and learning. Developing assessments that realize these goals will take time, resources and long-term policy commitment. PARCC and SBAC are taking the essential first steps down a long road, and new technologies have begun to illuminate what's possible. This report seeks to keep policymakers' attention focused on the road ahead, to ensure that the choices they make now move us further toward the goal of college and career success for all students. This publication was released at an event on May 16, 2011.
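    The adaptive mechanism Reckase describes can be sketched as a minimal Rasch-based item-selection loop: estimate the examinee's ability from the responses so far, then administer the unused item whose difficulty best matches that estimate. The item bank, grid-search ability estimator and fixed test length below are illustrative assumptions for this sketch, not the consortia's actual designs.

```python
import math
import random

def p_correct(theta, b):
    """Rasch model: probability of a correct response given
    ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses):
    """Grid-search maximum-likelihood ability estimate from
    (difficulty, correct) response pairs."""
    grid = [t / 10.0 for t in range(-40, 41)]  # theta in [-4, 4]
    def log_lik(theta):
        ll = 0.0
        for b, correct in responses:
            p = p_correct(theta, b)
            ll += math.log(p if correct else 1.0 - p)
        return ll
    return max(grid, key=log_lik)

def run_cat(item_bank, true_theta, test_length=10):
    """Adaptive loop: always administer the unused item whose
    difficulty is closest to the current ability estimate."""
    responses, theta_hat = [], 0.0  # start from an average-ability prior
    remaining = list(item_bank)
    for _ in range(test_length):
        item = min(remaining, key=lambda b: abs(b - theta_hat))
        remaining.remove(item)
        correct = random.random() < p_correct(true_theta, item)
        responses.append((item, correct))
        theta_hat = estimate_theta(responses)
    return theta_hat

bank = [i / 4.0 for i in range(-12, 13)]  # difficulties from -3 to 3
print(run_cat(bank, true_theta=1.2))      # estimate should land near 1.2
```

    Because each item is chosen near the current ability estimate, the test concentrates its measurement precision where the examinee actually sits on the learning continuum, which is what makes adaptive scores more informative than a fixed pass/fail cut.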

    Multiple-choice tests in teaching software engineers : how applied and how applicable?

    Although computer-aided assessment has been developed since the late 1980s, the advantages and disadvantages of multiple-choice tests (MCTs), as opposed to traditional academic techniques such as essays, are still greatly debated. This paper reviews current practice in the application of MCTs in Computing and outlines their applicability to the teaching of Software Engineering as a subject area in higher education (HE).

    Review of standards in summer 2009: GCSE Science and GCSE Additional Science: joint Ofqual and DCELLS investigation


    Evaluating the successful implementation of evidence into practice using the PARiHS framework : theoretical and practical challenges

    Background: The PARiHS framework (Promoting Action on Research Implementation in Health Services) has proved to be a useful practical and conceptual heuristic for many researchers and practitioners in framing their research or knowledge translation endeavours. However, as a conceptual framework it remains untested, and its contribution to the overall development and testing of theory in the field of implementation science is therefore largely unquantified.

    Discussion: This being the case, the paper provides an integrated summary of our conceptual and theoretical thinking so far and introduces a typology (derived from social policy analysis) used to distinguish between the terms conceptual framework, theory and model -- important definitional and conceptual issues in trying to refine theoretical and methodological approaches to knowledge translation. The paper then describes the next phase of our work, concentrating on the conceptual thinking and mapping that has led to the hypothesis that the PARiHS framework is best utilised as a two-stage process: first as a preliminary (diagnostic and evaluative) measure of the elements and sub-elements of evidence (E) and context (C), and then using the aggregated data from these measures to determine the most appropriate facilitation method. The exact nature of the intervention is thus determined by the specific actors in the specific context at a specific time and place. In refining this next phase of our work, we have had to consider the wider issues around the use of theories to inform and shape our research activity; the ongoing challenges of developing robust and sensitive measures; facilitation as an intervention for getting research into practice; and finally how the current debates around evidence into practice are adopting wider notions that fit innovations more generally.

    Summary: The paper concludes by suggesting that the future direction of the work on the PARiHS framework is to develop a two-stage diagnostic and evaluative approach, where the intervention is shaped and moulded by the information gathered about the specific situation and from participating stakeholders. In order to expedite the generation of new evidence and the testing of emerging theories, we suggest the formation of an international research implementation science collaborative that can systematically collect and analyse experiences of using and testing the PARiHS framework and similar conceptual and theoretical approaches. We also recommend further refinement of the definitions of conceptual framework, theory and model, suggesting a wider discussion that embraces multiple epistemological and ontological perspectives.

    Topic and background knowledge effects on performance in speaking assessment

    This study explores the extent to which topic and background knowledge of topic affect spoken performance in a high-stakes speaking test. It is argued that evidence of a substantial influence may introduce construct-irrelevant variance and undermine test fairness. Data were collected from 81 non-native speakers of English who performed on 10 topics across three task types. Background knowledge and general language proficiency were measured using self-report questionnaires and C-tests respectively. Score data were analysed using many-facet Rasch measurement and multiple regression. Findings showed that for two of the three task types, the topics used in the study generally exhibited difficulty measures that were statistically distinct. However, the size of the differences in topic difficulties was too small to have a large practical effect on scores. Participants' different levels of background knowledge were shown to have a systematic effect on performance. However, these statistically significant differences also failed to translate into practical significance. Findings hold implications for speaking performance assessment.
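    As a minimal illustration of the regression side of such an analysis (the many-facet Rasch modelling requires specialised software), the sketch below regresses a simulated speaking score on proficiency and background knowledge, then contrasts coefficient size with statistical significance. All column names and data are hypothetical assumptions for this sketch, not the study's actual variables or models.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per examinee (names are assumptions).
rng = np.random.default_rng(0)
n = 81
df = pd.DataFrame({
    "proficiency": rng.normal(0, 1, n),           # C-test score, standardised
    "background_knowledge": rng.normal(0, 1, n),  # self-report questionnaire
})
# Simulated speaking score: driven mainly by proficiency, only weakly by
# background knowledge, mirroring the pattern the study reports.
df["score"] = (3 + 0.8 * df["proficiency"]
               + 0.1 * df["background_knowledge"]
               + rng.normal(0, 0.5, n))

X = sm.add_constant(df[["proficiency", "background_knowledge"]])
model = sm.OLS(df["score"], X).fit()
print(model.summary())  # a tiny but significant coefficient can still lack
                        # practical importance for reported scores
```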

    Class tournament as an assessment method in physics courses : a pilot study

    Testing knowledge is an integral part of summative assessment at schools, and it can be performed in many different ways. In this study we propose assessing physics knowledge using a class tournament approach. Prior to a statistical analysis of the results obtained in a tournament organized at a Polish high school, all of its specifics are discussed at length, including the types of questions assigned and the additional self- and peer-evaluation questionnaires that constitute an integral part of the tournament. The impact of the tournament on student improvement is examined by comparing post-test results against pre-tournament achievement, reflected in scores earned on earlier tests written by the students in the experimental group and by their colleagues in the control group. We also present some of the students' and teachers' feedback on the idea of a tournament as an assessment tool. Both the analysis of the tournament results and the students' and teachers' opinions point to at least several benefits of our approach.
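    The experimental/control comparison described above can be sketched in a few lines; the gain scores, group sizes and effect-size choice below are assumptions for illustration, not the study's actual data or analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical gain scores (post-test minus pre-tournament average).
experimental = rng.normal(loc=8.0, scale=4.0, size=30)  # tournament class
control = rng.normal(loc=5.0, scale=4.0, size=30)       # traditional class

# Welch's t-test: does the tournament group improve more than control?
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")

# Effect size (Cohen's d): practical, not just statistical, significance.
pooled_sd = np.sqrt((experimental.var(ddof=1) + control.var(ddof=1)) / 2)
print(f"Cohen's d = {(experimental.mean() - control.mean()) / pooled_sd:.2f}")
```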

    Learning from the best: examples of best practice from providers of apprenticeships in under performing vocational areas


    Software Measurement Activities in Small and Medium Enterprises: an Empirical Assessment

    An empirical study evaluating the implementation of measurement/metric programs in software companies in one region of Turkey is presented. The research questions were discussed and validated with the help of senior software managers (each with more than 15 years' experience) and then used to interview a variety of small and medium-scale software companies in Ankara. Observations show a common reluctance to, and lack of interest in, utilizing measurements/metrics, despite the fact that they are well known in the industry. A side product of this research is the finding that internationally recognized standards such as ISO and CMMI are pursued only when they are part of project/job requirements; without such requirements, introducing those standards remains a long-term target for increasing quality.
