Assessing mathematical problem solving using comparative judgement
There is an increasing demand from employers and universities for school leavers to be able to apply their mathematical knowledge to problem solving in varied and unfamiliar contexts. These aspects are, however, neglected in most mathematics examinations and, consequently, in classroom teaching. One barrier to the inclusion of mathematical problem solving in assessment is that the skills involved are difficult to define and assess objectively. We present two studies that test a method called comparative judgement (CJ) that might be well suited to assessing mathematical problem solving. CJ is an alternative to traditional scoring that is based on collective expert judgements of students' work rather than item-by-item scoring schemes. In Study 1 we used CJ to assess traditional mathematics tests and found it performed validly and reliably. In Study 2 we used CJ to assess mathematical problem-solving tasks and again found it performed validly and reliably. We discuss the implications of the results for further research and the implications of CJ for the design of mathematical problem-solving tasks.
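(Contextual sketch, not from the paper: CJ studies commonly convert judges' pairwise decisions into a score scale by fitting a Bradley-Terry model, in which each piece of work gets a latent quality parameter and the probability that one piece beats another is a logistic function of their difference. The minimal Python below illustrates that idea; the function name, the gradient-ascent fit, and the toy data are illustrative assumptions, not the authors' implementation.)

```python
import numpy as np

def fit_bradley_terry(n_items, judgements, iters=200, lr=0.1):
    """Estimate Bradley-Terry quality parameters from pairwise judgements.

    judgements: list of (winner, loser) item-index pairs from judges.
    Returns zero-centred log-quality estimates, one per item.
    """
    theta = np.zeros(n_items)
    for _ in range(iters):
        grad = np.zeros(n_items)
        for w, l in judgements:
            # P(winner beats loser) under the current estimates.
            p = 1.0 / (1.0 + np.exp(theta[l] - theta[w]))
            grad[w] += 1.0 - p   # gradient of the log-likelihood
            grad[l] -= 1.0 - p
        theta += lr * grad
        theta -= theta.mean()    # centre at zero for identifiability
    return theta

# Toy usage: four scripts compared by judges; script 0 tends to win.
judgements = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (1, 3), (0, 1)]
print(fit_bradley_terry(4, judgements).round(2))
```

The resulting parameter estimates order the scripts from weakest to strongest and can then be scaled or correlated against traditional marks, which is one way a CJ outcome can be checked for validity.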
The problem of assessing problem solving: can comparative judgement help?
The definitive version of this paper is available at SpringerLink: http://dx.doi.org/10.1007/s10649-015-9607-1
School mathematics examination papers are typically dominated by short, structured items
that fail to assess sustained reasoning or problem solving. A contributory factor to this
situation is the need for student work to be marked reliably by a large number of markers of
varied experience and competence. We report a study that tested an alternative approach to
assessment, called comparative judgement, which may represent a superior method for
assessing open-ended questions that encourage a range of unpredictable responses. An
innovative problem-solving examination paper was specially designed by examiners,
evaluated by mathematics teachers, and administered to 750 secondary school students of
varied mathematical achievement. The students' work was then assessed by mathematics
education experts using comparative judgement as well as a specially designed, resource-intensive
marking procedure. We report two main findings from the research. First, the
examination paper writers, when freed from the traditional constraint of producing a mark
scheme, designed questions that were less structured and more problem-based than is typical
in current school mathematics examination papers. Second, the comparative judgement
approach to assessing the student work proved successful by our measures of inter-rater
reliability and validity. These findings open new avenues for how school mathematics, and
indeed other areas of the curriculum, might be assessed in the future.
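(Contextual sketch, not from the paper: a common way CJ studies estimate inter-rater reliability is the split-halves technique, fitting the model separately on two random halves of the judgement set and correlating the two sets of parameter estimates. The Python below is a self-contained illustration under those assumptions; the names and simulated data are hypothetical, and the paper's own reliability measures may differ.)

```python
import numpy as np

def bt_fit(n_items, pairs, iters=200, lr=0.1):
    # Compact Bradley-Terry fit by gradient ascent (as in the sketch above).
    theta = np.zeros(n_items)
    for _ in range(iters):
        grad = np.zeros(n_items)
        for w, l in pairs:
            p = 1.0 / (1.0 + np.exp(theta[l] - theta[w]))
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        theta += lr * grad
        theta -= theta.mean()
    return theta

def split_half_reliability(n_items, judgements, seed=0):
    """Correlate estimates fitted on two random halves of the judgements."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(judgements))
    half = len(idx) // 2
    first = [judgements[i] for i in idx[:half]]
    second = [judgements[i] for i in idx[half:]]
    return np.corrcoef(bt_fit(n_items, first), bt_fit(n_items, second))[0, 1]

# Toy usage: simulate judges comparing 10 scripts of increasing true quality.
rng = np.random.default_rng(1)
true_quality = np.linspace(-2, 2, 10)
judgements = []
for _ in range(400):
    i, j = rng.choice(10, size=2, replace=False)
    p_i = 1.0 / (1.0 + np.exp(true_quality[j] - true_quality[i]))
    judgements.append((i, j) if rng.random() < p_i else (j, i))
print(f"split-half reliability: {split_half_reliability(10, judgements):.2f}")
```

A value close to 1 indicates that the rank order produced by the judges is stable across independent subsets of judgements, which is the sense in which CJ can be said to perform reliably.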