5 research outputs found

    Exploring the potential, limitations and use of objective questions in advanced calculus

    This paper describes our experiences with authoring and trialling questions in advanced calculus topics, namely ordinary differential equations, Laplace transforms and Fourier series. These topics are generally taught at the end of the first year or during the second year of a mathematics or engineering undergraduate degree. We expect that many of the lessons learned here will apply to other conceptually advanced mathematical and scientific content. Typically, what is significant for such content is that many skills are needed from previous exposure to calculus and algebra, and that paper-based questions at this level tend to be more abstract, holistic and open-ended, requiring the sort of flexibility in marking generally associated with human markers. For objective, and therefore more constrained, questions, we do not know what is feasible and whether or not questions on advanced topics will actually test the skills they are designed to test. For example, a student may carry out a Laplace transform correctly but make an elementary algebraic mistake near the end; this would be easily recognised by a human marker, but simply marked wrong by any current CAA system, which cannot assess the (generally handwritten) intermediate steps in a student’s solution. Conversely, any question that can be marked by a CAA system is likely to be structured or scaffolded (e.g. by asking for intermediate steps explicitly), so that the original requirement on the student to devise a solution strategy is lost. This paper explores what can be asked effectively: facility with such questions is a necessary (but not sufficient) condition for students to master more advanced topics, so some sort of blended assessment (with human markers) may still be needed for higher-level skills. We describe the process of authoring higher-level objective questions and report on the experience of running them with our second-year cohort, including an analysis of the answer files produced. Our evidence suggests that the assessments were useful to students in establishing a solid foundation of skills, mainly because they were encouraged, or even forced, to engage with the extensive feedback screens.
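
    To make the scaffolding point concrete, here is an illustrative worked example of our own (not one taken from the paper): an unstructured question would ask the student to solve an initial-value problem by Laplace transforms outright, whereas a CAA version would typically ask for the transformed solution Y(s) as an explicit intermediate step before the final answer.

        % Illustrative worked example (our own), solving y' + 2y = 0, y(0) = 1:
        \mathcal{L}\{y'\}(s) = sY(s) - y(0)
        \quad\Rightarrow\quad sY(s) - 1 + 2Y(s) = 0
        \quad\Rightarrow\quad Y(s) = \frac{1}{s+2}
        \quad\Rightarrow\quad y(t) = e^{-2t}.
        % A scaffolded CAA question marks Y(s) and y(t) separately, so an
        % elementary algebraic slip at the last step need not forfeit all marks,
        % but the student is no longer asked to devise the solution strategy.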

    Setting objective tests in mathematics using QM Perception

    Here we describe technical issues in setting objective tests in various areas of mathematics using Question Mark Perception’s QML language and format files, coupled with MathML mathematics mark-up and the Scalable Vector Graphics (SVG) syntax for producing diagrams. The plain-text MathML and SVG coding can replace the graphics files commonly used to display equations and diagrams in CAA packages and web pages, and has the overwhelming advantage that random parameters can be dropped into the interpreted plain text at runtime, thereby producing many millions of realisations of the underlying question style.
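
    As a minimal illustration of the runtime substitution described above, here is a sketch of our own in plain JavaScript; the placeholder syntax and function names are assumptions for illustration, not Perception’s QML.

        // Sketch only: drop runtime-chosen random parameters into a plain-text
        // MathML template; the same idea applies to SVG diagram source.
        function randInt(lo, hi) {
          return lo + Math.floor(Math.random() * (hi - lo + 1));
        }

        function realise(template, params) {
          // Replace each {{name}} token with the corresponding parameter value.
          return template.replace(/\{\{(\w+)\}\}/g, function (m, name) {
            return String(params[name]);
          });
        }

        var mathmlTemplate =
          '<math xmlns="http://www.w3.org/1998/Math/MathML">' +
          '<mi>y</mi><mo>=</mo><mn>{{a}}</mn><msup><mi>x</mi><mn>2</mn></msup>' +
          '<mo>+</mo><mn>{{b}}</mn></math>';

        // Each reload yields a different realisation of the same question style.
        var realisation = realise(mathmlTemplate, { a: randInt(2, 9), b: randInt(1, 9) });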

    Issues with setting online objective mathematics questions and testing their efficacy

    The Mathletics database now comprises many mathematical topics from GCSE to level 2 undergraduate. The aim of this short paper is to document, explore and provide some solutions to the pedagogic issues we are facing whilst setting online objective questions across this range. Technical issues are described in the companion paper by Ellis, Greenhow and Hatt (2006). That paper refers to “question styles to stress that we author according to the pedagogic and algebraic structure of the content of a question; random parameters are chosen at runtime ... This results in each style having thousands, or even millions, of realisations seen by the users.” With this emphasis, and with new topics being included, new question types beyond the usual multi-choice (MC) etc. have been developed to ask appropriate and challenging questions. We feel that their pedagogic structure (and underlying code) is widely applicable to testing beyond the scope of mathematics. This paper describes three of the new question types: Word Input, Responsive Numerical Input and True/False/Undecidable Statement/Property. Of generic importance is the fact that each of these question types can include post-processing of submitted answers; sample JavaScript coding that checks the validity of the input(s) before marking takes place is described. In common with most of the rest of a question style’s content, this could be exported to other CAA systems. Ellis et al (2005) and Gill & Greenhow (2006) describe initial results of a trial of level 1 undergraduate mechanics questions. This academic year we have expanded the range of tests to foundation and level 1 undergraduate algebra and calculus, involving several hundred students. First and foremost, we have underlined the value of Responsive Numerical Input (RNI) question types compared with traditional Numerical Input (NI) types, for which answer files resulting from questions with randomised parameters are exceptionally difficult to interpret. Despite our current lack of a consistent and fully meaningful way of encoding the mal-rules within the question outcome metadata, mal-rule-based question types (MC, RNI etc.) have been analysed in terms of difficulty, discrimination and item analysis. In the case of multiple-choice questions any weaknesses are separately identified as skill-based or conceptual.
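
    The pre-marking post-processing mentioned above can be illustrated by a hypothetical sketch of our own; the function names, tolerance handling and messages are assumptions for illustration, not the Mathletics code.

        // Hypothetical sketch of pre-marking validation for a numerical input:
        // reject malformed entries before any comparison with the correct value
        // (or with mal-rule values) is made.
        function validateNumericalInput(raw) {
          var trimmed = raw.replace(/\s+/g, '');   // strip whitespace
          if (trimmed === '') {
            return { ok: false, message: 'Please enter a number.' };
          }
          if (!/^[+-]?(\d+\.?\d*|\.\d+)([eE][+-]?\d+)?$/.test(trimmed)) {
            return { ok: false, message: 'Entry is not a valid number.' };
          }
          return { ok: true, value: parseFloat(trimmed) };
        }

        // Marking only proceeds once validation succeeds.
        function mark(raw, correct, tol) {
          var v = validateNumericalInput(raw);
          if (!v.ok) { return { score: 0, feedback: v.message }; }
          return Math.abs(v.value - correct) <= tol
            ? { score: 1, feedback: 'Correct.' }
            : { score: 0, feedback: 'Incorrect.' };
        }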

    Exportable technologies: MathML and SVG objects for CAA and web content

    The aim of this short paper is to provide an update on our experiences with using Mathematical Mark-up Language (MathML) and Scalable Vector Graphics (SVG) within “Mathletics” – a suite of mathematics and statistics objective question styles written within Perception’s QML language/JavaScript. We refer here to question style to stress that we author according to the pedagogic and algebraic structure of a question’s content; random parameters are chosen at runtime and included within all elements of the question and feedback, including the plain-text source for MathML and SVG. This results in each style having thousands, or even millions, of realisations seen by the users. Much of what we have developed exists in template files that contain functions called by any question style within the database; such functions are therefore independent of any particular web-based system (we use Perception) and can, indeed, be used in ordinary web pages. We reported on some of these functions at the last CAA Conference (Baruah, Ellis, Gill and Greenhow 2005), whilst basic concepts and terminology for MathML and SVG are introduced by Ellis (2005). It should also be noted that the user’s choice of font colours & sizes, and background colour, are all incorporated within the MathML and SVG content. This means that equations and diagrams will be accessible to those requiring larger/differently-coloured versions of the content’s default options.
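
    A rough sketch of our own (not the Mathletics template functions) of how a user’s colour and size preferences might be folded into SVG source as it is generated:

        // Sketch only: build an SVG fragment whose colours and text size come
        // from the user's stored preferences rather than being hard-coded.
        function buildAxesSvg(prefs) {
          return '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">' +
            '<rect width="200" height="200" fill="' + prefs.background + '"/>' +
            '<line x1="20" y1="180" x2="190" y2="180" stroke="' + prefs.foreground + '"/>' +
            '<line x1="20" y1="180" x2="20" y2="10" stroke="' + prefs.foreground + '"/>' +
            '<text x="100" y="196" font-size="' + prefs.fontSize + '" fill="' +
            prefs.foreground + '">x</text>' +
            '</svg>';
        }

        // Example: a high-contrast, large-print realisation of the same diagram.
        var svg = buildAxesSvg({ background: '#000080', foreground: 'yellow', fontSize: 18 });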

    Recent developments in setting and using objective tests in mathematics using QM Perception

    This short paper builds upon work described at the last CAA Conference, Greenhow & Gill (2004), in setting objective tests in various areas of mathematics using Question Mark Perception. Current activities continue to exploit the QML language and template files, coupled with MathML mathematics mark-up and the Scalable Vector Graphics (SVG) syntax for producing diagrams. There are many advantages to using such mark-up languages, primarily the use of random parameters at runtime, thereby producing dynamic equations, distracters, feedback and diagrams. An unlooked-for, but welcome, advantage is that one can also resize and recolour these elements by reading the preferences that have been set up in a user-defined cookie. This means that “reasonable provision” for disabled students, as required by the SENDA legislation, is built in. The MathML and SVG technology can be exported to any web-based system, or indeed to ordinary web pages, which can then provide an inexhaustible set of realisations at the click of the reload button. The WebEQ applet, central to the display of MathML on the web, has recently been considerably extended to include graphing of MathML expressions, naturalistic input of equations with syntax checking, and math-action tags. These math-action tags can be used to define a specific part of an equation so that mouse actions on it can be acted upon, for example to provide a commentary on that part of the equation, to toggle to another equation (perhaps a derivation of the tagged term or similar) or, possibly, to set a variable that can be used for marking (as in a hot-spot question). The first part of this paper will show how these new facilities can be incorporated into new question types for effective question and feedback design. It is clear that much useful technology already exists, but setting effective questions that benefit students’ learning requires equal attention to their content and pedagogy. The second part of this paper looks at a possible methodology for setting much more advanced questions than hitherto, looking closely at an example from the ordinary differential equations section of Mathletics. The third part of this paper looks at a series of experiments with a first-year mechanics group at Brunel University, as part of the Formative Assessment and Feedback (FAST) project. Students’ reactions were studied, especially the effect of the feedback on their subsequent behaviour when faced with similar/dissimilar questions after a variable time delay. Students spent a lot of time and energy considering the feedback provided, sometimes copying it down or printing it out. Somewhat surprisingly, it seems that a “learning resource” has actually been written, whose formative nature is of equal or greater importance than the assessment function originally intended. It can be concluded that plentiful formative feedback is of great importance to students’ ability to learn mathematics from the tests, rather than simply to get their grades or marks in an efficient manner.
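
    As an illustration of the cookie-based preference mechanism described above, here is a sketch under an assumed cookie name and format (our own, not the actual Mathletics implementation):

        // Hypothetical: preferences stored as "mathletics_prefs=fg:yellow|bg:navy|size:18".
        function readPrefsCookie() {
          var prefs = { fg: 'black', bg: 'white', size: 14 };   // defaults
          var match = document.cookie.match(/(?:^|;\s*)mathletics_prefs=([^;]*)/);
          if (!match) { return prefs; }
          decodeURIComponent(match[1]).split('|').forEach(function (pair) {
            var parts = pair.split(':');
            if (parts[0] === 'size') {
              prefs.size = parseInt(parts[1], 10) || prefs.size;
            } else if (parts[0] in prefs) {
              prefs[parts[0]] = parts[1];
            }
          });
          return prefs;
        }

        // The returned colours and size would then be dropped into the MathML
        // and SVG source at runtime, as in the earlier sketches.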