Item selection and application in Higher Education
Over the past ten years the use of computer-assisted assessment in Higher Education (HE) has grown, with most of this expansion based on multiple-choice items (Stephens and Mascia, 1997). However, concern has been expressed about the use of multiple-choice items to test higher-order skills.
The Tripartite Interactive Assessment Development (TRIAD) system (Mackenzie, 1999) has been developed by the Centre for Interactive Assessment Development (CIAD) at the University of Derby. It is a delivery platform that allows the production of more complex items. We argue that the use of complex item formats such as those available in TRIADs could enhance validity and produce assessments with features not present in pencil and paper tests (cf. Huff and Sireci, 2001).
CIAD was keen to evaluate tests produced in TRIADs and so sought the aid of the National Foundation for Educational Research (NFER). As part of an initial investigation, a test was compiled for a year one Systems Analysis module. This test was produced by the tutor (in consultation with CIAD) and contained a number of item types: both multiple-choice items and complex TRIADs items.
Data from the test were analysed using Classical Test Theory and Item Response Theory models, and the results led to a number of interesting observations. The multiple-choice items showed lower reliability than the more complex item types. This was surprising, since these items had mainly been obtained from published sources, with few written by the test constructor. This finding flags two important points for the unwary test developer: the quality of published items may be insufficient to allow their inclusion in high-quality tests, and the production of reliable multiple-choice items is a difficult skill to learn. In addition, it may not be appropriate to attempt to stretch multiple-choice items by using options such as ‘all’ or ‘none of the above’. The evidence from this test suggests that multiple-choice items may not be appropriate for testing outcomes at undergraduate level.
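The Classical Test Theory analysis referred to above typically rests on an internal-consistency reliability coefficient such as Cronbach's alpha. As a minimal sketch of how such a coefficient is computed (the score matrix here is invented for illustration and is not the study's data):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return n_items / (n_items - 1) * (1 - item_vars / total_var)

# Illustrative dichotomous (0/1) responses: 6 examinees x 4 items
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
])
alpha = cronbach_alpha(scores)
```

A low alpha for the multiple-choice subset, relative to the complex-item subset, is the kind of evidence the observation above describes.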
Additional file 7 of: Impact of the Macmillan specialist Care at Home service: a mixed methods evaluation across six sites
Place of death by site – SDT data. (DOCX 27 kb)
Additional file 6 of: Impact of the Macmillan specialist Care at Home service: a mixed methods evaluation across six sites
Patient reported symptom burden – IPOS data. (DOCX 78 kb)
Additional file 9 of: Impact of the Macmillan specialist Care at Home service: a mixed methods evaluation across six sites
Meeting the patient need – VOICES-SF questionnaire. (DOCX 156 kb)
Additional file 3 of: Impact of the Macmillan specialist Care at Home service: a mixed methods evaluation across six sites
Demographic and clinical details of patients referred to Macmillan Specialist Care at Home. (DOCX 13 kb)
Additional file 4 of: Impact of the Macmillan specialist Care at Home service: a mixed methods evaluation across six sites
Referral to the six Macmillan Specialist Care at Home services – SDT data. (DOCX 13 kb)
Additional file 2 of: Impact of the Macmillan specialist Care at Home service: a mixed methods evaluation across six sites
Breakdown of data collection by Innovation. (DOCX 13 kb)
Additional file 1 of: Impact of the Macmillan specialist Care at Home service: a mixed methods evaluation across six sites
Data collection methods for evaluating Macmillan Specialist Care at Home. (DOC 34 kb)