9 research outputs found
An evaluation of the formative functions of a large-scale on-screen assessment
The key stage 3 (KS3) information and communication technology (ICT) test is an on-screen assessment being developed by the Qualifications and Curriculum Authority (QCA) under contract to the Department for Education and Skills (DfES). It is intended that the test will run on a statutory basis from 2008, providing a summary of every child's attainment in ICT at the end of the lower secondary phase of schooling.
A reflective study on the use of CAA to test knowledge and understanding of mineralogy
The use of multiple-choice-question-based computer aided assessment to assess level-one (first-year) mineralogy produced a reliable assessment, though with rather poor scores. The use of negative marking contributed to this, and also drew negative comment from the student cohort. Reflection on these outcomes led to the use of multiple-response questions, which performed better and did not attract negative student feedback. CAA performance does not correlate well with practical coursework assessment; however, these two assessments address different learning outcomes, so this disparity is not surprising. Statistical analysis suggests that the two forms of assessment give a truer indication of a student's ability when they are combined. This reinforces the conclusion that appropriate assessment tools should be used for stated learning outcomes and that multimodal assessment is best.
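The two scoring schemes contrasted above can be sketched as follows. The penalty weight and the partial-credit rule for multiple-response items are illustrative assumptions, not the scheme actually used in the study:

```python
def score_mcq_negative(responses, key, penalty=0.25):
    """Score single-answer MCQs with negative marking.

    responses/key: lists of chosen option indices (one correct option per item).
    penalty: illustrative deduction per wrong answer (assumed, not from the study).
    """
    score = 0.0
    for given, correct in zip(responses, key):
        score += 1.0 if given == correct else -penalty
    return score


def score_multiple_response(selected, key):
    """Score one multiple-response item with simple partial credit:
    +1/k for each of the k correct options selected, -1/k for each
    incorrect option selected, floored at zero (an assumed rule)."""
    k = len(key)
    credit = sum(1 for opt in selected if opt in key) / k
    debit = sum(1 for opt in selected if opt not in key) / k
    return max(0.0, credit - debit)
```

Under these assumed rules, partial knowledge earns partial credit on a multiple-response item, whereas a near-miss on a negatively marked MCQ costs marks outright, which is consistent with the cohort's reaction reported above.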
Principles for the regulation of e-assessment: an update on developments
Sophisticated Tasks in E-Assessment: What are they? And what are their benefits?
This paper asserts the importance of e-assessment. It further suggests that assessment questions and tasks will change substantially as the art of e-assessment progresses. The paper then exemplifies sophisticated e-assessment tasks, and seeks to identify aspects of a definition of them.
Next, some key claims for sophisticated e-assessment tasks are summarised and evaluated. These claims are:
• That sophisticated e-assessment tasks can be used to assess novel constructs
• That sophisticated e-assessment tasks can be used to address summative and formative assessment purposes.
In the final part of the paper, issues arising from the findings are discussed and areas requiring further research are noted.
Item selection and application in Higher Education
Over the past ten years the use of computer assisted assessment in Higher Education (HE) has grown. The majority of this expansion has been based around the application of multiple-choice items (Stephens and Mascia, 1997). However, concern has been expressed about the use of multiple choice items to test higher order skills.
The Tripartite Interactive Assessment Development (TRIAD) system (Mackenzie, 1999) has been developed by the Centre for Interactive Assessment Development (CIAD) at the University of Derby. It is a delivery platform that allows the production of more complex items. We argue that the use of complex item formats such as those available in TRIADs could enhance validity and produce assessments with features not present in pencil and paper tests (cf. Huff and Sireci, 2001).
CIAD was keen to evaluate tests produced in TRIADs and so sought the aid of the National Foundation for Educational Research (NFER). As part of an initial investigation a test was compiled for a year one Systems Analysis module. This test was produced by the tutor (in consultation with CIAD) and contained a number of item types; both multiple-choice items and complex TRIADs items.
Data from the test were analysed using Classical Test Theory and Item Response Theory models. The analysis led to a number of interesting observations. The multiple-choice items showed lower reliability than the more complex item types. This was surprising, since these items had mainly been obtained from published sources, with few written by the test constructor. This result flags two important points for the unwary test developer: the quality of published items may be insufficient for inclusion in high-quality tests, and the production of reliable multiple-choice items is a difficult skill to learn. In addition, it may not be appropriate to attempt to stretch multiple-choice items by using options such as 'all of the above' or 'none of the above'. The evidence from this test suggests that multiple-choice items may not be appropriate for testing learning outcomes at undergraduate level.
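The Classical Test Theory reliability analysis mentioned above can be illustrated with Cronbach's alpha, a standard CTT reliability coefficient (the abstract does not say which statistic was used, and the score matrix below is invented):

```python
from statistics import pvariance


def cronbach_alpha(scores):
    """Cronbach's alpha for an n-examinee x k-item score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    Population variances are used throughout.
    """
    k = len(scores[0])
    item_vars = sum(pvariance([row[i] for row in scores]) for i in range(k))
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)


# Hypothetical dichotomous data: 4 examinees x 3 items.
# Items that rank examinees consistently push alpha towards 1;
# items that disagree with one another pull it towards (or below) 0.
consistent = [[1, 1, 1], [0, 0, 0], [1, 1, 1], [0, 0, 0]]
print(cronbach_alpha(consistent))  # 1.0 for perfectly consistent items
```

A low alpha on the multiple-choice subset, relative to the complex TRIADs items, is the kind of evidence behind the reliability observation reported above.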
The use of PGP to provide secure email delivery of CAA results
An important component of any assessment procedure is the security of results and the authentication of the examinee. Unfortunately, regular email, often used to deliver CAA (Computer Assisted Assessment) results, guarantees neither, as it suffers from a number of potential security flaws.
When an email is sent across the Internet it is transmitted as readable text. If an unauthorised user gains access to the message, whether in transit or while it is stored on an email server, they can easily read it or even alter its content. Additionally, regular email offers no form of authentication: it is possible to send an email that appears to have been sent by somebody else.
To prevent these problems a number of software packages have been developed; one such program is PGP (Pretty Good Privacy). PGP can encrypt and sign an email message before it is sent, thereby providing the following guarantees:
• Prevention of unauthorised users reading the message (privacy)
• Proof that the message has not been altered (integrity)
• Confirmation of the origin of the message (authentication)
At the University of Liverpool, a JISC (Joint Information Systems Committee) funded pilot project was set up to investigate the use of PGP to provide secure email delivery of CAA results.
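The integrity and authentication guarantees listed above can be sketched with Python's standard library. Note that this is a minimal illustration using a shared-secret HMAC; PGP itself uses public-key encryption and digital signatures, and also provides the privacy guarantee, which is omitted here. The secret and the message are hypothetical:

```python
import hashlib
import hmac

# Hypothetical secret shared between the exam centre and the results office.
SECRET = b"shared-secret-between-sender-and-recipient"


def sign_result(message: bytes) -> bytes:
    """Attach a MAC so the recipient can check integrity and origin."""
    return hmac.new(SECRET, message, hashlib.sha256).digest()


def verify_result(message: bytes, tag: bytes) -> bool:
    """Recompute the MAC; any alteration in transit makes it mismatch."""
    expected = hmac.new(SECRET, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


result = b"candidate 1234: 67%"
tag = sign_result(result)
print(verify_result(result, tag))                    # True: untampered
print(verify_result(b"candidate 1234: 97%", tag))    # False: altered in transit
```

In the public-key setting PGP uses, the sender signs with a private key and any recipient can verify with the corresponding public key, so no secret need be shared in advance.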
The eLearning place: progress report on a complete system for learning and assessment
This short paper outlines the main features of The eLearning Place and describes the development of TestMaker by The eLearning Place partnership. TestMaker is an assessment creation tool, written in Java and adhering to QTI standards. It separates item and test development, and pools items by Learning Provider in an Oracle database.
The formative use of e-assessment: some early implementations, and suggestions for how we might move on
This paper reviews research into the formative use of e-assessment. The review groups implementations into three areas, then suggests directions for further research in each, nine in total.
The discussion section examines the areas for further research to establish
commonalities between them. By this process, it proposes four key issues to
inform the future of formative e-assessment research.
The key issues are:
• Better defining those instances where formative e-assessment
provides particular benefit over and above benefits that would accrue
from the use of formative assessment in any medium.
• Being aware of – and attempting to avoid – formative e-assessment
implementations that represent a reduced or impoverished conception
of formative assessment.
• Being aware of circumstances in which the introduction of formative
e-assessment could lead to increased burdens on classroom
practitioners.
• The need to understand how students will be required to adopt novel
roles (e.g. different ways of working and communicating) when using
formative e-assessment.
Finding appropriate methods to assure quality Computer-Based Assessment development in UK Higher Education