The Learning Grid and E-Assessment using Latent Semantic Analysis
E-assessment is an important component of e-learning and e-qualification. Formative and summative assessment serve different purposes, and both types of evaluation are critical to the pedagogical process. While students are studying, practicing, working, or revising, formative assessment provides direction, focus, and guidance. Summative assessment provides the means to evaluate a learner's achievement and communicate that achievement to interested parties. Latent Semantic Analysis (LSA) is a statistical method for inferring meaning from a text. Applications based on LSA exist that provide both summative and formative assessment of a learner's work. However, the technique's huge computational needs remain a major problem. This paper explains how LSA works, describes the breadth of existing applications using LSA, explains how LSA is particularly suited to e-assessment, and proposes research to exploit the potential computational power of the Grid to overcome one of LSA's drawbacks.
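For readers unfamiliar with the method, the sketch below shows the standard LSA recipe: build a term-document matrix, truncate its singular value decomposition to a low-dimensional "latent" space, and compare a new text against reference documents by cosine similarity. The toy corpus, the choice of k = 2, and the scikit-learn implementation are illustrative assumptions, not the paper's setup.

```python
# Minimal LSA sketch (illustrative; not the paper's implementation).
# Pipeline: term-document counts -> truncated SVD -> cosine similarity.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "formative assessment guides student learning",
    "summative assessment measures student achievement",
    "latent semantic analysis infers meaning from text",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)      # term-document count matrix
svd = TruncatedSVD(n_components=2)        # truncated SVD gives the latent space
docs_k = svd.fit_transform(X)             # corpus documents in k dimensions

# Project a new (e.g. student) text into the same space and score it.
answer = svd.transform(vectorizer.transform(["the student showed achievement"]))
print(cosine_similarity(answer, docs_k))  # one similarity score per document
```

The SVD step is what drives the computational cost the paper proposes to offload to the Grid: for realistic corpora the term-document matrix has tens of thousands of rows and columns.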
Analysing assessment practice in higher education: how useful is the summative/formative divide as a tool?
A view of assessment as 'naturally' divided into the categories of formative and summative has become a taken-for-granted way of thinking about, talking about and organising assessment in universities, at least in the UK, where the division is inscribed in national, institutional and departmental policy and guidance (e.g. Quality Assurance Agency, http://www.qaa.ac.uk). In these documents, summative and formative assessment tend to be understood as serving separate purposes, with summative assessment understood as summing up the level of performance and formative assessment as feeding into future learning. We question the utility of the division for better understanding assessment practices, on the basis of an empirical study undertaken in a higher education institution in the UK. The aim of the Assessment Environments & Cultures project is to gain a better understanding of how academics assess and why they assess in the ways that they do. Interview and observational data have been collected from academics working in three subject areas: Design, Business and Applied Sciences. Initial analysis has focussed on the discourses in use and the subject positions taken up by academics when they talk about and undertake assessment. Analysis of our data suggests that, whilst academics used the categories of formative and summative to talk about their assessment practices, the distinction between assessment purposes may be 'messier' than the separate categories imply. Various examples from the project will be introduced to illustrate this point, raising a number of questions about researching assessment practices that will be put forward for discussion at the roundtable. For example: might it be useful to understand formative and summative assessment as occupying a shared and contested space rather than as distinct categories?
‘Summative’ and ‘Formative’: Confused by the assessment terms?
The terms ‘formative’ and ‘summative’, when linked to assessment, can cause confusion. Should these terms be dropped? Should we move on from them? This paper argues that it is the common shortening of the full and meaningful terms, ‘assessment for formative purposes’ and ‘assessment for summative purposes’, that contributes to confusion over assessments, information and methods, particularly for pre-service teachers and those with less teaching experience. By being well informed about both purpose and assessment activity, teachers will have greater clarity in understanding, communication and practice regarding these important and useful concepts.
How convincing is alternative assessment for use in higher education?
The current preference for alternative assessment has been partly stimulated by recent evidence on learning which points to the importance of students' active engagement in the learning process. While alternative assessment may well fulfil the aims of formative assessment, its value in the summative assessment required in higher education is more problematic. If alternative assessment devices are to be used for summative purposes, the validity of alternative assessment has to be considered. The paper argues that task specification and marking consistency in alternative assessment can make comparability of performance difficult to effect, thereby leaving alternative assessment a less than convincing form for use in higher education.
Struggling and juggling: a comparison of assessment loads in research and teaching-intensive universities
In spite of the rising tide of metrics in UK higher education, scant attention has been paid to assessment loads, even though evidence demonstrates that heavy demands lead to surface learning. Our study seeks to redress the situation by defining assessment loads and comparing them across research- and teaching-intensive universities. We clarify the concept of ‘assessment load’ in response to findings about high volumes of summative assessment on modular degrees. We define assessment load across whole undergraduate degrees according to four measures: the volume of summative assessment; the volume of formative assessment; the proportion of examinations to coursework; and the number of different varieties of assessment. All four factors contribute to the weight of an assessment load and influence students’ approaches to learning. Our research compares programme assessment data from 73 programmes in 14 UK universities, across two institutional categories. Research-intensives have higher summative assessment loads and a greater proportion of examinations; teaching-intensives have greater varieties of assessment. Formative assessment does not differ significantly between the two university groups. These findings pose particular challenges for students in different parts of the sector. Our study questions the wisdom that ‘more’ is always better, proposing that lighter assessment loads may make room for ‘slow’ and deep learning.
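A hypothetical sketch of how the four load measures could be computed over a programme's assessment tasks is given below; the toy records and field names are assumptions for illustration, not the study's actual instrument.

```python
# Toy computation of the four assessment-load measures (illustrative only).
records = [
    # (task kind, purpose) for every assessment task on a degree programme
    ("exam", "summative"), ("essay", "summative"), ("report", "summative"),
    ("exam", "summative"), ("quiz", "formative"), ("presentation", "formative"),
]

summative = [kind for kind, purpose in records if purpose == "summative"]
formative = [kind for kind, purpose in records if purpose == "formative"]

volume_summative = len(summative)                           # measure 1
volume_formative = len(formative)                           # measure 2
exam_proportion = summative.count("exam") / len(summative)  # measure 3
varieties = len({kind for kind, _ in records})              # measure 4

print(volume_summative, volume_formative, exam_proportion, varieties)
# -> 4 2 0.5 5
```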
Towards Security Goals in Summative E-Assessment Security
The general security goals of a computer system are known to include confidentiality, integrity and availability (C-I-A), which protect critical assets from potential threats. The C-I-A security goals are well-researched areas; however, they may be insufficient to address all the needs of summative e-assessment. In this paper, we do not discard the fundamental C-I-A security goals; rather, we define security goals that are specific to summative e-assessment security.
The use of computer-based assessments in a field biology module
Formative computer-based assessments (CBAs) for self-instruction were introduced into a Year-2 field biology module. These CBAs were provided in a ‘tutorial’ mode, where each question had context-related diagnostic feedback and tutorial pages, and in a self-test mode, where the same CBA returned only a score. The summative assessments remained unchanged and consisted of an unseen CBA and written reports of field investigations. When compared with the previous three year-cohorts, the mean score for the summative CBA increased after the introduction of formative CBAs, whereas mean scores for written reports did not change. It is suggested that the increase in the summative CBA mean score reflects the effectiveness of the formative CBAs in widening the students’ knowledge base. Evaluation of all assessments using an Assessment Experience Questionnaire indicated that they satisfied the ‘11 conditions under which assessment supports student learning’. Additionally, evidence is presented that the formative CBAs enhanced self-regulated student learning.
Towards Security Requirements in Online Summative Assessments
Confidentiality, integrity and availability (C-I-A) are the security requirements fundamental to any computer system. Similarly, hardware, software and data are important critical assets. These two components of a computer security framework are entwined, such that a compromise of the C-I-A requirements may lead to a compromise of the critical assets. The C-I-A requirements and the critical assets of a computer system are well-researched areas; however, they may be insufficient to define the needs of a summative e-assessment system. In this paper, we do not discard the existing components; rather, we propose security requirements and related components that are specific to summative e-assessment systems.
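The relationship the abstract describes, requirements protecting assets such that compromising one exposes the other, can be sketched as a simple mapping. The C-I-A triad is standard; the asset names and the extra requirements below are hypothetical placeholders, since the abstract does not enumerate the authors' proposed e-assessment-specific requirements.

```python
# Illustrative requirements-to-assets mapping (hypothetical names throughout).
from enum import Enum, auto

class Req(Enum):
    CONFIDENTIALITY = auto()   # fundamental C-I-A requirements
    INTEGRITY = auto()
    AVAILABILITY = auto()
    AUTHENTICATION = auto()    # hypothetical e-assessment additions; the paper's
    NON_REPUDIATION = auto()   # actual proposals are not given in the abstract

# Critical assets (hardware, software, data) mapped to the requirements
# whose compromise would expose them.
critical_assets = {
    "exam data":        {Req.CONFIDENTIALITY, Req.INTEGRITY, Req.AVAILABILITY},
    "marking software": {Req.INTEGRITY, Req.AVAILABILITY},
    "server hardware":  {Req.AVAILABILITY},
}

def assets_at_risk(compromised: set) -> list:
    """Return the assets exposed when a set of requirements is compromised."""
    return [asset for asset, reqs in critical_assets.items() if reqs & compromised]

print(assets_at_risk({Req.INTEGRITY}))  # -> ['exam data', 'marking software']
```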
