Key Factors for Effective Organisation of e-Assessment
The benefits of e-assessment are widely documented (Bull and McKenna 2004). However, instances of good practice have not been systematically reported. Recognising and acknowledging this gap in the research, the JISC Organisational Committee has funded a number of projects on e-assessment practice: 'E-Assessment Glossary', 'The Roadmap to E-Assessment', together with a set of case studies of innovative and effective practice.
This paper is based on the findings of the JISC Case Study Project 'The innovative and effective use of E-Assessment'. Members of the project team conducted over 90 interviews with teaching staff, senior management, developers and students to showcase all aspects of e-assessment. The project offered a unique opportunity to observe different organisational structures and to gain inside information on the effectiveness of a number of different applications. The 17 case studies and their follow-up surveys were analysed to identify the factors facilitating the introduction of e-assessment, and the organisational structures supporting e-assessment were also investigated. The focus of this analysis was to study the different organisational structures and to identify patterns within them.
We suggest that the key characteristics for the typology are the position of e-assessment within the organisational structure and the level of support from senior management. The study identifies three types of organisational structure that support innovative practice: the Central Team, the Faculty-based Team and the Departmental Champion.
The Central Team offers e-assessment support and, in some cases, production services to all academics on a university-wide basis, whilst the Faculty-based Team provides a more limited, discipline-related service. The Departmental Champion usually implements e-assessment within his or her specific discipline and may be an early adopter or have a special interest in this area.
Interfaith Awareness: A New Model for the Future of the Practice of Ministry
Don Mackenzie shares his story of interfaith dialogue, what he learned through that dialogue, and his vision for interfaith ministries in the future.
The importance of controlling item behaviour and scoring methodology for testing higher level learning
When testing higher level learning it is important to ensure that e-assessment is as reliable and as rigorous as possible, and that guess factors are reduced to the minimum, in order to be confident that the final score is a true reflection of the candidate’s ability. A number of assessment systems, such as those provided by VLEs, provide only basic controls on question behaviour and scoring, together with relatively simple ‘click & pick’ item types. For more advanced e-assessment it is necessary to employ a dedicated e-assessment system that allows fine control of how each item is delivered, and flexible scoring methodologies that allow for the award of partial credit for partially correct answers and stepwise accreditation of the route to the answer even though the final answer may not be correct.
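As an illustration of the scoring flexibility described above, the sketch below shows a minimal partial-credit scorer with stepwise accreditation. The item structure, step names, weights and responses are invented for this example; this is not the TRIADS scoring model or any particular VLE's implementation.

```python
# Minimal sketch of partial-credit scoring with stepwise accreditation.
# All names, weights and responses below are invented for illustration;
# this is not the TRIADS (or any VLE) scoring implementation.

def score_item(steps, final_weight, responses, final_answer):
    """steps: list of (name, weight, correct_response) for intermediate steps.
    Credit is awarded per correct step, so a wrong final answer can still
    earn marks for a correct route; the result is reported on a 0-100 scale."""
    step_total = sum(w for _, w, _ in steps)
    step_credit = sum(w for name, w, correct in steps
                      if responses.get(name) == correct)
    score = (step_credit / step_total) * (1.0 - final_weight)
    if responses.get("final") == final_answer:
        score += final_weight
    return round(100 * score, 1)

steps = [("rearrange", 0.5, "x = (y - c) / m"),
         ("substitute", 0.5, "x = (10 - 4) / 2")]
# Correct route, wrong final answer: partial credit rather than zero.
print(score_item(steps, 0.4, {"rearrange": "x = (y - c) / m",
                              "substitute": "x = (10 - 4) / 2",
                              "final": "7"}, "3"))   # -> 60.0
```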
Identifying innovative and effective practice in e-assessment: findings from seventeen UK case studies
The aim of this JISC funded project was to extend the understanding of what e-assessment meant to users and producers in the HE and FE sectors. A case study methodology was employed to identify and report upon best and current practice within this field of inquiry. This approach facilitated the identification of both the enabling factors and barriers associated with e-assessment.
The variety, innovation and general effectiveness of the e-assessment applications studied indicate the potential of e-assessment to significantly enhance the learning environment and the outcomes for students across a wide range of disciplines and applications.
QuickTrI: a visual system for the rapid creation of e-assessments and e-learning materials
QuickTrI is a visual graphical interface for the creation of e-assessments, delivered by an enhanced version of the powerful TRIADS Professional assessment engine.
Empirical prediction of the measurement scale and base level ‘guess factor’ for advanced computer-based assessments
In our experience, insufficient consideration is often given to the way in which the questions in computer-based assessments are scored. The advent of more complex question styles, such as those delivered by the TRIADSystem (Mackenzie, 1999), has made it much more difficult to predict the distribution of possible scores and the base level guess factor than it is for tests containing simple multiple-choice questions. For example, the TRIADS drag-and-drop template allows each object to be allocated a different score (positive or negative) for each position, as well as allowing dummy objects and dummy positions to be defined. The number of score possibilities for a random answer increases dramatically as the number of objects and positions increases, and although a 0 to 100 scoring scale is available, scores are likely to be concentrated about ‘nodes’ on this measurement scale. The positions of these ‘nodes’ will vary with the structure of the question, and negative or penalty scoring may serve to ‘smear’ the mark distribution between ‘nodes’. Many tutors may find it difficult to predict the guess factor and will not appreciate the effect that the structure of the question may have on the range and distribution of final scores achieved.
In order to demonstrate to test designers the effects of question structure and score allocation on the ‘guess factor’ and mark distributions, we are developing an empirical Marking Simulator. This program allows test designers and tutors to select a question type, enter the proposed structure and scores for each question, and then view the mark distribution and measurement scale that would result from a set of entirely random answers.
Use of the Marking Simulator should result in a more realistic setting of pass levels and generally enhance the quality of computer-based assessments.
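To make the empirical approach concrete, the sketch below shows one way such a simulation could work: score a large number of entirely random answers to a drag-and-drop item and inspect the resulting distribution and mean guess factor. The score matrix, dummy object and 0-100 clipping rule are invented for illustration and are not the actual Marking Simulator or TRIADS template.

```python
# Illustrative Monte Carlo sketch of the empirical approach described above:
# score many entirely random answers and inspect the resulting score 'nodes'
# and mean guess factor.  The score matrix and clipping rule are invented for
# this example, not the actual Marking Simulator or TRIADS template.

import random
from collections import Counter

# score[obj][pos]: marks (positive or negative) for dropping object obj on
# position pos; the last object and last position act as dummies.
score = [
    [25,  0, -5,  0],
    [ 0, 25, -5,  0],
    [-5, -5, 25,  0],
    [ 0,  0,  0,  0],
]

def random_attempt():
    """One entirely random answer: each object dropped on a distinct position."""
    positions = random.sample(range(len(score[0])), len(score))
    total = sum(score[obj][pos] for obj, pos in enumerate(positions))
    return max(0, min(100, total))            # clip to the 0-100 reporting scale

attempts = [random_attempt() for _ in range(100_000)]
print("mean guess factor:", sum(attempts) / len(attempts))
print("score nodes:", sorted(Counter(attempts).items()))
```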
QuickTrI and Intelligent Shell System (ISS) from Innovation 4 Learning: building on practical experience
The increasing amount of commercial work being undertaken by the Centre for Interactive Assessment Development (CIAD) and the Interactive Media Unit (IMU) at the University of Derby has prompted the separation of commercial activities from the day-to-day academic work of those departments. This has resulted in the development of a new commercial e-business division within the University, Innovation 4 Learning (i4L). Here a core team drawn from both CIAD and IMU has joined with commercial professionals, initially focusing on bespoke applications for clients in the corporate, health and educational sectors, together with the development of two core products.
Bioscope: the assessment of process and outcomes using the TRIAD system
An interactive assessment using a simulated biological microscope to test process as well as outcomes is described. The initial results of teacher and student feedback from trials in international schools are reported and evaluated.
Don MacKenzie - Massachusetts Institute of Technology
Corporate Average Fuel Economy (CAFE) standards are popularly presented as technology-forcing regulations, which will accelerate the deployment of efficiency-improving technologies in new automobiles. It is not clear, however, that such regulations actually spur more advanced technology; fuel economy gains may also come through sacrificing other vehicle attributes, such as acceleration performance, size, or features. In this paper I test whether a binding CAFE regulation increases the rate of technology deployment within a firm's fleet of automobiles. I build on recent applications of a product characteristics framework to quantify technological change in automobiles. I use fleet and firm-level regulatory compliance data to identify changes in the rate of technology improvements when a manufacturer's fleet is more tightly constrained by a CAFE standard. In a variety of panel regression specifications, I found little to no significant change in the rate of technology improvement when fleets were more tightly constrained by a CAFE standard. This was the case both for technology improvement among the menu of cars offered for sale and for the sales-weighted mix of all cars sold. The failure to find a significant effect of the standards on technology change does not preclude the possibility of such an effect, and I discuss several limitations to my results in this paper.
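For readers unfamiliar with the method, the sketch below illustrates a generic two-way fixed-effects panel regression of the kind the abstract alludes to, using statsmodels on toy data. The variable names (tech_index, cafe_shortfall) and the simulated panel are placeholders, not the paper's dataset or its exact specification.

```python
# Hedged sketch of a two-way fixed-effects panel regression in the spirit of
# the specifications described above.  Variable names and the simulated panel
# are placeholders, not the paper's data or exact model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
firms, years = ["A", "B", "C", "D"], range(2000, 2010)
df = pd.DataFrame([(f, y) for f in firms for y in years], columns=["firm", "year"])
df["cafe_shortfall"] = rng.uniform(0, 1, len(df))     # constraint tightness (toy)
df["tech_index"] = 0.02 * (df["year"] - 2000) + rng.normal(0, 0.05, len(df))

# Firm and year fixed effects absorb firm-specific levels and common shocks;
# the coefficient on cafe_shortfall asks whether a more tightly constrained
# fleet is associated with faster technology improvement.
fit = smf.ols("tech_index ~ cafe_shortfall + C(firm) + C(year)", data=df).fit()
print(fit.params["cafe_shortfall"], fit.pvalues["cafe_shortfall"])
```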
Effect of trip attributes on ridehailing driver trip request acceptance
A generalized additive mixed model was estimated to investigate the factors that impact ridehailing driver trip request acceptance choices, relying on 200 responses from a stated preference survey in Seattle, US. Several policy recommendations were proposed to promote trip request acceptance, based on ridehailing drivers' willingness to accept compensation for undesired trip features. The findings could be useful for transportation agencies to improve ridehailing service efficiency, better fulfill urban mobility needs, and reduce environmental burden. Comment: Paper in print at the Journal of Sustainable Transportation.