International Journal of e-Assessment (IJEA)
The investigation of learner-assessment interaction in learning management systems
Interactions play a fundamental role in distance education. The interactions occurring between learner and content, learner and learner, and learner and instructor are the key interactions most often studied in the literature and have a significant impact on academic achievement in different contexts. In addition, self-assessment is an important component of learning environments, with a potential impact on learner achievement. This study aimed to investigate the interactions between learners and self-assessment within a learning management system. The e-learning behaviours that learners displayed in a learning management system, within the scope of a Computer Networks and Communication course, were recorded for six weeks. Exploratory factor analysis was applied to the resulting e-learning behaviour variables and suggested that the e-learning behaviours in this setting grouped into five factors. The factors were named according to the characteristics of the e-learning behaviours: learner-content, learner-learner, learner-instructor, learner-assessment and learner-feedback interactions. The impact of these interaction variables on the development of achievement was examined, and the learner-instructor and learner-assessment interactions were found to be significant
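As a rough illustration of the analysis pipeline described above (not the authors' code; the log file, its columns and the achievement variable are hypothetical), exploratory factor analysis can be run over logged behaviour variables and the resulting factor scores regressed against achievement:

```python
# Sketch only: "behaviour_log.csv", "student_id" and "achievement_gain" are invented names.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

logs = pd.read_csv("behaviour_log.csv")                     # one row per learner
X = logs.drop(columns=["student_id", "achievement_gain"])   # logged behaviour frequency variables
y = logs["achievement_gain"]                                # change in achievement over the six weeks

# Extract five latent factors, mirroring the five interaction factors named in the study.
fa = FactorAnalysis(n_components=5, random_state=0)
scores = fa.fit_transform(X)

# Examine how the interaction factors relate to the development of achievement.
model = LinearRegression().fit(scores, y)
print(dict(zip(range(1, 6), model.coef_.round(3))))         # per-factor contribution
```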
The challenges of researching digital technology use: examples from an assessment context
This paper presents a critical overview of research studies that have considered how people interact with and through new digital technology for assessment purposes. These studies involved a variety of people interacting with digital technology in different parts of the assessment system (from examination question setters writing questions, to young students answering test questions, to examiners marking extended essays), and were all underpinned by a cognitivist perspective. This perspective anticipates that a shift in mode (i.e. a move from interaction in a non-digital to a digital environment) influences the ways that individuals think and, in turn, behave. Research has a central role in ensuring that a valid conceptualisation of the intended technology user is maintained throughout the development process. Accordingly, this paper identifies three principal challenges that need to be considered when research is used to support technological development, challenges that are particularly apposite in contexts where rapid technological development is incentivised
A Methodology for Generating Items in Three or More Languages Using Automated Processes
Educational and psychological tests are administered to examinees in different languages and across different cultures throughout the world. The challenges inherent in translating and adapting multilingual and multicultural assessments are enormous. The purpose of this paper is to describe and illustrate a new methodology that can be used to generate items in multiple languages. The method is presented as a three-stage process: first, context translation, in which the context of the item model required for generation is translated or adapted appropriately for each language group; second, translation of the words and key phrases; and third, content assembly, in which computer algorithms place the words and key phrases into the context-specific item model. We then demonstrate how the method can be applied to a diverse sample of item models in math and science to generate thousands of multilingual test items. Finally, results are presented from a substantive review designed to evaluate item quality, which revealed that 91% of the generated items were judged to be acceptable by two bilingual test development specialists. Directions for future research are also presented
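A minimal sketch of the three-stage idea, using invented templates and values rather than the authors' item models: each language keeps its own translated context, and an assembly step substitutes the shared variables to produce parallel items:

```python
# Sketch only: all templates and values are illustrative placeholders.
from itertools import product

# Stage 1: the item-model context, translated/adapted per language group.
templates = {
    "en": "A train travels {distance} km in {hours} hours. What is its average speed?",
    "fr": "Un train parcourt {distance} km en {hours} heures. Quelle est sa vitesse moyenne ?",
    "es": "Un tren recorre {distance} km en {hours} horas. ¿Cuál es su velocidad media?",
}

# Stage 2: the values (or translated key phrases) that vary across generated items.
distances = [120, 180, 240]
hours = [2, 3]

# Stage 3: content assembly - place the variables into each language's item model.
def generate_items():
    for lang, template in templates.items():
        for d, h in product(distances, hours):
            yield lang, template.format(distance=d, hours=h), d / h   # stem plus answer key

for lang, stem, key in generate_items():
    print(lang, stem, "| key:", key)
```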
An exploration of the features of graded student essays using domain-independent natural language processing techniques
This paper presents observations made on a corpus of 135 graded student essays, analysed with a computer program that we are designing to provide automated formative feedback on draft essays. In order to provide individualised feedback to help students improve their essays, the program carries out automatic essay structure recognition and uses domain-independent, graph-based ranking techniques to derive extractive summaries. These procedures generate data concerning an essay's organisational structure and its discourse structure. We selected 27 attributes from these data and used them in a comparative analysis of all the essays, with a view to informing further development of the feedback program. The results of this analysis suggest that some of the essay characteristics that our domain-independent feedback program measures may be related to the grades that tutors assign to the essays
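For readers unfamiliar with graph-based ranking, the following sketch (with a placeholder essay, and not the feedback program itself) shows the TextRank-style pattern of scoring sentences by similarity-weighted PageRank and keeping the top-ranked ones as an extractive summary:

```python
# Sketch only: the essay text and the overlap measure are illustrative.
import networkx as nx

essay = ("Assessment drives learning. Feedback should be timely. "
         "Timely feedback helps students revise. Revision improves essays.")
sentences = [s.strip() for s in essay.split(".") if s.strip()]

def overlap(a, b):
    """Simple word-overlap similarity between two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (1 + min(len(wa), len(wb)))

graph = nx.Graph()
graph.add_nodes_from(range(len(sentences)))
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        sim = overlap(sentences[i], sentences[j])
        if sim > 0:
            graph.add_edge(i, j, weight=sim)

ranks = nx.pagerank(graph, weight="weight")
summary = [sentences[i] for i in sorted(ranks, key=ranks.get, reverse=True)[:2]]
print(summary)
```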
Student engagement with topic-based facilitative feedback on e-assessments
This three-year study investigates how undergraduate students engage with topic-based formative feedback on e-assessments consisting of multiple-choice and extended matching questions. After submitting the assessment, students do not receive directive feedback on individual questions; instead they are shown diagnostic facilitative feedback on the different subject topic areas covered in the test. The study looks into student engagement with this type of topic-based feedback: engagement is measured in terms of time commitment, number of questions answered, and the distribution over time of the student effort. Through quantitative analysis of three years of student data, the paper explores whether there is evidence of different engagement patterns between the stronger and weaker students, as measured by performance on the subsequent summative module examination. The paper concludes that there is evidence that the more successful students engaged with the formative assessments significantly more than the mid-ranking students, and the least successful students engaged least of all. Qualitative questionnaire data also indicate positive student attitudes towards this kind of feedback and suggest that the feedback is mostly used to evaluate the revision process
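A hedged sketch of the kind of quantitative comparison described (file and column names are assumptions, not the study's data): per-student engagement measures are aggregated from attempt logs and compared across exam-performance bands:

```python
# Sketch only: "formative_attempts.csv", "summative_exam.csv" and their columns are invented.
import pandas as pd

logs = pd.read_csv("formative_attempts.csv")     # one row per answered question

engagement = logs.groupby("student_id").agg(
    minutes_spent=("duration_min", "sum"),       # time commitment
    questions_answered=("question_id", "count"), # volume of effort
    days_active=("date", "nunique"),             # spread of effort over time
)

exam = pd.read_csv("summative_exam.csv").set_index("student_id")
engagement["band"] = pd.qcut(exam["exam_mark"], 3,
                             labels=["weaker", "mid", "stronger"])
print(engagement.groupby("band").mean(numeric_only=True))
```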
Assessment item generation, the way forward
Automatic item generation (AIG) primarily supports computerised adaptive testing, although it is now used in other contexts, particularly to extract items from learning texts or domain models. Current template-based AIG approaches initially focused on a limited number of domains in which quantitative variables could be resolved with a limited risk of error. Generalising AIG as a mainstream technology for diagnostic, summative and even formative assessment requires processing a richer set of variables. Such variables define the variable parts of test items, for instance in the stem, the options or the auxiliary information. This paper discusses current approaches to this problem, including adding multimedia variables to items, adding variables for feedback elements, and defining new scalable models for measuring the quality of the generated items. Open Educational Resources and semantic models published on the web represent potential sources for generating those variables. The topics discussed in this paper (including concrete methods, techniques and tools) can support the transformation of the AIG field towards the generation of a wider range of items and more standardised processes
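One way to picture such a richer item model (a sketch under assumptions, not an existing AIG tool's API) is a template whose variable slots cover the stem, the options, a multimedia element and per-option feedback, so a single model can be resolved into many concrete items:

```python
# Sketch only: the ItemModel class and the example content are invented for illustration.
from dataclasses import dataclass, field
from itertools import product

@dataclass
class ItemModel:
    stem: str
    options: dict            # option label -> option text template
    feedback: dict           # option label -> feedback template
    image: str = ""          # multimedia variable, e.g. a diagram path template
    variables: dict = field(default_factory=dict)   # variable name -> list of values

    def generate(self):
        names = list(self.variables)
        for combo in product(*(self.variables[n] for n in names)):
            values = dict(zip(names, combo))
            yield {
                "stem": self.stem.format(**values),
                "image": self.image.format(**values),
                "options": {k: v.format(**values) for k, v in self.options.items()},
                "feedback": {k: v.format(**values) for k, v in self.feedback.items()},
            }

model = ItemModel(
    stem="Which device connects the {net_a} and {net_b} networks in the diagram?",
    image="figures/topology_{net_a}_{net_b}.png",
    options={"A": "A router", "B": "A switch"},
    feedback={"A": "Correct: routing is needed between {net_a} and {net_b}.",
              "B": "A switch stays within a single network."},
    variables={"net_a": ["office", "lab"], "net_b": ["campus", "datacentre"]},
)
for item in model.generate():
    print(item["stem"], "|", item["image"])
```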
The use, role and reception of open badges as a method for formative and summative reward in two Massive Open Online Courses
Open online learning courses such as cMOOCs and xMOOCs differ from conventional courses, yet it remains uncertain how, and whether, existing common yet costly practices associated with teacher-driven formative and summative assessment strategies can be made to work in this new context. For courses that carry no charge for registration or participation, authors of open online courses have to consider alternative approaches to engaging, motivating and sustaining study and to helping participants manage, plan and demonstrate their own learning. One such approach is the open badge, or similar visual public symbol, that communicates to others a particular quality, achievement or affiliation possessed by the owner. This paper reports the role, reception and use of open badges in two ‘massive’ open online courses delivered in 2013, with attention to the varied functions of badges and the distinction between formative and summative applications. The paper then draws upon data from end-of-course surveys, which specifically asked about badges, pre-course surveys, and user comments made during the course on platforms such as Twitter to examine what value participants ascribed to the open badges. Although the response to badges was broadly positive in both MOOCs, the reasons for this were often very different, and approximately a quarter of respondents remained sceptical or concerned about their role. The paper concludes by reflecting on the open badge as a formative instrument for providing the learner with an indication of progress and achievement
e-Assessment: past, present and future
The goal of this paper is to provide an overview of the evolution of electronic assessment, or e-assessment, within the context of a developing e-pedagogy by investigating the changes occurring over time in the ways e-pedagogy is described. A historical review of behaviourist and constructivist learning theories first identifies elements common to pedagogies based upon these theories. Using an analogy with genetic markers, these elements (instruction, teaching, learning, assessment and testing) are combined with specific electronic resources and functions (computer-assisted/aided, computer-based, web-based, e- and online) to form what the paper identifies as e-markers, such as computer-assisted learning, web-based instruction or e-assessment. These e-markers, in turn, provide the basis for tracing the history of e-pedagogy from 1975 to 2012. A meta-narrative approach, adapted to address the paper’s goal, then utilises e-marker frequency distributions resulting from abstract searches of the literature to trace the development of e-assessment as part of an evolving e-pedagogy. In particular, the narrative suggests that the initial model for e-pedagogy was a behaviourist learning environment which, as technology provided a greater variety of tools, subsequently gave way to the present model: a constructivist learning environment. Application of Rogers’ Diffusion of Innovation Theory provides a means to assess the future of the constructivist e-learning environment as a model e-pedagogy. By investigating the relative advantage, compatibility, complexity, trialability and observability of this model, the paper concludes that a more rigorous constructivist theory of teaching and learning is necessary if constructivist e-learning environments are to gain greater institutional acceptance. The paper closes with a call for continued research related to rising-frequency e-markers such as mobile assessment and for future research directed at describing the pedagogical elements required by emerging massive open online courses
Integrating competence models with knowledge space theory for assessment
A model of the interactions in an assessment designed to support learning identifies the need for response options and for contingent feedback, both of which pose problems when computer-aided. The Knowledge Space Theory (KST) model of the domain’s problems provides some opportunity for response options, while the Competence Based Assessment (COMBA) model of the required knowledge provides some opportunity for relevant feedback. KST has been extended to incorporate skills and competencies, and the result is called the Competence Knowledge Space (ComKoS). The assignment of skills to learning objects allows the realisation of a personalised learning path by selecting appropriate learning objects given a learner’s competence state and learning goal. The paper explores ComKoS, a model which integrates both approaches, and identifies key benefits and some disadvantages. The model provides a representation of domain and learner knowledge that constitutes the basis for meaningful learning paths adapted to the learner’s knowledge state. We introduce IMS QTI and its application for managing, verifying and delivering e-assessments, and propose an implementation of ComKoS within Moodle which delivers assessments using an IMS QTI-compliant application. This implementation enhances the adaptability, interoperability, portability and reusability of e-assessments. Key benefits of the implementation are identified and suggestions for future work are provided
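A minimal sketch of the competence-based selection idea (invented skills and learning objects, not the paper's implementation): a learner's competence state is a set of mastered skills, and candidate learning objects are those whose prerequisite skills are already mastered and which teach a skill still missing from the learning goal:

```python
# Sketch only: skill and learning-object names are placeholders.
learning_objects = {
    "LO1": {"teaches": {"ip_addressing"}, "requires": set()},
    "LO2": {"teaches": {"subnetting"},    "requires": {"ip_addressing"}},
    "LO3": {"teaches": {"routing"},       "requires": {"subnetting"}},
}

def next_learning_objects(state, goal):
    """Learning objects reachable from the current competence state that
    contribute a skill still needed for the learning goal."""
    return [lo for lo, meta in learning_objects.items()
            if meta["requires"] <= state and (meta["teaches"] & (goal - state))]

state = {"ip_addressing"}                                 # current competence state
goal = {"ip_addressing", "subnetting", "routing"}         # learning goal
print(next_learning_objects(state, goal))                 # -> ['LO2']
```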