Validity Arguments for Diagnostic Assessment Using Automated Writing Evaluation
Two examples demonstrate an argument-based approach to the validation of diagnostic assessment using automated writing evaluation (AWE). Criterion® was developed by Educational Testing Service to analyze students' papers grammatically, providing sentence-level error feedback. An interpretive argument was developed for its use as part of the diagnostic assessment process in undergraduate university English for academic purposes (EAP) classes. The Intelligent Academic Discourse Evaluator (IADE) was developed for use in graduate university EAP classes, where the goal was to help students improve their discipline-specific writing. The validation for each was designed to support claims about the intended purposes of the assessments. We present the interpretive argument for each and show some of the data that have been gathered as backing for the respective validity arguments, which include the range of inferences that one would make in claiming validity of the interpretations, uses, and consequences of diagnostic AWE-based assessments.
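The abstract does not describe how Criterion or IADE detect errors internally; as a minimal sketch only, the toy checker below illustrates what sentence-level error feedback of the kind described might look like. The two rules (repeated words, "a" before a vowel letter) and the sample text are illustrative assumptions, not the tools' actual methods.

```python
# Toy sketch of sentence-level error feedback (NOT Criterion's or IADE's
# actual method): flag two common surface errors per sentence.
import re

def check_sentence(sentence: str) -> list[str]:
    """Return a list of feedback messages for one sentence."""
    feedback = []
    # Flag immediately repeated words, e.g. "the the".
    if re.search(r"\b(\w+)\s+\1\b", sentence, flags=re.IGNORECASE):
        feedback.append("Possible repeated word.")
    # Rough heuristic: "a" before a word starting with a vowel letter.
    if re.search(r"\ba\s+[aeiou]", sentence, flags=re.IGNORECASE):
        feedback.append('Consider "an" before a vowel sound.')
    return feedback

text = "This is is a example. The results were clear."
for sent in re.split(r"(?<=[.!?])\s+", text):
    for msg in check_sentence(sent):
        print(f"{sent!r}: {msg}")
```

Real AWE systems use far richer linguistic analysis; the point of the sketch is only the shape of the output, i.e. feedback attached to individual sentences.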
Blended versus face-to-face: comparing student performance in a therapeutics class
Therapeutics is a very complex subject for every pharmacy student, since it requires the application of knowledge from several other disciplines. The study of therapeutics is often done through case-based learning in order to promote reflective thinking and present scenarios that are as realistic as possible. The objective of this study was to compare student performance between face-to-face (n = 54) and blended learning (n = 56) approaches to the teaching of therapeutics. The results confirm that there are statistically significant differences (p < 0.05) between the final exam scores of the two groups, with the blended learning group achieving higher scores. Blended learning appears to be an effective way to teach therapeutics when it follows pre-established teaching methods, and, above all, it does not negatively affect student performance. It also provides new learning environments and strategies, and promotes the development of new skills, such as learning and collaborating online, which may be relevant in a networked knowledge society.
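The abstract does not name the statistical test used; a minimal sketch of one common way to compare two groups' exam scores at the p < 0.05 level is an independent-samples t-test, shown below with scipy. The scores are made-up placeholders, not the study's data.

```python
# Minimal sketch (NOT the authors' analysis): comparing final exam
# scores between two teaching approaches with an independent-samples
# t-test. Scores are hypothetical placeholders.
from scipy import stats

face_to_face_scores = [12.1, 13.4, 11.8, 14.0, 12.7]  # hypothetical
blended_scores = [14.2, 15.1, 13.9, 15.6, 14.4]       # hypothetical

t_stat, p_value = stats.ttest_ind(face_to_face_scores, blended_scores)
if p_value < 0.05:
    print(f"Significant difference (t={t_stat:.2f}, p={p_value:.3f})")
else:
    print(f"No significant difference (p={p_value:.3f})")
```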
A Perspective on Computer Assisted Assessment Techniques for Short Free-Text Answers
Computer Assisted Assessment (CAA) has existed for several years. While some forms of CAA do not require sophisticated text understanding (e.g., multiple-choice questions), there are also student answers that consist of free text and require analysis of the text in the answer. Research on the latter to date has concentrated on two main sub-tasks: (i) grading of essays, done mainly by checking the style, grammatical correctness, and coherence of the essay, and (ii) assessment of short free-text answers. In this paper, we present a structured view of relevant research in automated assessment techniques for short free-text answers. We review papers spanning the last 15 years of research, with emphasis on recent papers. Our objectives are twofold. First, we present the survey in a structured way by segregating information on datasets, problem formulations, techniques, and evaluation measures. Second, we discuss some potential future directions in this domain, which we hope will be helpful for researchers.
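One family of techniques such surveys cover scores a short free-text answer by its similarity to a reference answer. As a minimal sketch under that assumption (not any specific system from the survey), the example below uses TF-IDF vectors and cosine similarity via scikit-learn; the answers and the idea of thresholding the score are illustrative.

```python
# Minimal sketch of similarity-based short-answer assessment
# (illustrative, not a system described in the survey).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def score_short_answer(student_answer: str, reference_answer: str) -> float:
    """Return a 0-1 lexical similarity between student and reference answers."""
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform([reference_answer, student_answer])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

reference = "Photosynthesis converts light energy into chemical energy in plants."
student = "Plants use light to make chemical energy through photosynthesis."
print(f"Similarity score: {score_short_answer(student, reference):.2f}")
```

Purely lexical similarity misses paraphrase and negation, which is one reason the surveyed literature also explores richer semantic and machine-learning approaches.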