
    Chapter 4: New Assessment Methods

    The OTiS (Online Teaching in Scotland) programme, part of the now defunct Scotcit programme, ran an International e-Workshop on Developing Online Tutoring Skills, held between 8–12 May 2000. It was organised by Heriot–Watt University, Edinburgh, and The Robert Gordon University, Aberdeen, UK. Out of this workshop came the seminal Online Tutoring E-Book, a generic primer on e-learning pedagogy and methodology, full of practical implementation guidelines. Although the Scotcit programme ended some years ago, the E-Book has been copied to the SONET site as a series of PDF files, which are now available via the ALT Open Access Repository. The editor, Carol Higgison, is currently working in e-learning at the University of Bradford (see her staff profile) and is the Chair of the Association for Learning Technology (ALT).

    Evaluating a Personal Learning Environment for Digital Storytelling

    The evaluation of flexible and personal learning environments is extremely challenging. It should not be limited to the assessment of products, but should address the quality of the educative experience through close monitoring. The evaluation of a PLE used for digital storytelling is even more complicated, owing to the unpredictability of the usage scenarios. This paper presents an evaluation methodology for PLEs used for digital storytelling, based on a participatory design approach. The results from an open validation trial indicate that this methodology is able to incorporate all necessary factors and that the selected evaluation tools are appropriate for addressing the quality of the educative experience.

    ALT-C 2010 - Conference Introduction and Abstracts


    Assessment @ Bond


    The problem of labels in e-assessment of diagrams

    In this short paper we explore a problematic aspect of the automated assessment of diagrams. Diagrams have partial and sometimes inconsistent semantics. Typically, much of the meaning of a diagram resides in its labels; however, the choice of labelling is largely unrestricted. This means a correct solution may use labels that differ from, yet are semantically equivalent to, those in the specimen solution. With human marking this problem can be easily overcome. With e-assessment, unfortunately, it is challenging. We empirically explore the scale of the synonym problem by analysing 160 student solutions to a UML task. From this we find that the cumulative growth of synonyms shows only a limited tendency to reduce at the margin. This finding has significant implications for the ease with which we may develop future e-assessment systems for diagrams, in that the need for better algorithms for assessing the semantic similarity of labels becomes inescapable.
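The core difficulty the abstract describes can be illustrated with a minimal sketch (not the authors' actual method): if an e-assessment system canonicalises labels through a synonym table before comparing them against the specimen solution, semantically equivalent labels can be matched, but any synonym absent from the table is missed. The synonym table below is a hypothetical toy example; the paper's finding that synonym growth barely tapers off implies a real table would need to be very large or be replaced by a semantic-similarity algorithm.

```python
# Toy sketch of synonym-aware label matching for diagram e-assessment.
# The SYNONYMS table is a hypothetical illustration, not data from the paper.

SYNONYMS = {
    "customer": "client",   # map each variant to a canonical form
    "client": "client",
    "purchase": "order",
    "order": "order",
}

def canonical(label: str) -> str:
    """Normalise a label, falling back to the label itself if unknown."""
    key = label.strip().lower()
    return SYNONYMS.get(key, key)

def labels_match(student_label: str, specimen_label: str) -> bool:
    """Treat two labels as equivalent if they share a canonical form."""
    return canonical(student_label) == canonical(specimen_label)

print(labels_match("Customer", "client"))   # known synonym: True
print(labels_match("Invoice", "order"))     # unlisted synonym: False
```

The second call shows the failure mode the paper identifies: "Invoice" and "order" may be equivalent in context, but a finite lookup table cannot anticipate every student's vocabulary, which is why the authors argue that better label-similarity algorithms are unavoidable.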