27 research outputs found

    Learning Technology Systems: issues, trends, challenges

    Get PDF

    The Learner’s Mirror. Designing a User Modelling Component in Adaptive Hypermedia Educational Systems

    Get PDF

    Automating Standards-Based Courseware Development Using UML

    Full text link

    On the Use of Semantic-Based AIG to Automatically Generate Programming Exercises

    Full text link
    In introductory programming courses, proficiency is typically achieved through substantial practice in the form of relatively small assignments and quizzes. Unfortunately, creating programming assignments and quizzes is both time-consuming and error-prone. We use Automatic Item Generation (AIG) to address the problem of creating numerous programming exercises for assignments or quizzes in introductory programming courses. AIG is based on test-item templates with embedded variables and formulas, which a computer program resolves with actual values to generate test-items. Thus, hundreds or even thousands of test-items can be generated from a single test-item template. We present a semantic-based AIG that uses linked open data (LOD) and automatically generates contextual programming exercises. The approach was incorporated into an existing self-assessment and practice tool for students learning computer programming. The tool has been used in different introductory programming courses to generate a set of practice exercises that differs for each student but has the same difficulty and quality.
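
    To make the template-with-embedded-variables idea concrete, here is a minimal Python sketch of AIG; the template text, variable ranges, and function names are illustrative assumptions, not taken from the paper or its tool.

```python
import random

# Hypothetical test-item template with embedded variables {start} and {end}.
TEMPLATE = (
    "A list contains the integers from {start} to {end}. "
    "What does sum(range({start}, {end} + 1)) return?"
)

def generate_item(rng: random.Random) -> dict:
    # Resolve the embedded variables with concrete values ...
    start = rng.randint(1, 10)
    end = rng.randint(start + 5, start + 20)
    # ... and evaluate the embedded formula to produce the answer key.
    answer = sum(range(start, end + 1))
    return {"stem": TEMPLATE.format(start=start, end=end), "key": answer}

if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed for a reproducible item pool
    # One template yields many distinct items of comparable difficulty.
    for item in (generate_item(rng) for _ in range(3)):
        print(item["stem"], "->", item["key"])
```

    Because every generated item instantiates the same stem and formula, the variants differ in surface values but share difficulty, which is the property the abstract attributes to the generated exercise sets.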

    Evaluating the quality of the ontology-based auto-generated questions

    Get PDF
    An ontology is a knowledge representation structure that has been used in Virtual Learning Environments (VLEs) to describe educational courses by capturing their concepts and the relationships between them. Several ontology-based question generators have used ontologies to auto-generate questions aimed at assessing students at different levels of Bloom's taxonomy. However, evaluation of these questions has been confined to measuring the qualitative satisfaction of domain experts and students. None of the question generators tested the questions on students and analysed the quality of the auto-generated questions by examining each question's difficulty and its ability to discriminate between high-ability and low-ability students. This lack of quantitative analysis means there is no evidence of the quality of the questions, or of how that quality is affected by the ontology-based generation strategies and by the question's level in Bloom's taxonomy (determined by the question's stem templates). This paper presents an experiment carried out to address these drawbacks by achieving two objectives. First, it assesses the auto-generated questions' difficulty, discrimination, and reliability using two statistical methods: Classical Test Theory (CTT) and Item Response Theory (IRT). Second, it studies the effect of the ontology-based generation strategies and of the questions' level in Bloom's taxonomy on the quality of the questions. This will provide guidance for developers and researchers working in the field of ontology-based question generators, and help build a prediction model using machine learning techniques.
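
    To make the CTT measures concrete, here is a minimal Python sketch computing item difficulty (proportion correct) and discrimination (corrected item-total point-biserial correlation) from a small 0/1 response matrix; the data and variable names are illustrative assumptions, not results from the experiment.

```python
import numpy as np

# Hypothetical response matrix: rows = students, columns = questions,
# 1 = correct, 0 = incorrect.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
])

# CTT difficulty (p-value): proportion of students answering each item correctly.
difficulty = responses.mean(axis=0)

# CTT discrimination: point-biserial correlation between each item score and
# the total score on the remaining items (corrected item-total correlation).
totals = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

print("difficulty:    ", np.round(difficulty, 2))
print("discrimination:", np.round(discrimination, 2))
```

    Items with very high or very low difficulty, or with low discrimination, are the ones such an analysis would flag as poor-quality; IRT fits a latent-ability model to the same response matrix to estimate these parameters independently of the particular student sample.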

    High-level authoring of Simple Sequencing descriptions

    No full text