23 research outputs found

    Ontology-Based Multiple Choice Question Generation

    Get PDF
    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings in current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to succeed in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework.
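
    As a hedged illustration of the general technique (not OntoQue's actual algorithm), the sketch below generates one class-membership MCQ from an OWL ontology; rdflib, the file name domain.owl, and the single question template are all assumptions made for this example.

    import random
    from rdflib import Graph
    from rdflib.namespace import OWL, RDF

    # Load an arbitrary domain ontology (file name is illustrative).
    g = Graph().parse("domain.owl")

    # Named classes, and the individuals typed by them.
    classes = list(g.subjects(RDF.type, OWL.Class))
    typed = [(s, o) for s, _, o in g.triples((None, RDF.type, None)) if o in classes]

    # One individual becomes the stem; its class is the key, other classes distract.
    individual, key = random.choice(typed)
    distractors = random.sample([c for c in classes if c != key], 3)  # assumes >= 4 classes

    print(f"To which class does {g.qname(individual)} belong?")
    for i, option in enumerate(random.sample([key] + distractors, 4), 1):
        print(f"  {i}. {g.qname(option)}")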

    The potential of Open Data to automatically create learning resources for smart learning environments

    Get PDF
    Smart Education requires bridging formal and informal learning experiences. However, how to create contextualized learning resources that support this bridging remains an open problem. In this paper, we propose to exploit the open data available on the Web to automatically create contextualized learning resources. Our preliminary results are promising: our system creates thousands of learning resources related to formal education concepts and physical locations in the student’s local municipality. As part of our future work, we will explore how to integrate these resources into a Smart Learning Environment. Funded by Ministerio de Ciencia e Innovación - Fondo Europeo de Desarrollo Regional (grant TIN2017-85179-C3-2-R) and Junta de Castilla y León - Fondo Europeo de Desarrollo Regional (grant VA257P18).
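
    A minimal sketch of the kind of open-data lookup this describes, assuming DBpedia as the source and Valladolid as the municipality (both illustrative choices; the paper's actual pipeline and endpoints are not shown here):

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Pull geolocated cultural entities for a municipality from DBpedia's
    # public endpoint; the query shape and predicates are assumptions.
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        SELECT ?place ?label ?abstract WHERE {
          ?place dbo:location dbr:Valladolid ;
                 rdfs:label ?label ;
                 dbo:abstract ?abstract .
          FILTER (lang(?label) = "en" && lang(?abstract) = "en")
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        # Each result can seed a contextualized learning resource (e.g., a quiz item).
        print(row["label"]["value"], "->", row["abstract"]["value"][:80])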

    7th International Conference on Technological Ecosystems for Enhancing Multiculturality, TEEM 2019

    Get PDF
    Proceedings of the 7th International Conference on Technological Ecosystems for Enhancing Multiculturality, TEEM 2019, León, Spain, October 16-18, 2019. Smart Education promises personalized learning experiences that bridge formal and informal learning. Our proposal is to exploit the Web of Data to automatically create learning resources that can later be recommended to a learner based on her learning interests and context. For example, a student enrolled in an arts course can get recommendations of learning resources (e.g., a quiz related to a monument she passes by) by exploiting existing geolocalized descriptions of historical buildings in the Web of Data. This paper describes a scenario to illustrate this idea and proposes a software architecture to support it. It also provides some examples of learning resources automatically created with a first prototype of a resource-generator module. Funded by Ministerio de Ciencia, Innovación y Universidades (project grant TIN2017-85179-C3-2-R) and Junta de Castilla y León (research project support programme, ref. VA257P18).
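
    A hedged sketch of the geolocalized lookup in this scenario: buildings within 2 km of a point (here, the centre of León, Spain) fetched from Wikidata. The endpoint, entity ids, and coordinates are illustrative assumptions, not the architecture proposed in the paper.

    import requests

    QUERY = """
    SELECT ?item ?itemLabel WHERE {
      SERVICE wikibase:around {
        ?item wdt:P625 ?loc .
        bd:serviceParam wikibase:center "Point(-5.57 42.60)"^^geo:wktLiteral ;
                        wikibase:radius "2" .
      }
      ?item wdt:P31/wdt:P279* wd:Q41176 .   # instance of (a subclass of) building
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    LIMIT 10
    """
    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "learning-resource-prototype/0.1 (demo)"},
    )
    for binding in resp.json()["results"]["bindings"]:
        print(binding["itemLabel"]["value"])   # each hit can seed a quiz item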

    Measuring Expert Performance at Manually Classifying Domain Entities under Upper Ontology Classes

    Full text link
    Classifying entities in domain ontologies under upper ontology classes is a recommended task in ontology engineering to facilitate semantic interoperability and modelling consistency. Integrating upper ontologies this way is difficult and, despite emerging automated methods, remains a largely manual task. Little is known about how well experts perform at upper ontology integration. To develop methodological and tool support, we first need to understand how well experts do this task. We designed a study to measure the performance of human experts at manually classifying classes in a general knowledge domain ontology with entities in the Basic Formal Ontology (BFO), an upper ontology used widely in the biomedical domain. We conclude that manually classifying domain entities under upper ontology classes is indeed very difficult to do correctly. Given the importance of the task and the high degree of inconsistent classifications we encountered, we further conclude that it is necessary to improve the methodological framework surrounding the manual integration of domain and upper ontologies.
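
    The study measures human performance, so as a hedged illustration, here is one simple way such classification data could be scored: per-entity majority agreement among raters and accuracy against a gold BFO class. The sample data below are invented purely for demonstration and do not come from the paper.

    from collections import Counter

    # Rows: one domain entity each; values: BFO class chosen by each expert.
    ratings = {
        "marathon": ["bfo:Process", "bfo:Process", "bfo:Object"],
        "mountain": ["bfo:Object", "bfo:Object", "bfo:Object"],
    }
    gold = {"marathon": "bfo:Process", "mountain": "bfo:Object"}

    for entity, votes in ratings.items():
        majority, n = Counter(votes).most_common(1)[0]
        agreement = n / len(votes)            # share backing the majority label
        accuracy = sum(v == gold[entity] for v in votes) / len(votes)
        print(f"{entity}: majority={majority} agreement={agreement:.2f} accuracy={accuracy:.2f}")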

    On the Use of Semantic-Based AIG to Automatically Generate Programming Exercises

    Full text link
    In introductory programming courses, proficiency is typically achieved through substantial practice in the form of relatively small assignments and quizzes. Unfortunately, creating programming assignments and quizzes is both time-consuming and error-prone. We use Automatic Item Generation (AIG) to address the problem of creating numerous programming exercises that can be used for assignments or quizzes in introductory programming courses. AIG is based on the use of test-item templates with embedded variables and formulas, which are resolved by a computer program with actual values to generate test items. Thus, hundreds or even thousands of test items can be generated from a single test-item template. We present a semantic-based AIG that uses linked open data (LOD) and automatically generates contextual programming exercises. The approach was incorporated into an existing self-assessment and practice tool for students learning computer programming. The tool has been used in different introductory programming courses to generate a set of practice exercises that differs for each student but has the same difficulty and quality.
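
    A minimal sketch of the AIG idea described above, assuming nothing about the authors' tool: a test-item template with embedded variables and a formula, instantiated into many concrete items. The template text, variable ranges, and distractor rule are invented for illustration.

    import random

    TEMPLATE = "What does the expression {a} {op} {b} evaluate to in Python?"
    OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b, "%": lambda a, b: a % b}

    def generate_item():
        a, b = random.randint(2, 9), random.randint(2, 9)
        op = random.choice(list(OPS))
        key = OPS[op](a, b)                        # the embedded formula resolves the key
        distractors = [key - 1, key + 1, key + 2]  # plausible near-miss values
        return TEMPLATE.format(a=a, op=op, b=b), key, distractors

    # One template yields arbitrarily many distinct items:
    stem, key, distractors = generate_item()
    print(stem, "| key:", key, "| distractors:", distractors)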

    An Algorithm for Generating Gap-Fill Multiple Choice Questions of an Expert System

    Get PDF
    This research proposes an artificial intelligence algorithm comprising an ontology-based design, text mining, and natural language processing for automatically generating gap-fill multiple choice questions (MCQs). The simulation in this research demonstrated an application of the algorithm in generating gap-fill MCQs about software testing. The simulation results revealed that, using 103 online documents as inputs, the algorithm could automatically produce more than 16,000 valid gap-fill MCQs covering a variety of topics in the software testing domain. Finally, in the discussion section of this paper we suggest how the proposed algorithm should be applied to produce gap-fill MCQs for a question pool used by a knowledge-based expert system.
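
    A hedged sketch of the gap-fill step only: blank a known domain term out of a sentence mined from a document and offer sibling terms as distractors. The term list and sentence are illustrative; the paper's algorithm also has ontology-design, text-mining, and NLP stages not shown here.

    import random

    TERMS = ["regression testing", "unit testing", "integration testing", "smoke testing"]
    sentence = "Re-running previously passed tests after a change is called regression testing."

    key = next(t for t in TERMS if t in sentence)   # the term to blank out
    stem = sentence.replace(key, "_____")
    options = random.sample([key] + random.sample([t for t in TERMS if t != key], 3), 4)

    print(stem)
    for i, option in enumerate(options, 1):
        print(f"  {i}. {option}")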

    Can ChatGPT make reading comprehension testing items on par with human experts?

    Get PDF
    Given the recent increased interest in ChatGPT in the L2 teaching and learning community, the present study sought to examine ChatGPT’s potential as a resource for generating L2 assessment materials on par with those created by human experts. To this end, we extracted five reading passages and testing items in the format of multiple-choice questions from the English section of the College Scholastic Ability Test (CSAT) in South Korea. Additionally, we used ChatGPT to generate another set of readings and testing items in the same format. Next, we developed a survey made up of Likert-scale questions and open-ended response questions that asked about participants’ perceptions of various aspects of the target readings and testing items. The participants comprised 50 pre- and in-service teachers, who were not informed of the target materials’ source or the study’s purpose. The survey’s results revealed that the CSAT and ChatGPT-developed readings were perceived as similar in terms of the naturalness of the target passages’ flow and expressions. However, the former were judged to include more attractive multiple-choice options and more complete testing items. Based on these outcomes, we present implications for L2 teaching and future research.
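
    A hedged sketch of how this kind of item generation could be scripted via the OpenAI Python client; the study used the ChatGPT interface itself, and the model name and prompt wording below are illustrative assumptions.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "Write a 150-word reading passage at CEFR B2 level, then one "
        "multiple-choice comprehension question with four options, "
        "marking the correct answer."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)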

    Design and evaluation of an ontology-based tool for generating multiple-choice questions

    Get PDF
    © 2020 Emerald Publishing Limited. This accepted manuscript is deposited under the Creative Commons Attribution Non-commercial International Licence 4.0 (CC BY-NC 4.0). Any reuse is allowed in accordance with the terms outlined by the licence: https://creativecommons.org/licenses/by-nc/4.0/. To reuse the AAM for commercial purposes, permission should be sought by contacting [email protected].

    Purpose: The recent rise in online knowledge repositories and the use of formalisms for structuring knowledge, such as ontologies, has provided the necessary conditions for the emergence of tools for generating knowledge assessments. These tools can be used in the context of interactive computer-assisted assessment (CAA) to provide a cost-effective solution for prompt feedback and increased learner engagement. The purpose of this paper is to describe and evaluate a tool developed by the authors, which generates test questions from an arbitrary domain ontology, based on sound pedagogical principles encapsulated in Bloom’s taxonomy.

    Design/methodology/approach: This paper uses design science as a framework for presenting the research. A total of 5,230 questions were generated from 90 different ontologies, and 81 randomly selected questions were evaluated by 8 CAA experts. Data were analysed using descriptive statistics and the Kruskal-Wallis test for non-parametric analysis of variance.

    Findings: In total, 69 per cent of generated questions were found to be usable for tests and 33 per cent to be of medium to high difficulty. Significant differences in the quality of generated questions were found across different ontologies, strategies for generating distractors, and Bloom’s question levels: the questions testing application of knowledge and the questions using semantic strategies were perceived to be of the highest quality.

    Originality/value: The paper extends the current work in the area of automated test generation in three important directions: it introduces an open-source, web-based tool available to other researchers for experimentation purposes; it recommends practical guidelines for the development of similar tools; and it proposes a set of criteria and a standard format for future evaluation of similar systems.
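
    A sketch of one "semantic" distractor strategy of the kind this evaluation compares: offer the key class's siblings (classes sharing a superclass) as distractors. rdflib and the file name are assumptions; the authors' open-source tool implements several strategies beyond this one.

    import random
    from rdflib import Graph
    from rdflib.namespace import RDFS

    g = Graph().parse("domain.owl")

    def sibling_distractors(key_class, k=3):
        # Siblings = other subclasses of the key's direct superclasses.
        parents = set(g.objects(key_class, RDFS.subClassOf))
        siblings = {s for p in parents for s in g.subjects(RDFS.subClassOf, p)}
        siblings.discard(key_class)
        return random.sample(sorted(siblings), min(k, len(siblings)))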

    Automatic Generation of Educational Quizzes from Domain Ontologies

    Get PDF
    Educational quizzes are very valuable resources to test or evaluate the knowledge acquired by learners and to support lifelong learning on various topics or subjects, in an informal and entertaining way. The production of quizzes is a very time-consuming task, and its automation is thus a real challenge in e-Education. In this paper, we address the research question of how to automate the generation of quizzes by taking advantage of existing knowledge sources available on the Web. We propose an approach that allows learners to take advantage of the knowledge captured in domain ontologies available on the Web and to discover or acquire more in-depth knowledge of a specific domain by solving educational quizzes automatically generated from an ontology modelling the domain. The implementation and experimentation of our approach are presented through the use case of a world-famous French game of manually generated multiple-choice questions.
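
    A minimal sketch, in the spirit of the approach above, of verbalizing ontology triples into quiz stems. The file name, label fallback, and phrasing rule are illustrative assumptions rather than the paper's method.

    from rdflib import Graph
    from rdflib.namespace import RDF, RDFS

    g = Graph().parse("domain.owl")

    def label(node):
        lbl = next(g.objects(node, RDFS.label), None)   # prefer rdfs:label
        return str(lbl) if lbl is not None else str(node).rsplit("/", 1)[-1]

    for s, p, o in g:
        if p in (RDF.type, RDFS.label):
            continue                                    # skip typing/annotation triples
        print(f"Q: What is the {label(p)} of {label(s)}?  A: {label(o)}")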