
    An Algorithm for Generating Gap-Fill Multiple Choice Questions of an Expert System

    This research aims to propose an artificial intelligence algorithm comprising an ontology-based design, text mining, and natural language processing for automatically generating gap-fill multiple choice questions (MCQs). The simulation in this research demonstrated an application of the algorithm to generating gap-fill MCQs about software testing. The simulation results revealed that, using 103 online documents as inputs, the algorithm could automatically produce more than 16,000 valid gap-fill MCQs covering a variety of topics in the software testing domain. Finally, in the discussion section of this paper we suggest how the proposed algorithm could be applied to produce gap-fill MCQs collected in a question pool used by a knowledge expert system.
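    The abstract does not include the algorithm itself; as a rough, minimal sketch of the gap-fill idea it describes, a sentence mined from a domain document could become the stem once a key term is blanked out, with distractors drawn from other terms in a domain glossary. Everything below (function name, toy glossary, example sentence) is an illustrative assumption, not the paper's implementation.

```python
# Hypothetical sketch of gap-fill MCQ generation, not the paper's algorithm:
# blank a key term in a mined sentence and sample distractors from a glossary.
import random

def make_gapfill(sentence: str, key_term: str, glossary: list[str], n_distractors: int = 3):
    """Blank out `key_term` in `sentence` and sample distractors from `glossary`."""
    if key_term not in sentence:
        return None
    stem = sentence.replace(key_term, "_____")
    distractors = random.sample([t for t in glossary if t != key_term], n_distractors)
    options = distractors + [key_term]
    random.shuffle(options)
    return {"stem": stem, "options": options, "answer": key_term}

glossary = ["boundary value analysis", "equivalence partitioning",
            "regression testing", "mutation testing"]
question = make_gapfill(
    "In software testing, regression testing re-runs existing test cases after a code change.",
    "regression testing", glossary)
print(question)
```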

    Design and evaluation of an ontology-based tool for generating multiple-choice questions

    Purpose: The recent rise in online knowledge repositories and the use of formalisms for structuring knowledge, such as ontologies, have provided the necessary conditions for the emergence of tools for generating knowledge assessments. These tools can be used in the context of interactive computer-assisted assessment (CAA) to provide a cost-effective solution for prompt feedback and increased learner engagement. The purpose of this paper is to describe and evaluate a tool developed by the authors, which generates test questions from an arbitrary domain ontology, based on sound pedagogical principles encapsulated in Bloom’s taxonomy. Design/methodology/approach: The paper uses design science as a framework for presenting the research. A total of 5,230 questions were generated from 90 different ontologies, and 81 randomly selected questions were evaluated by 8 CAA experts. Data were analysed using descriptive statistics and the Kruskal–Wallis test for non-parametric analysis of variance. Findings: In total, 69 per cent of generated questions were found to be usable for tests and 33 per cent to be of medium to high difficulty. Significant differences in the quality of generated questions were found across different ontologies, strategies for generating distractors, and Bloom’s question levels: the questions testing application of knowledge and the questions using semantic strategies were perceived to be of the highest quality. Originality/value: The paper extends current work in the area of automated test generation in three important directions: it introduces an open-source, web-based tool available to other researchers for experimentation purposes; it recommends practical guidelines for the development of similar tools; and it proposes a set of criteria and a standard format for future evaluation of similar systems.
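    As an illustration of the kind of ontology-driven item generation described above, the sketch below builds a recognition question from toy is-a relations and draws distractors from classes in other branches of the hierarchy, loosely in the spirit of the semantic distractor strategies mentioned in the findings. It is a hypothetical example, not the authors' tool; the toy ontology and function names are assumptions.

```python
# Hypothetical sketch: generate a recognition MCQ from is-a relations in a toy
# ontology, using classes from other branches of the hierarchy as distractors.
import random

parent_of = {                      # toy ontology: class -> superclass
    "UnitTest": "Test",
    "IntegrationTest": "Test",
    "SystemTest": "Test",
    "TestPlan": "Document",
    "BugReport": "Document",
}

def mcq_for(answer: str, n_distractors: int = 3):
    parent = parent_of[answer]
    # Classes under a different superclass cannot also be correct answers.
    candidates = [c for c, p in parent_of.items() if p != parent]
    distractors = random.sample(candidates, min(n_distractors, len(candidates)))
    options = distractors + [answer]
    random.shuffle(options)
    return f"Which of the following is a kind of {parent}?", options, answer

stem, options, key = mcq_for("UnitTest")
print(stem, options, "answer:", key)
```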

    Automatic Distractor Generation for Multiple Choice Questions in Standard Tests

    To assess the knowledge proficiency of a learner, the multiple choice question is an efficient and widespread format in standard tests. However, composing multiple choice questions, and especially constructing distractors, is quite challenging. Distractors are required to be both incorrect and plausible enough to confuse learners who have not mastered the knowledge. Currently, distractors are written by domain experts, which is both expensive and time-consuming. This motivates automatic distractor generation, which can benefit various standard tests in a wide range of domains. In this paper, we propose a question and answer guided distractor generation (EDGE) framework to automate distractor generation. EDGE consists of three major modules: (1) the Reforming Question Module and (2) the Reforming Passage Module apply gate layers to guarantee the inherent incorrectness of the generated distractors, while (3) the Distractor Generator Module applies an attention mechanism to control the level of plausibility. Experimental results on a large-scale public dataset demonstrate that our model significantly outperforms existing models and achieves a new state of the art. (Accepted by COLING 2020.)
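    The gating idea summarized above could be sketched loosely as follows: suppress passage representations at positions that align closely with the answer, so the decoder is steered away from regenerating the correct answer. This PyTorch fragment is a hypothetical illustration, not the EDGE architecture; the module name, pooling choice, and tensor shapes are assumptions.

```python
# Hypothetical sketch of an answer-suppression gate, not the EDGE implementation.
import torch
import torch.nn as nn

class AnswerSuppressionGate(nn.Module):
    """Gate passage states using their similarity to a pooled answer vector."""
    def __init__(self, hidden: int):
        super().__init__()
        self.proj = nn.Linear(2 * hidden, hidden)

    def forward(self, passage: torch.Tensor, answer: torch.Tensor) -> torch.Tensor:
        # passage: (batch, seq_len, hidden); answer: (batch, ans_len, hidden)
        ans = answer.mean(dim=1, keepdim=True).expand(-1, passage.size(1), -1)
        gate = torch.sigmoid(self.proj(torch.cat([passage, ans], dim=-1)))
        return passage * (1.0 - gate)   # downweight answer-like positions

states = AnswerSuppressionGate(64)(torch.randn(2, 10, 64), torch.randn(2, 3, 64))
print(states.shape)  # torch.Size([2, 10, 64])
```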

    An Intelligent Approach to Automatic Query Formation from Plain Text using Artificial Intelligence

    Humans have always been inherently curious creatures. They ask questions to satisfy their curiosity. For example, children ask questions to learn more from their teachers, teachers ask questions to help evaluate student performance, and we all ask questions in our daily lives. Numerous learning exchanges, ranging from one-on-one tutoring sessions to thorough exams, as well as real-life debates, rely heavily on questions. Notably, humans are often inept at asking appropriate questions because of their inconsistency in particular contexts, and it has been discovered that most people have difficulty identifying their own knowledge gaps. This is our primary motivation for automating question generation, in the hope that an automated Question Generation (QG) system will help humans meet their inquiry needs. QG and Information Extraction (IE) have become two major topics for the language processing community, and QG has recently become an important component of learning environments, systems, and information-seeking systems, among other applications. The Text-to-Question generation task has attracted the interest of the Natural Language Processing (NLP), Natural Language Generation (NLG), Intelligent Tutoring System (ITS), and Information Retrieval (IR) communities as a candidate shared task. In the Text-to-Question generation task, a text is submitted to a QG system whose purpose is to create a series of questions for which the text contains answers (such as a word, a set of words, a single sentence, a text, a set of texts, a stretch of conversational dialogue, an inadequate query, and so on).

    The Relationship between Testing Frequency and Student Achievement in Eighth-Grade Mathematics: An International Comparative Study Based on TIMSS 2011

    The purpose of this study was to examine the relationship between quiz frequency and student achievement in eighth-grade mathematics as measured by TIMSS. The more specific goal was to determine which quiz frequency (daily, weekly, monthly, or no quizzes) had the strongest relationship with student achievement in an eighth-grade mathematics course. The study investigated this relationship in all participating countries combined, as well as in four specific countries: Korea, Singapore, Turkey, and the United States. Another goal was to determine the quizzing practices of high-performing and low-performing countries, and the relationship between quiz frequency and student achievement in those countries. The study obtained data from the TIMSS 2011 exam and from student, teacher, and school questionnaires. In addition to quizzing practices, student and school SES data were used as control variables. Quiz frequency data (the independent variable) were retrieved from teacher questionnaires, socioeconomic status (SES) data (control variables) were retrieved from student and school questionnaires, and student achievement data were retrieved from the TIMSS 2011 exam. Several multiple linear regressions were performed to determine whether quiz frequency is a significant predictor of student achievement in all countries combined, as well as in individual countries. Regression results indicated that quiz frequency is not a significant contributor to student achievement in eighth-grade mathematics, either in all countries combined or in individual countries, after controlling for SES variables. Furthermore, the results indicated that weekly quizzes had the strongest relationship in all countries, monthly quizzes in the top two performing countries (Korea and Singapore), and daily quizzes in Turkey and the United States. Results also indicated that almost all teachers use quizzes. Moreover, the study found that SES is a significant contributor to student achievement, and that student achievement increased significantly and consistently as student SES improved.
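    For readers unfamiliar with this kind of analysis, the sketch below shows an OLS regression of achievement on quiz-frequency dummies with SES controls, run on synthetic data; the variable names and coding are placeholders rather than the study's actual TIMSS 2011 variables.

```python
# Hedged sketch of a regression with quiz-frequency dummies and SES controls,
# on synthetic data; not the study's actual TIMSS 2011 model or coding.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "achievement": rng.normal(500, 80, n),
    "quiz_freq": rng.choice(["daily", "weekly", "monthly", "never"], n),
    "student_ses": rng.normal(0, 1, n),
    "school_ses": rng.normal(0, 1, n),
})

# Quiz frequency enters as a categorical predictor; SES variables are controls.
model = smf.ols(
    "achievement ~ C(quiz_freq, Treatment('never')) + student_ses + school_ses",
    data=df,
).fit()
print(model.summary())
```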

    Ontologies for automatic question generation

    Assessment is an important tool for formal learning, especially in higher education. At present, many universities use online assessment systems where questions are entered manually into a question bank. This kind of system requires the instructor’s time and effort to construct questions manually. The main aim of this thesis is, therefore, to contribute to the investigation of new question generation strategies for short/long answer questions, in order to allow for the development of automatic factual question generation from an ontology for educational assessment purposes. This research is guided by four research questions: (1) How well can an ontology be used for generating factual assessment questions? (2) How can questions be generated from a course ontology? (3) Are the ontological question generation strategies able to generate acceptable assessment questions? and (4) Is topic-based indexing able to improve the feasibility of AQGen? We first conduct ontology validation to evaluate the appropriateness of concept representation using a competency question approach. We used revision questions from the textbook to match keywords (in the revision questions) against concepts (in the ontology). The results show that only half of the ontology concepts matched the keywords. We investigated the unmatched concepts further, found some incorrect concept naming, and suggest a guideline for appropriate concept naming. At the same time, we introduce validation of ontology completeness using revision questions as competency questions. Furthermore, we propose 17 short/long answer question templates for 3 question categories, namely definition, concept completion and comparison. In the subsequent part of the thesis, we develop the AQGen tool and evaluate the generated questions. Two Computer Science subjects, namely OS and CNS, were chosen to evaluate the AQGen-generated questions. We conducted a questionnaire survey of 17 domain experts to identify experts’ agreement on the acceptability of the generated questions. The experts’ agreement on acceptability is favourable, and three of the four QG strategies proposed can generate acceptable questions; thousands of questions were generated across the 3 question categories. AQGen was then extended with question selection to produce a feasible question set from the large number of questions generated. We propose topic-based indexing to assert knowledge about topic chapters into the ontology representation for question selection, and the topic indexing shows feasible results for filtering questions by topic. Finally, our results contribute to an understanding of ontology element representation for question generation and how to automatically generate questions from an ontology for educational assessment.
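    A minimal sketch of template-based generation for the three question categories named above (definition, concept completion, comparison) might look like the following; the templates and toy concept definitions are hypothetical and are not AQGen's actual 17 templates.

```python
# Hypothetical templates for the three question categories; not AQGen's templates.
concepts = {
    "process": "a program in execution",
    "thread": "the smallest unit of execution scheduled by the operating system",
}

def definition_q(concept):
    return f"Define the term '{concept}'."

def completion_q(concept, definition):
    return f"{definition.capitalize()} is called a ____. (Answer: {concept})"

def comparison_q(concept_a, concept_b):
    return f"Compare and contrast '{concept_a}' and '{concept_b}'."

questions = [definition_q(c) for c in concepts]
questions += [completion_q(c, d) for c, d in concepts.items()]
questions += [comparison_q("process", "thread")]
for q in questions:
    print(q)
```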

    A misleading answer generation system for exam questions

    University professors are responsible for teaching and grading their students each semester. Normally, in order to evaluate students’ progress, professors create exams composed of questions about the subjects taught in the teaching period. Each year, professors need to develop new questions for their exams, since students are free to discuss and record the correct answers to questions on prior exams, and professors want to grade students based on their knowledge rather than their memorization skills. Our research found that professors spend roughly two and a half hours per year, for a single course, just on multiple-choice question sections. Our solution has at its core a misleading answer generator intended to reduce the time and effort of creating gap-fill questions by merging a lexical model highly biased towards a specific subject with a generalist model. To help as many professors as possible with this task, a web server was implemented that provides access to an exam creation interface with the misleading answer generation feature. To implement this feature, several accessory programs had to be created, and textbooks pertaining to the questions’ base topic had to be manually edited. To evaluate the effectiveness of our implementation, several evaluation methods were proposed, comprising objective measurements of the misleading answer generator as well as subjective evaluation based on expert input. The development of the misleading answer suggestion function required us to build a lexical model from a corpus highly biased towards a specific curricular subject. A highly biased model is likely to give good in-context misleading answers, but their variance would most likely be limited; to counteract this, the model was merged with a generalist model in the hope of improving its overall performance. With the custom lexical model and the server, professors can receive misleading answer suggestions for a newly formed question, reducing the time spent creating new exam questions each year to assess students’ knowledge.
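    The core idea, ranking distractor candidates by a blend of similarity scores from a subject-biased embedding model and a generalist one, could be sketched roughly as below using gensim word vectors; the toy corpora, blend weight, and function names are assumptions, not the thesis's implementation.

```python
# Rough sketch: blend similarities from a domain-biased model and a generalist
# model to rank misleading-answer candidates; toy data, not the thesis's code.
from gensim.models import Word2Vec

domain_sents = [["the", "mitochondria", "produces", "atp"],
                ["the", "ribosome", "builds", "proteins"],
                ["the", "nucleus", "stores", "dna"]]
general_sents = [["the", "piano", "plays", "music"],
                 ["the", "nucleus", "of", "an", "atom"],
                 ["the", "mitochondria", "is", "an", "organelle"]]

domain_kv = Word2Vec(domain_sents, vector_size=16, min_count=1, seed=1).wv
general_kv = Word2Vec(general_sents, vector_size=16, min_count=1, seed=1).wv

def misleading_answers(correct, candidates, alpha=0.7, k=3):
    """Rank candidates by alpha * domain similarity + (1 - alpha) * general similarity."""
    def sim(kv, a, b):
        return kv.similarity(a, b) if a in kv and b in kv else 0.0
    scored = sorted(
        (c for c in candidates if c != correct),
        key=lambda c: alpha * sim(domain_kv, correct, c)
                      + (1 - alpha) * sim(general_kv, correct, c),
        reverse=True,
    )
    return scored[:k]

print(misleading_answers("mitochondria", ["ribosome", "chloroplast", "piano", "nucleus"]))
```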