Knowledge Questions from Knowledge Graphs
We address the novel problem of automatically generating quiz-style knowledge questions from a knowledge graph such as DBpedia. Questions of this kind have ample applications, for instance, to educate users about a specific domain or to evaluate their knowledge of it. To solve the problem, we propose an end-to-end approach. The approach first selects a named entity from the knowledge graph as an answer. It then generates a structured triple-pattern query that yields the answer as its sole result. If a multiple-choice question is desired, the approach selects alternative answer options. Finally, our approach uses a template-based method to verbalize the structured query and yield a natural language question. A key challenge is estimating how difficult the generated question is for human users. To do this, we make use of historical data from the Jeopardy! quiz show and a semantically annotated Web-scale document collection, engineer suitable features, and train a logistic regression classifier to predict question difficulty. Experiments demonstrate the viability of our overall approach.
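The pipeline described above can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the predicate names and templates are invented, and a real system would issue the triple-pattern query against DBpedia rather than assume its result.

```python
# Toy sketch of the generation pipeline: an answer entity is assumed to be
# the sole result of the triple-pattern query (?x, predicate, obj); the
# query is then verbalized with a hand-written template.
TEMPLATES = {
    "capitalOf": "Which city is the capital of {obj}?",
    "authorOf": "Which person wrote {obj}?",
}

def triple_to_question(answer_entity, predicate, obj):
    """Verbalize the triple pattern (?x, predicate, obj).

    `answer_entity` is assumed to be the query's sole solution,
    i.e. the gold answer to the generated question.
    """
    template = TEMPLATES[predicate]
    return template.format(obj=obj), answer_entity

question, answer = triple_to_question("Paris", "capitalOf", "France")
# question -> "Which city is the capital of France?", answer -> "Paris"
```

A multiple-choice variant would additionally sample alternative entities of the same type (e.g. other cities) as distractor options.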
Learning to Reuse Distractors to support Multiple Choice Question Generation in Education
Multiple choice questions (MCQs) are widely used in digital learning systems,
as they allow for automating the assessment process. However, due to the
increased digital literacy of students and the advent of social media
platforms, MCQ tests are widely shared online, and teachers are continuously
challenged to create new questions, which is an expensive and time-consuming
task. A particularly sensitive aspect of MCQ creation is to devise relevant
distractors, i.e., wrong answers that are not easily identifiable as being
wrong. This paper studies how a large existing set of manually created answers
and distractors for questions over a variety of domains, subjects, and
languages can be leveraged to help teachers in creating new MCQs, by the smart
reuse of existing distractors. We built several data-driven models based on
context-aware question and distractor representations, and compared them with
static feature-based models. The proposed models are evaluated with automated
metrics and in a realistic user test with teachers. Both automatic and human
evaluations indicate that context-aware models consistently outperform a static
feature-based approach. For our best-performing context-aware model, on average
3 distractors out of the 10 shown to teachers were rated as high-quality
distractors. We create a performance benchmark, and make it public, to enable
comparison between different approaches and to introduce a more standardized
evaluation of the task. The benchmark contains a test of 298 educational
questions covering multiple subjects and languages and a 77k multilingual pool of
distractor vocabulary for future research.
Comment: 24 pages and 4 figures. Accepted for publication in IEEE Transactions on Learning Technologies.
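The core reuse idea can be illustrated with a hypothetical sketch (not the paper's models): given vector representations of a new question and of previously created distractors, which in the context-aware setting would come from a learned encoder, candidates are ranked by similarity and the top few shown to the teacher.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def rank_distractors(question_vec, pool, k=3):
    """Return the k distractors from `pool` most similar to the question.

    `pool` maps distractor text to its embedding vector (toy 2-d vectors
    here; a real system would use context-aware representations).
    """
    ranked = sorted(pool, key=lambda text: cosine(question_vec, pool[text]),
                    reverse=True)
    return ranked[:k]

# Invented embeddings: two biology-like distractors near the question, one far.
pool = {"mitosis": [0.9, 0.1], "osmosis": [0.8, 0.2], "feudalism": [0.0, 1.0]}
top = rank_distractors([1.0, 0.0], pool, k=2)
# top -> ["mitosis", "osmosis"]
```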
PolyAQG Framework: Auto-generating assessment questions
Designing and setting assessment questions for
examinations is always a necessary task for educators. In this article, we identify the research gaps in (semi-)automatically generating questions by evaluating the approaches developed thus far. We then propose a framework that brings together previous approaches and suggests ways to fill their gaps. One hundred and thirteen pieces of literature relevant to question generation approaches have been reviewed and compared, and the unique techniques of each approach are explained. The PolyAQG Framework is presented with an explanation of how it contributes to solving the problem: by improving the variety of the questions, increasing the total number of possible question selections, and providing better-quality questions. Apart from the framework, another novelty of this work is the innovative way a domain ontology is used to generate a wider variety of questions.
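How an ontology widens question variety can be sketched with a toy example. The ontology and the question stems below are invented for illustration; they are not taken from the PolyAQG Framework itself.

```python
# Toy domain ontology as (child, relation, parent) triples.
ONTOLOGY = [
    ("Mammal", "subClassOf", "Animal"),
    ("Dog", "subClassOf", "Mammal"),
]

def question_variants(child, parent):
    """Several distinct question stems derived from one subClassOf relation."""
    return [
        f"Is {child} a kind of {parent}?",          # yes/no recall
        f"Name a subclass of {parent}.",            # open recall
        f"Which of the following is a {parent}?",   # multiple-choice stem
    ]

variants = [q for child, _, parent in ONTOLOGY
            for q in question_variants(child, parent)]
# Two relations x three stems -> six candidate questions.
```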
A Survey on Knowledge Graphs: Representation, Acquisition and Applications
Human knowledge provides a formal understanding of the world. Knowledge
graphs that represent structural relations between entities have become an
increasingly popular research direction towards cognition and human-level
intelligence. In this survey, we provide a comprehensive review of knowledge
graphs, covering 1) knowledge graph representation learning, 2) knowledge
acquisition and completion, 3) temporal knowledge graphs, and 4)
knowledge-aware applications, and we summarize recent breakthroughs and
prospective directions to facilitate future research. We propose a full-view
categorization and new taxonomies on these topics. Knowledge graph embedding is
organized from four aspects of representation space, scoring function, encoding
models, and auxiliary information. For knowledge acquisition, especially
knowledge graph completion, embedding methods, path inference, and logical rule
reasoning are reviewed. We further explore several emerging topics, including
meta relational learning, commonsense reasoning, and temporal knowledge graphs.
To facilitate future research on knowledge graphs, we also provide a curated
collection of datasets and open-source libraries on different tasks. Finally,
we offer a thorough outlook on several promising research directions.
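To make the "scoring function" aspect of embedding models concrete, here is one classic translational model, TransE, chosen purely as an illustration (the survey covers many more): a triple (h, r, t) is plausible when the head embedding plus the relation embedding lands close to the tail embedding.

```python
import math

def transe_score(h, r, t):
    """TransE plausibility score: -||h + r - t|| (higher = more plausible)."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy 2-d embeddings: in the first triple, head + relation hits the tail
# exactly, so it scores strictly higher than the mismatched second triple.
plausible = transe_score([1.0, 0.0], [0.0, 1.0], [1.0, 1.0])
implausible = transe_score([1.0, 0.0], [0.0, 1.0], [5.0, 5.0])
```

Training would learn the embeddings by pushing plausible triples above corrupted ones; here the vectors are fixed by hand to show only the scoring step.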
Evaluating the quality of the ontology-based auto-generated questions
An ontology is a knowledge representation structure which has been used in Virtual Learning Environments (VLEs) to describe educational courses by capturing their concepts and the relationships between them. Several ontology-based question generators used ontologies to auto-generate questions aimed at assessing students at different levels of Bloom's taxonomy. However, the evaluation of the questions was confined to measuring the qualitative satisfaction of domain experts and students. None of the question generators tested the questions on students and analysed the quality of the auto-generated questions by examining each question's difficulty and its ability to discriminate between high-ability and low-ability students. The lack of quantitative analysis resulted in having no evidence on the quality of the questions, and on how that quality is affected by the ontology-based generation strategies and by the level of the question in Bloom's taxonomy (determined by the question's stem templates). This paper presents an experiment carried out to address the drawbacks mentioned above by achieving two objectives. First, it assesses the auto-generated questions' difficulty, discrimination, and reliability using two statistical methods: Classical Test Theory (CTT) and Item Response Theory (IRT). Second, it studies the effect of the ontology-based generation strategies and of the level of the questions in Bloom's taxonomy on the quality of the questions. This will provide guidance for developers and researchers working in the field of ontology-based question generators, and help build a prediction model using machine learning techniques.