
    Developing Student Model for Intelligent Tutoring System

    The effectiveness of an e-learning environment depends mainly on how effectively the tutor presents the learning content to the candidate according to their learning capability. It is therefore essential for the teaching community to understand their students' learning styles and to cater to their needs. One system that can cater to the needs of students is the Intelligent Tutoring System (ITS), and to overcome the challenges faced by teachers, e-learning experts have in recent times focused on ITSs. There is sufficient literature suggesting that meaningful, constructive and adaptive feedback is the essential feature of ITSs, and it is such feedback that helps students achieve strong learning gains. Within an ITS, it is the student model that plays the main role in planning the training path and supplying feedback information to the pedagogical module of the system. Moreover, the student model is the primary component that stores the information about each individual learner. In this study, multiple-choice questions (MCQs) were administered to capture student ability at three levels of difficulty, namely low, medium and high, in the Physics domain in order to train a neural network. Neural network and psychometric analyses were then used to understand student characteristics and to classify students according to their ability. Thus, this study focused on developing a student model from MCQ data for integration with an ITS by applying neural network and psychometric analysis. The findings of this research showed that even though the linear regression between the real test scores and the final exam scores was marginally weak (37%), the success of the student classification, at roughly 80 percent (79.8%), makes this student model a good fit for clustering students into groups according to their common characteristics. This finding is in line with the findings discussed in the literature review of this study. Further, the outcome of this research is likely to open a new dimension for cluster-based student modelling approaches in online learning environments that use aptitude tests (MCQs) with an ITS. The use of psychometric analysis and a neural network for student classification makes this study unique in the development of a new student model for ITSs supporting online learning. Therefore, the student model developed in this study appears to be a good fit for anyone who wishes to adopt an aptitude-test-based student modelling approach in an ITS for an online learning environment. (Abstract by Author)
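    As an illustration of the classification step described above, the following is a minimal sketch (not the authors' code) of training a small neural network to group students by ability from their MCQ scores at the three difficulty levels; the feature layout, labels, and data are hypothetical.

```python
# Minimal sketch: classify students into ability groups from MCQ performance
# at three difficulty levels using a small neural network. Hypothetical data.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row: fraction of low-, medium-, high-difficulty MCQs answered correctly.
X = np.array([
    [0.90, 0.80, 0.60],
    [0.80, 0.50, 0.20],
    [0.40, 0.20, 0.10],
    [0.95, 0.85, 0.70],
    [0.70, 0.40, 0.15],
    [0.30, 0.10, 0.05],
])
y = ["high", "medium", "low", "high", "medium", "low"]  # ability clusters

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Classify a new student's MCQ performance profile.
print(clf.predict([[0.85, 0.60, 0.30]]))
```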

    A Systematic Review of Data-Driven Approaches to Item Difficulty Prediction.

    Assessment quality and validity are heavily reliant on the quality of the items included in an assessment or test. Difficulty is an essential factor in determining the overall quality of items and tests. Therefore, item difficulty prediction is extremely important in any pedagogical learning environment. Data-driven approaches to item difficulty prediction are gaining more and more prominence, as demonstrated by the recent literature. In this paper, we provide a systematic review of data-driven approaches to item difficulty prediction. Of the 148 papers identified that cover item difficulty prediction, 38 were selected for the final analysis. A classification of the different approaches used to predict item difficulty is presented, together with the current practices for item difficulty prediction with respect to the learning algorithms used and the most influential difficulty features that were investigated.
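    To make the idea concrete, below is a minimal sketch of a data-driven difficulty predictor of the kind surveyed here: a regression model mapping item features to an empirical difficulty value. The feature set, data, and choice of a random forest are hypothetical illustrations, not the review's recommendations.

```python
# Minimal sketch of data-driven item difficulty prediction. Hypothetical features
# and difficulty values; any regression model could stand in for the random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical item features: [stem word count, option count, avg. term rarity]
X = np.array([
    [12, 4, 0.2],
    [35, 4, 0.7],
    [20, 5, 0.4],
    [50, 4, 0.9],
    [15, 3, 0.3],
])
# Target: empirical difficulty (proportion of test-takers answering incorrectly).
y = np.array([0.25, 0.70, 0.45, 0.85, 0.30])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[28, 4, 0.5]]))   # predicted difficulty for a new item
print(model.feature_importances_)      # which features drive the prediction
```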

    Ontology-Based Multiple Choice Question Generation

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework.
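    The following is a minimal, self-contained sketch of one common ontology-based generation strategy (a class-membership stem with distractors drawn from sibling classes). It is not the OntoQue system; the toy ontology and question template are hypothetical.

```python
# Minimal sketch of class-membership MCQ generation from an ontology-like
# structure: the key comes from the target class, distractors from its siblings.
# The toy "ontology" below is a hypothetical illustration.
import random

ontology = {
    "Mammal":  {"individuals": ["dog", "whale"],      "siblings": ["Bird", "Reptile"]},
    "Bird":    {"individuals": ["eagle", "penguin"],  "siblings": ["Mammal", "Reptile"]},
    "Reptile": {"individuals": ["gecko", "turtle"],   "siblings": ["Mammal", "Bird"]},
}

def generate_mcq(concept, n_distractors=3):
    key = random.choice(ontology[concept]["individuals"])
    distractor_pool = [
        ind for sib in ontology[concept]["siblings"]
        for ind in ontology[sib]["individuals"]
    ]
    options = random.sample(distractor_pool, min(n_distractors, len(distractor_pool)))
    options.append(key)
    random.shuffle(options)
    return {"stem": f"Which of the following is a {concept}?",
            "options": options, "key": key}

print(generate_mcq("Mammal"))
```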

    Ontology Validation & Utilisation For Personalised Feedback In Education

    Virtual Learning Environments provide teachers with a web-based platform to create different types of feedback, which vary in the level of detail given in the feedback content. Types of feedback can range from a simple correct/incorrect indication to a detailed explanation of why the correct answer is correct and the incorrect answer is incorrect. However, these environments usually follow a ‘one size fits all’ approach and provide all students with the same type of feedback, regardless of the student's individual characteristics and the assessment question's individual characteristics. This approach is likely to negatively affect students' performance and learning gain. Several personalised feedback frameworks have been proposed which adapt the different types of feedback based on the student characteristics and/or the assessment question characteristics. These frameworks have three drawbacks: firstly, creating the different types of feedback is a time-consuming process, as the types of feedback are either hard-coded or auto-generated from a restricted set of solutions created by the teacher or a domain expert; secondly, they are domain dependent and cannot be used to auto-generate feedback across different educational domains; thirdly, they have not attempted any integration which takes into consideration both the characteristics of the assessment questions and the student's characteristics. This thesis contributes to research on personalised feedback frameworks by proposing a generic novel system called the Ontology-based Personalised Feedback Generator (OntoPeFeGe). OntoPeFeGe has three aims: firstly, it uses any pre-existing domain ontology, which is a knowledge representation of the educational domain, to auto-generate assessment questions with different characteristics, in particular questions aimed at assessing students at different levels in Bloom's taxonomy; secondly, it associates each auto-generated question with specialised, domain-independent types of feedback; thirdly, it provides students with personalised feedback which adapts the types of feedback based on the student and the assessment question characteristics. OntoPeFeGe allows, for the first time, the integration of the student's characteristics, the assessment question's characteristics, and the personalised feedback. The experimental results from applying OntoPeFeGe in a real educational environment revealed that the personalised feedback particularly improved the performance of students with low initial background knowledge. Moreover, the personalised feedback significantly improved students' learning gain on questions designed to assess the students at high levels in Bloom's taxonomy. In addition, OntoPeFeGe is the first prototype to quantitatively analyse the quality of auto-generated questions and tests, and to provide question design guidance for developers and researchers working in the field of question generators. OntoPeFeGe could be applied to any educational field captured in an ontology. However, assessing how suitable the ontology is for generating questions and feedback, as well as how well it represents the subject domain of interest, is a necessary requirement for using the ontology in OntoPeFeGe. Therefore, this thesis also presents a novel method termed the Terminological ONtology Evaluator (TONE), which uses an educational corpus (e.g., textbooks and lecture slides) to evaluate domain ontologies. TONE has been evaluated experimentally, showing its potential as an evaluation method for educational ontologies.
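    As a concrete illustration of this kind of feedback personalisation, the sketch below picks a feedback type from a student's background knowledge and a question's Bloom level. The mapping rules are hypothetical, not OntoPeFeGe's actual adaptation logic.

```python
# Minimal sketch of a personalised feedback rule: choose how detailed the
# feedback should be from the student's background knowledge and the question's
# Bloom level. The mapping below is a hypothetical illustration.
def select_feedback_type(student_knowledge: str, bloom_level: int) -> str:
    """student_knowledge: 'low' or 'high'; bloom_level: 1 (remember) .. 6 (create)."""
    if student_knowledge == "low" or bloom_level >= 4:
        # Struggling students and higher-order questions get elaborated feedback
        # explaining why the correct answer is correct and the chosen one is not.
        return "elaborated"
    if bloom_level >= 2:
        return "knowledge-of-correct-response"   # show the correct answer only
    return "knowledge-of-result"                 # simple correct/incorrect flag

print(select_feedback_type("low", 5))   # -> elaborated
print(select_feedback_type("high", 1))  # -> knowledge-of-result
```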

    Evaluating the quality of the ontology-based auto-generated questions

    An ontology is a knowledge representation structure which has been used in Virtual Learning Environments (VLEs) to describe educational courses by capturing the concepts and the relationships between them. Several ontology-based question generators have used ontologies to auto-generate questions aimed at assessing students at different levels in Bloom's taxonomy. However, the evaluation of the questions was confined to measuring the qualitative satisfaction of domain experts and students. None of the question generators tested the questions on students and analysed the quality of the auto-generated questions by examining the question's difficulty and the question's ability to discriminate between high-ability and low-ability students. The lack of quantitative analysis resulted in having no evidence on the quality of questions, and on how the quality is affected by the ontology-based generation strategies and the level of the question in Bloom's taxonomy (determined by the question's stem templates). This paper presents an experiment carried out to address the drawbacks mentioned above by achieving two objectives. First, it assesses the auto-generated questions' difficulty, discrimination, and reliability using two statistical methods: Classical Test Theory (CTT) and Item Response Theory (IRT). Second, it studies the effect of the ontology-based generation strategies and the level of the questions in Bloom's taxonomy on the quality of the questions. This will provide guidance for developers and researchers working in the field of ontology-based question generators, and help build a prediction model using machine learning techniques.
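    For reference, the CTT statistics mentioned above can be computed directly from a student-by-item response matrix: the difficulty index is the proportion of correct responses per item, and a simple discrimination index is the difference in proportion correct between the top- and bottom-scoring groups. The sketch below uses a hypothetical response matrix.

```python
# Minimal sketch of Classical Test Theory item statistics: difficulty
# (proportion correct) and discrimination (upper minus lower group proportion).
# The response matrix is a hypothetical illustration.
import numpy as np

# Rows: students, columns: items; 1 = correct, 0 = incorrect.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
])

difficulty = responses.mean(axis=0)          # CTT difficulty index per item

totals = responses.sum(axis=1)               # total score per student
order = np.argsort(totals)
n_group = max(1, len(totals) // 3)           # top and bottom ~third of students
lower, upper = order[:n_group], order[-n_group:]
discrimination = responses[upper].mean(axis=0) - responses[lower].mean(axis=0)

print("difficulty:", difficulty)
print("discrimination:", discrimination)
```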

    Towards natural language question generation for the validation of ontologies and mappings

    The increasing number of open-access ontologies and their key role in several applications, such as decision-support systems, highlights the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point of view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. Methods: We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as multiple choice questions. Results: This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mapping validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. Conclusions: The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. The results also suggest that taking into account RDFS and OWL entailment helps reduce the number of questions and the validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mappings over time and highlights the importance of semi-automatic validation.
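    A minimal sketch of the core idea, turning ontology axioms into natural-language validation questions for a domain expert, is shown below. The axioms and sentence template are hypothetical illustrations, not the paper's generation or optimization method.

```python
# Minimal sketch: render ontology subclass axioms as yes/no validation questions
# for a domain expert. The axiom list and phrasing are hypothetical illustrations.
subclass_axioms = [
    ("Myocardial infarction", "Heart disease"),
    ("Type 2 diabetes", "Endocrine disorder"),
]

def axiom_to_question(child: str, parent: str) -> str:
    return f"Is every case of '{child}' also a kind of '{parent}'? (yes/no)"

for child, parent in subclass_axioms:
    print(axiom_to_question(child, parent))
```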

    Algorithms for assessing the quality and difficulty of multiple choice exam questions

    Multiple Choice Questions (MCQs) have long been the backbone of standardized testing in academia and industry. Correspondingly, there is a constant need for the authors of MCQs to write and refine new questions for new versions of standardized tests, as well as to support measuring performance in the emerging massive open online courses (MOOCs). Research that explores what makes a question difficult, or which questions distinguish higher-performing students from lower-performing students, can aid in the creation of the next generation of teaching and evaluation tools. In the automated MCQ answering component of this thesis, algorithms query for definitions of scientific terms, process the returned web results, and compare the returned definitions to the original definition in the MCQ. This automated method for answering questions is then augmented with a model, based on human performance data from crowdsourced question sets, for analysis of question difficulty as well as the discrimination power of the non-answer alternatives. The crowdsourced question sets come from PeerWise, an open-source online college-level question authoring and answering environment. The goal of this research is to create an automated method to both answer and assess the difficulty of multiple choice inverse definition questions in the domain of introductory biology. The results of this work suggest that human-authored question banks provide useful data for building gold-standard human performance models. The methodology for building these performance models has value in other domains that test the difficulty of questions and the quality of the exam takers.
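    The definition-matching idea can be illustrated with a minimal sketch: retrieve a definition of the stem term, compare it to each answer option with a similarity measure, and choose the closest option. The toy question and token-overlap similarity below are hypothetical simplifications, not the thesis algorithms.

```python
# Minimal sketch of answering an MCQ by matching a retrieved definition against
# the options. Toy data and Jaccard token overlap are hypothetical simplifications.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

retrieved_definition = "an organelle that produces energy for the cell"
options = {
    "A": "structure that generates most of the cell's chemical energy",
    "B": "membrane that encloses the nucleus",
    "C": "pigment used in photosynthesis",
    "D": "protein that catalyses reactions",
}

# Pick the option most similar to the definition retrieved from the web.
best = max(options, key=lambda k: jaccard(retrieved_definition, options[k]))
print("Predicted answer:", best)
```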
