Optimising ITS behaviour with Bayesian networks and decision theory
We propose and demonstrate a methodology for building tractable normative intelligent tutoring systems (ITSs). A normative ITS uses a Bayesian network for long-term student modelling and decision theory to select the next tutorial action. Because normative theories are a general framework for rational behaviour, they can be used to both define and apply learning theories in a rational, and therefore optimal, way. This contrasts with the more traditional approach of using an ad hoc scheme to implement the learning theory. A key step of the methodology is the induction and continual adaptation of the Bayesian network student model from student performance data, a step that is distinct from other recent Bayesian net approaches in which the network structure and probabilities are either chosen beforehand by an expert, or by efficiency considerations. The methodology is demonstrated by a description and evaluation of CAPIT, a normative constraint-based tutor for English capitalisation and punctuation. Our evaluation results show that a class using the full normative version of CAPIT learned the domain rules at a faster rate than the class that used a non-normative version of the same system.
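The decision-theoretic step described above can be sketched generically: pick the tutorial action that maximises expected utility under the student model's current belief state. The actions, outcome probabilities, and utilities below are illustrative placeholders, not CAPIT's actual model.

```python
# Sketch of normative action selection: choose the tutorial action with the
# highest expected utility over possible learning outcomes, weighted by the
# student model's belief state. All numbers here are hypothetical.

def expected_utility(action, belief, outcome_probs, utility):
    """Sum utility over outcomes, weighted by P(outcome | action, belief)."""
    return sum(outcome_probs[action][belief][o] * utility[o]
               for o in outcome_probs[action][belief])

def select_action(actions, belief, outcome_probs, utility):
    """Return the action with the highest expected utility."""
    return max(actions,
               key=lambda a: expected_utility(a, belief, outcome_probs, utility))

# Toy model: belief state is "low" mastery; two candidate tutorial actions.
outcome_probs = {
    "give_hint":    {"low": {"learns": 0.6, "stalls": 0.4}},
    "next_problem": {"low": {"learns": 0.3, "stalls": 0.7}},
}
utility = {"learns": 1.0, "stalls": -0.2}

best = select_action(["give_hint", "next_problem"], "low", outcome_probs, utility)
print(best)  # "give_hint": EU 0.52 vs 0.16
```

The point of the normative framing is exactly this separation: the learning theory lives in the probabilities and utilities, while action selection is always the same expected-utility maximisation.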
An automated test assembly for unidimensional IRT tests containing cognitive diagnostic elements
Large-scale assessments are typically administered numerous times per year using
parallel test forms. The traditional methods of constructing parallel test forms are based
on manually selecting items for given test specifications such as content balancing. These
methods are cumbersome, time consuming, and inefficient. To overcome these
problems, an automated test assembly has been used successfully in test construction to
assemble conventional IRT tests (van der Linden, 1994). However, these conventional
large-scale assessments only provide a single summary score that indicates the overall
performance level or achievement level of a student in a single learning area. For
assessments to be more effective, tests should provide useful diagnostic information in
addition to single overall scores. One approach is to use Cognitive Diagnosis modeling.
The purpose of this research is to develop an algorithm for generating information-rich
tests by combining Cognitive Diagnosis with the traditional IRT approach that not only
produce a single score to measure an examinee’s ability level but also provide diagnostic
information. This study describes a new method of automated test assembly, which
incorporates diagnostic techniques with existing IRT-based testing assembly methods.
The purpose of Cognitive Diagnosis modeling is to provide useful information by
estimating individual knowledge states by assessing whether an examinee has mastered
specific attributes measured by the test (Embretson, 1990; DiBello, Stout, & Roussos,
1995; Tatsuoka, 1995). Attributes are skills or cognitive processes that are required to
perform correctly on a particular item. If standardized testing could incorporate
assessments of the various attributes constituting the item, then students, parents, and
teachers would be able to see where a student stands with respect to mastering the item.
Such information could be used to guide the learner toward areas requiring more study.
Helping students to identify their intellectual strengths and weaknesses is more
informative and instructive than simply giving them a single score that represents their
overall ability. By being able to assess where they stand in regard to the attributes that
compose an item, students can plan a more effective learning path to reach desired
proficiency levels.
Even though Cognitive Diagnosis has attracted considerable attention from
researchers, few studies have described how to assemble a test that conforms to given
cognitive criteria. If such a test could be assembled, it would provide more specific
identification of the areas where students need to improve their skills. Also, it would
provide diagnostic feedback to teachers, who could then address the specific needs of
individual students. In this way, the test becomes an active tool in the educational
process rather than just a passive score report.
The proposed automated test assembly method and its corresponding computer
algorithm will be developed to construct tests automatically from a given item bank while
assuring the tests conform to specifications from both conventional IRT scaling and the
Cognitive Diagnostic aspects. The method employs the commonly used Zero-One (0/1)
Linear Programming Method. This study describes a new method of automated test
assembly, which incorporates diagnostic techniques with existing IRT-based testing
assembly methods using Maxmin, Minimax, and Maximum Information Methods. A
major goal of this research is to identify a set of the most reasonable constraints in
Cognitive Diagnosis and to integrate those new constraints into traditional IRT scaling.
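The maximin criterion mentioned above can be illustrated with a toy assembler: choose n items from a bank so as to maximise the minimum Fisher information over a set of target ability points, subject to a per-skill content constraint. The 2PL item parameters and skill labels below are invented for illustration, and brute-force enumeration stands in for the 0/1 linear programming solver a real assembler would use.

```python
# Illustrative maximin 0/1 test assembly over a tiny hypothetical item bank.
# Small banks only; a production assembler would encode this as a 0/1 linear
# program rather than enumerating combinations.
import itertools
import math

def item_information(a, b, theta):
    """Fisher information of a 2PL item with discrimination a, difficulty b."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def assemble(bank, n, thetas, min_per_skill):
    """Brute-force maximin assembly. bank: list of (a, b, skill) tuples."""
    best, best_val = None, -1.0
    for combo in itertools.combinations(range(len(bank)), n):
        # Content constraint: each required skill covered enough times.
        counts = {}
        for i in combo:
            counts[bank[i][2]] = counts.get(bank[i][2], 0) + 1
        if any(counts.get(s, 0) < m for s, m in min_per_skill.items()):
            continue
        # Maximin objective: worst-case test information over the theta grid.
        val = min(sum(item_information(bank[i][0], bank[i][1], t) for i in combo)
                  for t in thetas)
        if val > best_val:
            best, best_val = combo, val
    return best, best_val

bank = [(1.2, -0.5, "A"), (0.8, 0.0, "A"), (1.5, 0.5, "B"),
        (1.0, 1.0, "B"), (0.9, -1.0, "A")]
combo, val = assemble(bank, 3, thetas=[-1.0, 0.0, 1.0],
                      min_per_skill={"A": 1, "B": 1})
print(combo, round(val, 3))
```

Adding the Cognitive Diagnostic component amounts to extending the constraint set, for example requiring each attribute in the Q-matrix to be measured by a minimum number of selected items, in the same way the per-skill counts are checked here.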
Most traditional test assembly methods tend to select the best test items to form a test
under given test specifications, such as content balancing, item difficulties, item formats,
reliabilities, test length, and many more (van der Linden, 1998). For this research, a
component to deal with Cognitive Diagnosis is added to the current existing automated
test assembly method based on IRT. The research described in this dissertation sought
to apply and improve available technologies to automate this task and thereby contribute
to a new area of educational research. By implementing the Cognitive Diagnostic
approach within traditional standardized test assembly methods, testing specialists
may find the algorithm introduced in this dissertation useful for test
development.
Educational Psychology
Intelligent tutoring systems for systems engineering methodologies
The general goal is to provide the technology required to build systems that can provide intelligent tutoring in IDEF (Integrated Computer Aided Manufacturing Definition Method) modeling. The following subject areas are covered: intelligent tutoring systems for systems analysis methodologies; IDEF tutor architecture and components; developing cognitive skills for IDEF modeling; experimental software; and a PC-based prototype.
Counterfactual Monotonic Knowledge Tracing for Assessing Students' Dynamic Mastery of Knowledge Concepts
As the core of the Knowledge Tracing (KT) task, assessing students' dynamic
mastery of knowledge concepts is crucial for both offline teaching and online
educational applications. Since students' mastery of knowledge concepts is
often unlabeled, existing KT methods rely on an implicit paradigm that maps
historical practice to mastery of knowledge concepts, and mastery in turn to
students' responses to practices, to address the challenge of unlabeled
concept mastery. However,
purely predicting student responses without imposing specific constraints on
hidden concept mastery values does not guarantee the accuracy of these
intermediate values as concept mastery values. To address this issue, we
propose a principled approach called Counterfactual Monotonic Knowledge Tracing
(CMKT), which builds on the implicit paradigm described above by using a
counterfactual assumption to constrain the evolution of students' mastery of
knowledge concepts.
Comment: Accepted by CIKM 2023, 10 pages, 5 figures, 4 tables.
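The kind of constraint the abstract describes can be made concrete with a minimal sketch (not CMKT's actual counterfactual formulation): penalise any drop in estimated mastery immediately after a correct response, so the hidden intermediate values behave like mastery rather than arbitrary latent states.

```python
# Minimal monotonicity-style penalty on hidden mastery estimates. This is an
# illustrative stand-in for a KT training constraint, not the CMKT objective.

def monotonicity_penalty(mastery_seq, responses):
    """mastery_seq[t]: estimated mastery in [0, 1] after practice t.
    responses[t]: 1 if practice t was answered correctly, else 0.
    Returns the summed magnitude of mastery drops that follow a correct answer."""
    penalty = 0.0
    for t in range(1, len(mastery_seq)):
        if responses[t - 1] == 1 and mastery_seq[t] < mastery_seq[t - 1]:
            penalty += mastery_seq[t - 1] - mastery_seq[t]
    return penalty

# A drop from 0.7 to 0.5 right after a correct answer is penalised by 0.2;
# the drop after the incorrect third response is not.
print(round(monotonicity_penalty([0.4, 0.7, 0.5, 0.8], [1, 1, 0, 1]), 3))
```

Added to the usual response-prediction loss, a term like this constrains the evolution of the hidden values without requiring mastery labels, which is the gap the implicit paradigm leaves open.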
Development of a personal-computer-based intelligent tutoring system
A large number of Intelligent Tutoring Systems (ITSs) have been built since they were first proposed in the early 1970s. Research conducted on the use of the best of these systems has demonstrated their effectiveness in tutoring in selected domains. A prototype ITS for tutoring students in the use of the CLIPS language, CLIPSIT (CLIPS Intelligent Tutor), was developed. For an ITS to be widely accepted, not only must it be effective, flexible, and very responsive, it must also be capable of functioning on readily available computers. While most ITSs have been developed on powerful workstations, CLIPSIT is designed for use on the IBM PC/XT/AT personal computer family (and their clones). There are many issues to consider when developing an ITS on a personal computer, such as the teaching strategy, user interface, knowledge representation, and program design methodology. Based on experiences in developing CLIPSIT, results on how to address some of these issues are reported and approaches are suggested for maintaining a powerful learning environment while delivering robust performance within the speed and memory constraints of the personal computer.
Developing Student Model for Intelligent Tutoring System
The effectiveness of an e-learning environment depends mainly on how efficiently the tutor presents the
learning content to the candidate based on their learning capability. It is therefore essential for the teaching
community to understand the learning styles of their students and to cater to their needs. One
such system that can cater to the needs of students is the Intelligent Tutoring System (ITS). To overcome
the challenges faced by teachers and to cater to the needs of their students, e-learning experts in recent times
have focused on Intelligent Tutoring Systems (ITSs). There is sufficient literature suggesting that meaningful,
constructive and adaptive feedback is the essential feature of ITSs, and it is such feedback that helps students
achieve strong learning gains. At the same time, in an ITS, it is the student model that plays a main role in
planning the training path, supplying feedback information to the pedagogical module of the system. Added to
it, the student model is the preliminary component, which stores the information to the specific individual
learner. In this study, multiple-choice questions (MCQs) were administered to capture student ability with
respect to three levels of difficulty, namely, low, medium and high in Physics domain to train the neural
network. Further, neural network and psychometric analysis were used for understanding the student
characteristic and determining the student’s classification with respect to their ability. Thus, this study focused
on developing a student model by using the Multiple-Choice Questions (MCQ) for integrating it with an ITS
by applying the neural network and psychometric analysis. The findings of this research showed that even
though the linear regression between real test scores and final exam scores was marginally weak
(37%), the success of student classification to the extent of 80 percent (79.8%) makes this student model
a good fit for clustering students in groups according to their common characteristics. This finding is in line
with that of the findings discussed in the literature review of this study. Further, the outcome of this research is
most likely to generate a new dimension for cluster based student modelling approaches for an online learning
environment that uses aptitude tests (MCQs) for learners using an ITS. The use of psychometric analysis and
neural network for student classification makes this study unique towards the development of a new student
model for ITS in supporting online learning. Therefore, the student model developed in this study seems to be
a good model fit for all those who wish to infuse aptitude test based student modelling approach in an ITS
system for an online learning environment. (Abstract by author)
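The classification step described above can be illustrated with a simplified, hypothetical sketch: assign a student to an ability group from their fraction-correct on low-, medium-, and high-difficulty MCQs. The study trained a neural network alongside psychometric analysis; a nearest-centroid rule is substituted here purely to keep the sketch self-contained, and the centroids are invented.

```python
# Hypothetical nearest-centroid classification of students into ability
# groups from (low, medium, high) difficulty MCQ scores. Stand-in for the
# study's neural-network classifier; all numbers are illustrative.
import math

# Illustrative group centroids over (low, medium, high) difficulty scores.
CENTROIDS = {
    "novice":       (0.6, 0.3, 0.1),
    "intermediate": (0.8, 0.6, 0.3),
    "advanced":     (0.9, 0.8, 0.7),
}

def classify(scores):
    """Return the ability group whose centroid is nearest (Euclidean) to scores."""
    return min(CENTROIDS, key=lambda c: math.dist(scores, CENTROIDS[c]))

print(classify((0.85, 0.75, 0.65)))  # "advanced"
```

Whatever the classifier, the student model's job is the same: map observed MCQ performance to a group whose common characteristics drive the pedagogical module's feedback.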
Diagnostic Classification Modeling of Rubric-Scored Constructed-Response Items
The need for formative assessments has led to the development of a psychometric framework known as diagnostic classification models (DCMs), which are mathematical measurement models designed to estimate the possession or mastery of a designated set of skills or attributes within a chosen construct. Furthermore, much research has gone into the practice of “retrofitting” diagnostic measurement models to existing assessments in order to improve their diagnostic capability. Although retrofitting DCMs to existing assessments can theoretically improve diagnostic potential, it is also prone to challenges including identifying multidimensional traits from largely unidimensional assessments, a lack of assessments that are suitable for the DCM framework, and statistical quality, specifically highly correlated attributes and poor model fit. Another recent trend in assessment has been a move towards creating more authentic constructed-response assessments. For such assessments, rubric-based scoring is often seen as a method of providing reliable scoring and interpretive formative feedback. However, rubric-scored tests are limited in their diagnostic potential in that they are usually used to assign unidimensional numeric scores.
It is the purpose of this thesis to propose general methods for retrofitting DCMs to rubric-scored assessments. Two methods will be proposed and compared: (1) automatic construction of an attribute hierarchy to represent all possible numeric score levels from a rubric-scored assessment and (2) using rubric criterion score level descriptions to imply an attribute hierarchy. This dissertation will describe these methods, discuss the technical and mathematical issues that arise in using them, and apply and compare both methods to a prominent rubric-scored test of critical thinking skills, the Collegiate Learning Assessment+ (CLA+). Finally, the utility of the proposed methods will be compared to a reasonable alternative methodology: the use of polytomous IRT models, including the Graded Response Model (GRM), the Partial Credit Model (PCM), and the Generalized Partial Credit Model (G-PCM), for this type of test score data.
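Method (1) can be sketched with cumulative attribute coding, one plausible reading of a score-level hierarchy rather than the dissertation's exact scheme: a rubric criterion scored 0..K is represented by K linearly ordered attributes, where reaching score level k implies mastery of attributes 1..k.

```python
# Sketch of a linear attribute hierarchy built from rubric score levels.
# Cumulative coding is an illustrative assumption, not the thesis's method.

def score_to_attribute_profile(score, max_score):
    """Map a polytomous rubric score to a binary attribute-mastery vector."""
    return [1 if k < score else 0 for k in range(max_score)]

def allowed_profiles(max_score):
    """A linear hierarchy permits only the max_score + 1 cumulative profiles."""
    return [score_to_attribute_profile(s, max_score) for s in range(max_score + 1)]

print(score_to_attribute_profile(2, 4))  # [1, 1, 0, 0]
print(len(allowed_profiles(4)))          # 5 profiles, vs 2**4 = 16 unconstrained
```

The hierarchy is what makes retrofitting tractable here: instead of estimating all 2^K latent classes, the DCM only needs to distinguish the K + 1 profiles consistent with the score ordering.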