
    Widening the Knowledge Acquisition Bottleneck for Intelligent Tutoring Systems

    Empirical studies have shown that Intelligent Tutoring Systems (ITS) are effective tools for education. However, developing an ITS is a labour-intensive and time-consuming process, and a major share of the development effort is devoted to acquiring the domain knowledge that accounts for the intelligence of the system. The goal of this research is to reduce the knowledge acquisition bottleneck and enable domain experts to build the domain model required for an ITS. In pursuit of this goal, an authoring system capable of producing a domain model with the assistance of a domain expert was developed. Unlike previous authoring systems, this system (named CAS) has the ability to acquire knowledge for non-procedural as well as procedural tasks. CAS was developed to generate the knowledge required for constraint-based tutoring systems, reducing the effort as well as the expertise in knowledge engineering and programming required. Constraint-based modelling is a student modelling technique that somewhat eases the knowledge acquisition bottleneck because of its abstract representation. CAS expects the domain expert to provide an ontology of the domain, example problems and their solutions. It uses machine learning techniques to reason with the information provided by the domain expert in order to generate a domain model. A series of evaluation studies of this research produced promising results. The initial evaluation revealed that composing an ontology of the domain assisted the manual composition of a domain model. The second study showed that CAS was effective in generating constraints for the three vastly different domains of database modelling, data normalisation and fraction addition. The final study demonstrated that CAS was also effective in generating constraints when assisted by novice ITS authors, producing constraint sets that were over 90% complete.
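    The constraint-based modelling the abstract refers to can be illustrated with a minimal sketch. In this technique a constraint pairs a relevance condition with a satisfaction condition, and a constraint is violated when it is relevant to the student's solution but not satisfied by it. The names and the dictionary-based solution encoding below are hypothetical illustrations, not CAS's actual representation:

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Constraint:
        name: str
        relevant: Callable[[dict], bool]    # does this constraint apply to the solution?
        satisfied: Callable[[dict], bool]   # if it applies, is it met?

    def violations(solution: dict, constraints: list[Constraint]) -> list[str]:
        # A violation occurs when a constraint is relevant but unsatisfied.
        return [c.name for c in constraints
                if c.relevant(solution) and not c.satisfied(solution)]

    # Illustrative constraint from the fraction-addition domain: numerators may
    # only be added once the fractions share a common denominator.
    common_denominator = Constraint(
        name="numerators added only over a common denominator",
        relevant=lambda s: s["op"] == "add",
        satisfied=lambda s: s["d1"] == s["d2"],
    )

    # A student adds 1/3 + 1/4 without converting to a common denominator.
    student = {"op": "add", "d1": 3, "d2": 4}
    print(violations(student, [common_denominator]))
    ```

    In a tutor, each constraint would carry feedback messages that are shown when the constraint is violated, which is why generating a complete constraint set is the central authoring burden CAS targets.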

    Automating the analysis of problem-solving activities in learning environments: the co-lab case study

    The analysis of problem-solving activities carried out by students in learning settings involves studying the students' actions and assessing the solutions they have created. This analysis constitutes an ideal starting point for supporting automatic intervention in the student's activity, by means of feedback or other mechanisms that help students build their own knowledge. In this paper, we present a model-driven framework to facilitate the automation of this problem-solving analysis and of providing feedback. This framework includes a set of authoring tools that enable software developers to specify the analysis process and its intervention mechanisms by means of visual languages. The models specified in this way are computed by the framework in order to create technological support that automates the problem-solving analysis. The use of the framework is illustrated through a case study in the field of System Dynamics where problem-solving practices are analysed. The Ministerio de Educación y Ciencia (España) has partially supported this research under Project TIN2011-29542-C02-02. The authors would like to express their gratitude to Ton de Jong, Wouter R. van Joolingen and Sylvia van Borkulo (University of Twente) for supporting this research. The work reported here was done during Rafael Duque's stay at the Department of Instructional Technology of the University of Twente.

    Evaluating and improving adaptive educational systems with learning curves

    Personalised environments such as adaptive educational systems can be evaluated and compared using performance curves. Such summative studies are useful for determining whether new modifications enhance or degrade performance. Performance curves also have the potential to be used in formative studies that can shape adaptive model design at a much finer level of granularity. We describe the use of learning curves for evaluating personalised educational systems and outline some of the potential pitfalls and how they may be overcome. We then describe three studies in which we demonstrate how learning curves can be used to drive changes in the user model. First, we show how computing learning curves for subsets of the domain model can yield insight into the appropriateness of the model's structure. In the second study we use this method to experiment with model granularity. Finally, we use learning curves to analyse a large volume of user data to explore the feasibility of using them as a reliable method for fine-tuning a system's model. The results of these experiments demonstrate the successful use of performance curves in formative studies of adaptive educational systems.
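    The learning curves discussed in this abstract are commonly fitted as a power law, where the error rate E at the n-th practice opportunity follows E(n) = A · n^(−b); a good fit (steadily declining errors) suggests the modelled knowledge component is coherent. As a hedged sketch of that idea (the data and function names below are illustrative, not from the paper), the fit can be done by linear regression in log-log space:

    ```python
    import math

    def fit_power_law(error_rates):
        """Fit E(n) = A * n**(-b) by least squares in log-log space.
        Opportunities are numbered 1..len(error_rates)."""
        xs = [math.log(n) for n in range(1, len(error_rates) + 1)]
        ys = [math.log(e) for e in error_rates]
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        intercept = my - slope * mx
        return math.exp(intercept), -slope   # A (initial error rate), b (learning rate)

    # Synthetic data: the fraction of students erring at each successive
    # opportunity to apply one knowledge component.
    rates = [0.40, 0.26, 0.21, 0.18, 0.16]
    A, b = fit_power_law(rates)
    print(f"E(n) ~ {A:.2f} * n^-{b:.2f}")
    ```

    A flat or noisy curve for some subset of the model is the kind of signal the paper uses formatively: it hints that the corresponding part of the user model is too coarse or wrongly structured and should be split or revised.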