
    Metamodel for personalized adaptation of pedagogical strategies using metacognition in Intelligent Tutoring Systems

    The modeling of metacognitive functions in Intelligent Tutoring Systems (ITS) is a difficult and time-consuming task, particularly when several metacognitive components, such as self-regulation and metamemory, must be integrated. Metacognition has been used in Artificial Intelligence (AI) to improve the performance of complex systems such as ITS. However, designing an ITS with metacognitive capabilities is a complex task because of the number and complexity of the processes involved. Modeling an ITS is in itself difficult and often requires experienced designers and programmers, even when authoring tools are used. In particular, the design of the pedagogical strategies for an ITS is complex and requires the interaction of a number of variables that define it as a dynamic process. This doctoral thesis presents a metamodel for the personalized adaptation of pedagogical strategies integrating metamemory and self-regulation in ITS. The metamodel, called MPPSM (Metamodel of Personalized adaptation of Pedagogical Strategies using Metacognition in intelligent tutoring systems), was synthesized from the analysis of 40 metacognitive models and 45 ITS models in the literature. MPPSM has a conceptual architecture with four levels of modeling according to the Meta-Object Facility (MOF) standard of the Model-Driven Architecture (MDA) methodology. MPPSM gives designers modeling tools in the early stages of the software development process to produce more robust ITS that are able to self-regulate their own reasoning and learning processes. To make the MPPSM metamodel more usable, a concrete syntax composed of a graphic notation called M++ was defined. M++ is a Domain-Specific Visual Language (DSVL) for modeling metacognition in ITS, with approximately 20 tools for modeling metacognitive systems with introspective monitoring and meta-level control.
MPPSM allows the generation of metacognitive models using M++ in a visual editor named MetaThink. MPPSM-based models can specify the metacognitive components required for monitoring and executive control of the reasoning processes that take place in each module of an ITS. These models represent the cycle of reasoning of an ITS about: (i) failures generated in its own reasoning tasks (e.g. self-regulation); and (ii) anomalies in events that occur in its Long-Term Memory (LTM) (e.g. metamemory). An ITS prototype called FUNPRO was developed to validate the performance of the metacognitive mechanism of MPPSM in personalizing pedagogical strategies with respect to the preferences and profiles of real students. FUNPRO uses self-regulation to monitor and control the reasoning processes at the object level, and metamemory to adapt to changes in the constraints of information-retrieval tasks from LTM. The major contributions of this work are: (i) the MOF-based metamodel for the personalization of pedagogical strategies using computational metacognition in ITS; (ii) the M++ DSVL for modeling metacognition in ITS; and (iii) the ITS prototype FUNPRO (FUNdamentos de PROgramación), which aims to provide personalized instruction in the subject of Introduction to Programming. The results of the experimental tests show that: (i) the generated metacognitive models are consistent with the MPPSM metamodel; (ii) users had positive perceptions of the proposed DSVL, providing preliminary information on the quality of the concrete syntax of M++; and (iii) in FUNPRO, the multi-level pedagogical model enhanced with metacognition allows the dynamic adaptation of the pedagogical strategy according to the profile of each student.

    The Multimodal Tutor: Adaptive Feedback from Multimodal Experiences

    This doctoral thesis describes the journey of ideation, prototyping and empirical testing of the Multimodal Tutor, a system designed to provide digital feedback that supports psychomotor skills acquisition using learning and multimodal data capturing. The feedback is given in real time through machine-driven assessment of the learner's task execution. The predictions are tailored by supervised machine learning models trained with human-annotated samples. The main contributions of this thesis are: a literature survey on multimodal data for learning, a conceptual model (the Multimodal Learning Analytics Model), a technological framework (the Multimodal Pipeline), a data annotation tool (the Visual Inspection Tool) and a case study in Cardiopulmonary Resuscitation training (CPR Tutor). The CPR Tutor generates real-time, adaptive feedback using kinematic and myographic data and neural networks.
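The real-time assessment step described above can be sketched as a small feed-forward pass over a window of kinematic features. This is a minimal illustration only: the feature values, network size and weights below are assumptions, not the thesis's trained CPR model.

```python
import math

# Hypothetical sketch: map a window of kinematic features (e.g. compression
# depth and rate in CPR training) to a probability that the execution is
# correct, which would trigger real-time feedback. Weights are illustrative.

def classify_window(features, w_hidden, w_out):
    # One ReLU hidden layer followed by a sigmoid output.
    hidden = [max(0.0, sum(f * w for f, w in zip(features, row)))
              for row in w_hidden]
    logit = sum(h * w for h, w in zip(hidden, w_out))
    return 1.0 / (1.0 + math.exp(-logit))  # probability in (0, 1)

w_hidden = [[0.4, -0.2], [0.1, 0.3]]   # 2 features -> 2 hidden units
w_out = [0.8, -0.5]                    # 2 hidden units -> 1 logit
p = classify_window([0.9, 0.5], w_hidden, w_out)
feedback = "good technique" if p > 0.5 else "adjust technique"
```

In a real pipeline the weights would come from supervised training on human-annotated execution samples, as the abstract describes.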

    Improving Hybrid Brainstorming Outcomes with Scripting and Group Awareness Support

    Previous research has shown that hybrid brainstorming, which combines individual and group methods, generates more ideas than either approach alone. However, the quality of these ideas remains similar across different methods. This study, guided by the dual-pathway to creativity model, tested two computer-supported scaffolds, scripting and group awareness support, for enhancing idea quality in hybrid brainstorming. 94 higher education students, grouped into triads, were tasked with generating ideas in three conditions. The Control condition used standard hybrid brainstorming without extra support. In the Experimental 1 condition, students received scripting support during individual brainstorming; students in the Experimental 2 condition additionally received group awareness support during the group phase. While the quantity of ideas was similar across all conditions, the Experimental 2 condition produced ideas of higher quality, and the Experimental 1 condition also showed improved idea quality in the individual phase compared to the Control condition.

    The student-produced electronic portfolio in craft education

    The authors studied primary school students’ experiences of using an electronic portfolio in their craft education over four years. A stimulated recall interview was applied to collect user experiences, and qualitative content analysis was used to analyse the collected data. The results indicate that the electronic portfolio was experienced as a multipurpose tool to support learning: it makes the learning process visible and in that way helps focus on and improve the quality of learning. © ISLS.

    Cognitive Foundations for Visual Analytics

    In this report, we provide an overview of the scientific and technical literature on information visualization and visual analytics (VA). Topics discussed include an update and overview of the extensive literature search conducted for this study, the nature and purpose of the field, major research thrusts, and scientific foundations. We review methodologies for evaluating and measuring the impact of VA technologies, as well as taxonomies that have been proposed for various purposes to support the VA community. A cognitive science perspective underlies each of these discussions.

    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, although their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting the reusability of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.

    The Big Five: Addressing Recurrent Multimodal Learning Data Challenges

    The analysis of multimodal data in learning is a growing field of research, which has led to the development of different analytics solutions. However, there is no standardised approach to handling multimodal data. In this paper, we describe and outline solutions for five recurrent challenges in the analysis of multimodal data: data collection, storage, annotation, processing and exploitation. For each of these challenges, we envision possible solutions. Prototypes for some of the proposed solutions will be discussed during the Multimodal Challenge of the fourth Learning Analytics & Knowledge Hackathon, a two-day hands-on workshop in which the authors will open up the prototypes for trials, validation and feedback.
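As a rough illustration, the five challenges named above (collection, storing, annotation, processing, exploitation) can be lined up as stages of a minimal pipeline. The function names and toy heart-rate data are hypothetical, not the authors' Multimodal Pipeline.

```python
# Hypothetical sketch of the five recurrent multimodal-data steps as a
# minimal pipeline; the sensor stream is a two-sample heart-rate stub.

def collect():
    # Collection: read a (stub) synchronized sensor stream.
    return [{"t": 0.0, "hr": 72}, {"t": 1.0, "hr": 75}]

def store(samples, db):
    # Storing: persist samples (here, an in-memory list).
    db.extend(samples)

def annotate(db):
    # Annotation: attach a human- or rule-derived label to each sample.
    for s in db:
        s["label"] = "rest" if s["hr"] < 74 else "active"

def process(db):
    # Processing: extract a feature, e.g. mean heart rate.
    return sum(s["hr"] for s in db) / len(db)

def exploit(feature):
    # Exploitation: turn the feature into feedback or a report.
    return f"mean HR {feature:.1f} bpm"

db = []
store(collect(), db)
annotate(db)
report = exploit(process(db))  # "mean HR 73.5 bpm"
```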

    Multimodal Challenge: Analytics Beyond User-computer Interaction Data

    This contribution describes one of the challenges explored in the Fourth LAK Hackathon. This challenge aims to shift the focus from learning situations that can be easily traced through user-computer interaction data and to concentrate more on user-world interaction events, typical of co-located and practice-based learning experiences. This mission, pursued by the multimodal learning analytics (MMLA) community, seeks to bridge the gap between digital and physical learning spaces. The “multimodal” approach consists of combining learners’ motoric actions with physiological responses and data about the learning contexts. These data can be collected through multiple wearable sensors and Internet of Things (IoT) devices. This Hackathon table will confront three main challenges arising from the analysis and valorisation of multimodal datasets: 1) data collection and storing; 2) data annotation; 3) data processing and exploitation. Some of the research questions that will be considered in this Hackathon challenge are the following: How can the raw sensor data streams be processed and relevant features extracted? Which data mining and machine learning techniques can be applied? How can two action recordings be compared? How can sensor data be combined with the Experience API (xAPI)? What are meaningful visualisations for these data?
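One of the questions raised above, combining sensor data with xAPI, can be sketched by wrapping a sensor sample in an xAPI statement's result extensions. The verb and extension IRIs below are illustrative placeholders, not a published xAPI profile.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: embed a wearable-sensor sample in an xAPI statement.
# The example.org IRIs are placeholders, not a standardised vocabulary.
def sensor_to_xapi(actor_email, activity_id, sample):
    return {
        "actor": {"mbox": f"mailto:{actor_email}"},
        "verb": {"id": "http://example.org/verbs/performed"},
        "object": {"id": activity_id, "objectType": "Activity"},
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "result": {
            "extensions": {
                "http://example.org/extensions/accelerometer": sample
            }
        },
    }

stmt = sensor_to_xapi("learner@example.org",
                      "http://example.org/activities/cpr-session-1",
                      {"x": 0.01, "y": -0.98, "z": 0.12})
payload = json.dumps(stmt)  # ready to POST to an LRS statements endpoint
```

The statement keeps the actor/verb/object triple that Learning Record Stores expect, while the raw sensor reading travels in `result.extensions`.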