    Explicit Feedback Within Game-based Training: Examining The Influence Of Source Modality Effects On Interaction

    This research aims to enhance Simulation-Based Training (SBT) applications to support training events in the absence of live instruction. The overarching purpose is to explore available tools for integrating intelligent tutoring communications in game-based learning platforms and to examine theory-based techniques for delivering explicit feedback in such environments. The primary tool influencing the design of this research was the Generalized Intelligent Framework for Tutoring (GIFT), a modular, domain-independent architecture that provides the tools and methods to author, deliver, and evaluate intelligent tutoring technologies within any training platform. Informed by research on Social Cognitive Theory and Cognitive Load Theory, the resulting experiment tested varying approaches for utilizing an Embodied Pedagogical Agent (EPA) as a tutor during interaction in a game-based environment. Conditions were authored to assess the tradeoffs between embedding an EPA directly in a game, embedding an EPA in GIFT’s browser-based Tutor-User Interface (TUI), or using audio prompts alone with no social grounding. The resulting data support the use of an EPA embedded in GIFT’s TUI to provide explicit feedback during a game-based learning event. Analyses revealed conditions with an EPA situated in the TUI to be as effective as embedding the agent directly in the game environment. This inference is based on evidence showing reliable differences across conditions on the metrics of performance and self-reported mental demand and feedback usefulness items. This research provides source modality tradeoffs linked to tactics for relaying training-relevant explicit information to a user based on real-time performance in a game.

    Modeling Learner Mood In Realtime Through Biosensors For Intelligent Tutoring Improvements

    Computer-based instructors, just like their human counterparts, should monitor the emotional and cognitive states of their students in order to adapt instructional technique. Doing so requires a model of student state to be available at run time, but this has historically been difficult. Because people differ from one another, generalized models have proven difficult to validate. Because a person’s cognitive and affective states vary with time of day and season, individualized models face difficulties of their own. The simultaneous creation and execution of an individualized model, in real time, represents the remaining option for modeling such cognitive and affective states. This dissertation presents and evaluates four different techniques for creating cognitive and affective models online and in real time for each individual user, as alternatives to generalized models. Each of these techniques involves making predictions and modifications to the model in real time, addressing the real-time data-stream problems of infinite length, detection of new concepts, and concept change over time. Additionally, with the knowledge that a user is physically present, this work investigates the contribution that an occasional direct user query can make to the overall quality of such models. The research described in this dissertation finds that the creation of a reasonable-quality affective model is possible with an infinitesimal amount of time and without “ground truth” knowledge of the user, which is shown across three different emotional states. Creation of a cognitive model in the same fashion, however, was not possible via direct AI modeling, even with all of the “ground truth” information available, which is shown across four different cognitive states.
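    The stream-modeling problems the abstract names (unbounded length, new-concept detection, concept change over time, occasional ground-truth queries) can be illustrated with a minimal sketch. This is not the dissertation's method; the class name, nearest-center scheme, and decay factor are all illustrative assumptions.

    ```python
    # Minimal sketch of an online, per-user affect model built in real time
    # from a labeled biosensor stream. Names and parameters are illustrative,
    # not taken from the dissertation.

    class OnlineAffectModel:
        """One exponentially decayed running center per affective state,
        updated only when a label is available (e.g., from an occasional
        direct user query)."""

        def __init__(self, decay=0.9):
            self.decay = decay   # forgetting factor: lets the model track drift
            self.centers = {}    # state label -> running mean of the signal

        def update(self, signal, label):
            """Fold one labeled sensor reading into the model."""
            if label not in self.centers:
                self.centers[label] = signal  # a new concept appears in the stream
            else:
                # Exponential decay tracks slow change (e.g., over time of day)
                # with O(1) memory, so the infinite-length stream is never stored.
                old = self.centers[label]
                self.centers[label] = self.decay * old + (1 - self.decay) * signal

        def predict(self, signal):
            """Nearest-center classification of an unlabeled reading."""
            if not self.centers:
                return None
            return min(self.centers, key=lambda s: abs(self.centers[s] - signal))
    ```

    A usage pattern under these assumptions: query the user for a label occasionally, call `update` with that label, and call `predict` on every unlabeled reading in between.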