Affective learning: improving engagement and enhancing learning with affect-aware feedback
This paper describes the design and ecologically valid evaluation of a learner model that lies at the heart of an intelligent learning environment called iTalk2Learn. A core objective of the learner model is to adapt formative feedback based on students' affective states. Types of adaptation include what type of formative feedback should be provided and how it should be presented. Two Bayesian networks trained with data gathered in a series of Wizard-of-Oz studies are used for the adaptation process. This paper reports results from a quasi-experimental evaluation, in authentic classroom settings, which compared a version of iTalk2Learn that adapted feedback based on students' affective states as they were talking aloud with the system (the affect condition) with one that provided feedback based only on the students' performance (the non-affect condition). Our results suggest that affect-aware support contributes to reducing boredom and off-task behavior, and may have an effect on learning. We discuss the internal and ecological validity of the study, in light of pedagogical considerations that informed the design of the two conditions. Overall, the results of the study have implications both for the design of educational technology and for classroom approaches to teaching, because they highlight the important role that affect-aware modelling plays in the adaptive delivery of formative feedback to support learning.
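The affect-conditioned feedback adaptation described above can be sketched in miniature. The sketch below is an illustrative single-variable Bayesian update with made-up states, probabilities, and feedback types; the actual iTalk2Learn learner model uses two trained Bayesian networks and richer evidence.

```python
# Illustrative sketch (NOT the iTalk2Learn model): infer an affective state
# from one speech cue via Bayes' rule, then pick a feedback type.
# All states, probabilities, and feedback labels below are assumptions.

# Prior belief over the learner's affective state.
PRIOR = {"engaged": 0.5, "bored": 0.3, "confused": 0.2}

# P(off-task utterance heard | affect): likelihood of one speech cue.
LIKELIHOOD_OFF_TASK = {"engaged": 0.1, "bored": 0.7, "confused": 0.4}

def posterior_affect(off_task_heard: bool) -> dict:
    """Bayes update of the affect distribution given one speech cue."""
    post = {}
    for state, prior in PRIOR.items():
        like = LIKELIHOOD_OFF_TASK[state]
        post[state] = prior * (like if off_task_heard else 1.0 - like)
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

def choose_feedback(affect: dict) -> str:
    """Map the most probable affective state to a feedback type."""
    top = max(affect, key=affect.get)
    return {"engaged": "reflective prompt",
            "bored": "new challenge",
            "confused": "worked example"}[top]

post = posterior_affect(off_task_heard=True)
print(choose_feedback(post))  # prints "new challenge": boredom is most probable
```

The same posterior could equally drive the *presentation* of feedback (e.g. interruptive vs. on-demand), which is the second kind of adaptation the abstract mentions.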
A Talk on the Wild Side: The Direct and Indirect Impact of Speech Recognition on Learning Gains
Research in the learning sciences and mathematics education has suggested that "thinking aloud" (verbalization) can be important for learning. In a technology-mediated learning environment, speech might also help to promote learning by enabling the system to infer the students' cognitive and affective state so that they can be provided a sequence of tasks and formative feedback, both of which are adapted to their needs. For these and associated reasons, we developed the iTalk2Learn platform that includes speech production and speech recognition for children learning about fractions. We investigated the impact of iTalk2Learn's speech functionality in classrooms in the UK and Germany, with our results indicating that a speech-enabled learning environment has the potential to enhance student learning gains and engagement, both directly and indirectly.
On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces
Multimodal systems have attracted increased attention in recent years, which has made possible important
improvements in the technologies for recognition, processing, and generation of multimodal information.
However, many issues related to multimodality remain unclear, for example the
principles that would allow such interfaces to resemble human-human multimodal communication. This chapter
focuses on some of the most important challenges that researchers have recently envisioned for future
multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable
and affective multimodal interfaces.
Psychophysiology in games
Psychophysiology is the study of the relationship between psychology
and its physiological manifestations. That relationship is of particular importance
for both game design and ultimately gameplaying. Players' psychophysiology offers
a gateway towards a better understanding of playing behavior and experience.
That knowledge can, in turn, be beneficial for the player as it allows designers to
make better games for them; either explicitly by altering the game during play or
implicitly during the game design process. This chapter argues for the importance
of physiology for the investigation of player affect in games, reviews the current
state of the art in sensor technology and outlines the key phases for the application
of psychophysiology in games. The work is supported, in part, by the EU-funded FP7 ICT iLearnRW project
(project no: 318803). Peer-reviewed.
Towards the Use of Dialog Systems to Facilitate Inclusive Education
Continuous advances in the development of information technologies have made it possible to access
learning contents from anywhere, at any time, and almost instantaneously. However,
accessibility is not always a main objective in the design of educational applications, in particular
design that facilitates their adoption by people with disabilities. Different technologies have recently emerged to foster the
accessibility of computers and new mobile devices, favoring a more natural communication between
the student and educational systems. This chapter describes innovative uses of multimodal
dialog systems in education, with special emphasis on the advantages that they provide for creating
inclusive applications and learning activities.
Training an adaptive dialogue policy for interactive learning of visually grounded word meanings
We present a multi-modal dialogue system for interactive learning of
perceptually grounded word meanings from a human tutor. The system integrates
an incremental, semantic parsing/generation framework - Dynamic Syntax and Type
Theory with Records (DS-TTR) - with a set of visual classifiers that are
learned throughout the interaction and which ground the meaning representations
that it produces. We use this system in interaction with a simulated human
tutor to study the effects of different dialogue policies and capabilities on
the accuracy of learned meanings, learning rates, and efforts/costs to the
tutor. We show that the overall performance of the learning agent is affected
by (1) who takes initiative in the dialogues; (2) the ability to express/use
their confidence level about visual attributes; and (3) the ability to process
elliptical and incrementally constructed dialogue turns. Ultimately, we train
an adaptive dialogue policy which optimises the trade-off between classifier
accuracy and tutoring costs. Comment: 11 pages, SIGDIAL 2016 Conference.
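The accuracy-versus-tutoring-cost trade-off this abstract optimises can be illustrated with a much simpler stand-in for the full DS-TTR system: a confidence-threshold policy that queries a simulated tutor only when the visual classifier is unsure. The cost constant, the threshold grid, and the simulated classifier below are all assumptions for illustration, not the paper's trained policy.

```python
# Hedged sketch of the accuracy-vs-cost trade-off (not the DS-TTR system):
# the agent asks the tutor only when classifier confidence is below a
# threshold, and we pick the threshold with the best utility.
import random

COST_PER_QUERY = 0.3  # assumed utility cost of one tutoring turn

def simulate(threshold, n=20000, seed=0):
    """Return (accuracy, mean tutoring cost) for one threshold setting."""
    rng = random.Random(seed)
    correct = queries = 0
    for _ in range(n):
        conf = rng.random()                 # classifier confidence in [0, 1)
        if conf < threshold:                # unsure: ask the tutor (costly, exact)
            queries += 1
            correct += 1
        else:                               # confident: trust the classifier,
            correct += rng.random() < conf  # which is right with probability conf
    return correct / n, queries * COST_PER_QUERY / n

def utility(threshold):
    """Trade off label accuracy against effort imposed on the tutor."""
    acc, cost = simulate(threshold)
    return acc - cost

best = max([0.0, 0.25, 0.5, 0.75, 1.0], key=utility)
print(f"best threshold on this grid: {best}")
```

Threshold 0.0 never bothers the tutor but inherits the raw classifier's accuracy; threshold 1.0 is perfectly accurate but pays the query cost on every turn. The paper's learned policy additionally conditions on dialogue context (initiative, ellipsis handling), which this sketch omits.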
An evaluation of an adaptive learning system based on multimodal affect recognition for learners with intellectual disabilities
Artificial intelligence tools for education (AIEd) have been used to automate the provision of learning support to mainstream learners. One of the most innovative approaches in this field is the use of data and machine learning for the detection of a student's affective state, to move them out of negative states that inhibit learning, into positive states such as engagement. In spite of their obvious potential to provide the personalisation that would give extra support for learners with intellectual disabilities, little work on AIEd systems that utilise affect recognition currently addresses this group. Our system used multimodal sensor data and machine learning to first identify three affective states linked to learning (engagement, frustration, boredom) and second determine the presentation of learning content so that the learner is maintained in an optimal affective state and rate of learning is maximised. To evaluate this adaptive learning system, 67 participants aged between 6 and 18 years acting as their own control took part in a series of sessions using the system. Sessions alternated between using the system with both affect detection and learning achievement to drive the selection of learning content (intervention) and using learning achievement alone (control) to drive the selection of learning content. Lack of boredom was the state with the strongest link to achievement, with both frustration and engagement positively related to achievement. There was significantly more engagement and less boredom in intervention than control sessions, but no significant difference in achievement. These results suggest that engagement does increase when activities are tailored to the personal needs and emotional state of the learner and that the system was promoting affective states that in turn promote learning. However, longer exposure is necessary to determine the effect on learning.
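As an illustration of affect-driven content selection of the kind this abstract describes (a minimal sketch, not the evaluated system), a simple rule might let detected frustration or boredom override achievement-only difficulty adaptation. The difficulty scale, thresholds, and state names below are illustrative assumptions, except that the three affective states match those the study tracked.

```python
# Illustrative sketch: choose the next difficulty level from both
# achievement and a detected affective state. The 1-5 difficulty scale
# and the score thresholds are assumptions, not the evaluated system's.

def next_difficulty(current: int, score: float, affect: str) -> int:
    """Pick the next difficulty level (1-5) from performance plus affect.

    `affect` is one of the three states the study tracked:
    'engagement', 'frustration', 'boredom'.
    """
    if affect == "frustration":        # ease off regardless of score
        return max(1, current - 1)
    if affect == "boredom":            # raise the challenge
        return min(5, current + 1)
    # Engaged: fall back to achievement-only adaptation (the control rule).
    if score >= 0.8:
        return min(5, current + 1)
    if score < 0.4:
        return max(1, current - 1)
    return current

print(next_difficulty(3, 0.9, "frustration"))  # prints 2: affect overrides score
```

The intervention condition corresponds to using all three branches; the control condition corresponds to the achievement-only branch alone.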
Affect and believability in game characters: a review of the use of affective computing in games
Virtual agents are important in many digital environments. Designing a character that highly engages users in terms of interaction is an intricate task constrained by many requirements. One aspect that has gained more attention recently is the affective dimension of the agent. Several studies have addressed the possibility of developing an affect-aware system for a better user experience. Particularly in games, including emotional and social features in NPCs adds depth to the characters, enriches interaction possibilities, and, combined with a basic level of competence, creates a more appealing game. Design requirements for emotionally intelligent NPCs differ from those for general autonomous agents, with the main goal being a stronger player-agent relationship as opposed to problem solving and goal assessment. Nevertheless, deploying an affective module into NPCs adds to the complexity of the architecture and its constraints. In addition, using such composite NPCs in games seems beyond current technology, despite some brave attempts. However, a MARPO-type modular architecture would seem a useful starting point for adding emotions.