
    Interventions to Regulate Confusion during Learning

    Confusion provides opportunities to learn at deeper levels. However, learners must put forth the necessary effort to resolve their confusion in order to convert this opportunity into actual learning gains. Learning occurs when learners engage in cognitive activities beneficial to learning (e.g., reflection, deliberation, problem solving) during the process of confusion resolution. Unfortunately, learners are not always able to resolve their confusion on their own; the inability to do so can be due to a lack of knowledge, motivation, or skills. The present dissertation explored methods to aid confusion resolution and ultimately promote learning through a multi-pronged approach. First, a survey revealed that learners prefer more information and feedback when confused, and that they prefer different interventions for confusion than for boredom and frustration. Second, expert human tutors were found to handle learner confusion most frequently by providing direct instruction, and to respond differently to confusion than to anxiety, frustration, and happiness. Finally, two experiments tested the effectiveness of pedagogical and motivational confusion regulation interventions. Both types of interventions were investigated within a learning environment that experimentally induced confusion via the presentation of contradictory information by two animated agents (a tutor agent and a peer student agent). Across both studies, results showed that learner effort during the confusion regulation task impacted confusion resolution, and that learning occurred when the intervention gave learners the opportunity to stop, think, and deliberate about the concept being discussed. Implications for building more effective affect-sensitive learning environments are discussed.

    Predicting the confusion level of text excerpts with syntactic, lexical and n-gram features

    Distance learning, offline presentations (presentations that are pre-recorded rather than delivered live), and other activities whose main goal is to convey information are becoming increasingly relevant with digital media such as Virtual Reality (VR) and Massive Open Online Courses (MOOCs). While MOOCs are a well-established reality in the learning environment, VR is also being used to promote learning in virtual rooms, both in academia and in industry. These methods are often based on written scripts that guide the learner through the content, making the scripts critical components of these tools; with such an important role, it is important to ensure their effectiveness. Confusion is a non-basic emotion associated with learning: it often arises from cognitive disequilibrium, caused either by the content itself or by the way it is conveyed in terms of its syntactic and lexical features. We propose a supervised model that predicts the likelihood that an input text excerpt will cause confusion in the learner. To achieve this, we performed syntactic and lexical analyses over 300 text excerpts and collected five confusion-level ratings (0–6) per excerpt from 51 annotators, using the mean rating of each excerpt as its label. The excerpts that compose the dataset were collected from random presentation transcripts across various fields of knowledge. The learning model was trained on this data, and the results are included in the body of the paper. This model supports the design of clearer scripts for offline presentations and similar materials, and we expect it to improve the effectiveness of these speeches. While the model is applied to this specific case, we hope to pave the way toward generalizing the approach to other contexts where clarity of text is critical, such as the scripts of MOOCs or academic abstracts.
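The pipeline this abstract describes (lexical/syntactic features in, a 0–6 confusion score out) can be sketched in a few lines. The feature set and weights below are purely illustrative assumptions, not the paper's actual model or trained coefficients:

```python
import re

def lexical_features(text):
    """Extract simple lexical proxies of the kind the paper describes
    (illustrative choices, not the authors' exact feature set)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

def predict_confusion(features, weights, bias=0.0):
    """Linear scorer mapping features to the 0-6 annotation scale;
    in the paper the weights would come from supervised training."""
    score = bias + sum(weights[k] * v for k, v in features.items())
    return min(max(score, 0.0), 6.0)  # clamp to the rating range

# hypothetical weights, for illustration only
weights = {"avg_sentence_len": 0.12, "avg_word_len": 0.3, "type_token_ratio": 1.0}
feats = lexical_features("Convoluted sentences with rare terminology confuse readers. Short ones do not.")
print(round(predict_confusion(feats, weights), 2))
```

A real implementation would add n-gram and parse-based features and fit the weights by regression against the mean annotator ratings.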

    University of Memphis commencement, 2014 August. Program

    Program for the Summer convocation of the 102nd commencement of the University of Memphis at Memphis, Tennessee, held at the FedEx Forum on August 10, 2014.

    Automatic Sensor-free Affect Detection: A Systematic Literature Review

    Emotions and other affective states play a pivotal role in cognition and, consequently, the learning process. It is well-established that computer-based learning environments (CBLEs) that can detect and adapt to students' affective states can enhance learning outcomes. However, practical constraints often pose challenges to the deployment of sensor-based affect detection in CBLEs, particularly for large-scale or long-term applications. As a result, sensor-free affect detection, which relies exclusively on logs of students' interactions with CBLEs, emerges as a compelling alternative. This paper provides a comprehensive literature review of sensor-free affect detection. It delves into the most frequently identified affective states, the methodologies and techniques employed for detector development, the defining attributes of CBLEs and data samples, as well as key research trends. Despite the field's evident maturity, demonstrated by the consistent performance of the models and the application of advanced machine learning techniques, there is ample scope for future research. Potential areas for further exploration include enhancing the performance of sensor-free detection models, amassing more samples of underrepresented emotions, and identifying additional emotions. There is also a need to refine model development practices and methods. This could involve comparing the accuracy of various data collection techniques, determining the optimal granularity of duration, establishing a shared database of action logs and emotion labels, and making the source code of these models publicly accessible. Future research should also prioritize the integration of models into CBLEs for real-time detection, the provision of meaningful interventions based on detected emotions, and a deeper understanding of the impact of emotions on learning.
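Sensor-free detection of the kind surveyed here starts by turning raw interaction logs into features. The sketch below is a minimal illustration under assumed conventions (event tuples of `(timestamp_seconds, action)`, action names like `"hint"`); the threshold rule stands in for a trained classifier and is not from any reviewed model:

```python
from statistics import mean

def log_features(events):
    """Derive interaction features from a CBLE action log.
    Each event is (timestamp_seconds, action_string); the feature
    choices are illustrative of sensor-free detection in general."""
    times = [t for t, _ in events]
    pauses = [b - a for a, b in zip(times, times[1:])]
    actions = [a for _, a in events]
    return {
        "mean_pause": mean(pauses) if pauses else 0.0,
        "hint_rate": actions.count("hint") / len(actions),
        "error_rate": actions.count("wrong_answer") / len(actions),
    }

def likely_frustrated(f, pause_thresh=20.0, error_thresh=0.4):
    """Toy rule standing in for a trained classifier: long pauses
    combined with a high error rate suggest frustration."""
    return f["mean_pause"] > pause_thresh and f["error_rate"] > error_thresh

log = [(0, "open_problem"), (30, "wrong_answer"), (65, "wrong_answer"),
       (90, "hint"), (130, "wrong_answer")]
print(likely_frustrated(log_features(log)))
```

Published detectors replace the rule with models trained against human affect labels collected in the field.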

    E3: Emotions, Engagement, and Educational Digital Games

    The use of educational digital games as a method of instruction for science, technology, engineering, and mathematics has increased in the past decade. While these games offer successfully implemented, interactive, and fun interfaces, they are not designed to respond to or remedy students’ negative affect towards the game dynamics or the educational content. Therefore, this exploratory study investigated frequent patterns in students’ emotional and behavioral responses to educational digital games. To unveil the sequential occurrence of these affective states, students were assigned to play the game for nine class sessions. During these sessions, their affective and behavioral responses were recorded to uncover possible underlying patterns of affect (particularly confusion, frustration, and boredom) and behavior (disengagement). In addition, these affect and behavior frequency pattern data were combined with students’ gameplay data in order to identify patterns of emotions that led to better performance in the game. The results provide information on possible affect and behavior patterns that could be used in further research on affect and behavior detection in such open-ended digital game environments. In particular, the findings show that students experience a considerable amount of confusion, frustration, and boredom. Another finding highlights the need for remediation via embedded help, as the students often resorted to peer help during their gameplay. However, possibly because of the low quality of the help received, students seemed to become frustrated or disengaged with the environment. Finally, the findings suggest the importance of the decay rate of confusion; students’ gameplay performance was associated with the length of time students remained confused or frustrated. Overall, these findings show that there are interesting patterns related to students who experience relatively negative emotions during their gameplay.
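The sequential-pattern analysis described above boils down to counting how often one affective state follows another in the observation stream. A minimal sketch, with illustrative state labels rather than the study's actual coding scheme:

```python
from collections import Counter

def transition_counts(states):
    """Count adjacent affect-state transitions in an observation
    sequence, i.e. the simplest form of the sequential patterns
    the study mines (bigrams of consecutive observations)."""
    return Counter(zip(states, states[1:]))

# hypothetical observation sequence for one student
obs = ["confusion", "confusion", "frustration", "boredom",
       "confusion", "frustration", "disengagement"]
counts = transition_counts(obs)
print(counts[("confusion", "frustration")])
```

Transition counts like these, aggregated across students and normalized, are what allow statements such as "confusion that persists tends to decay into frustration or disengagement."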

    Virtual environments promoting interaction

    Virtual reality (VR) has been widely researched in academia and is now breaking into industry. Ordinary companies do not have access to this technology as a collaboration tool, because such solutions usually require specific devices that are not readily available to the common office user. Other collaboration platforms are based on video, speech, and text, but VR allows users to share the same 3D space. In this 3D space, functionality or information can be added that would not be possible in a real-world environment, something intrinsic to VR. This dissertation produced a 3D framework that promotes nonverbal communication, which plays a fundamental role in human interaction and is mostly based on emotion. In education research, confusion is known to influence learning gains when it is properly managed. We designed a study to evaluate how lexical, syntactic, and n-gram features influence perceived confusion, and found results (not statistically significant) suggesting that it is possible to build a machine learning model that predicts the level of confusion from these features. This model was used to manipulate the script of a given presentation, and user feedback shows a trend: manipulating these features to theoretically lower the confusion level of the text not only reduces reported confusion but also increases the reported sense of presence. Another contribution of this dissertation stems from the intrinsic features of a 3D environment, where one can perform actions that are not possible in the real world. We designed an automatic adaptive lighting system that reacts to the user's perceived engagement. This hypothesis was partially rejected: the results run counter to what we hypothesized, but they are not statistically significant. Three lines of research may stem from this dissertation. First, more complex features, such as syntax trees, could be used to train the machine learning model. Second, in an Intelligent Tutoring System, this model could adjust the avatar's speech in real time if fed by a real-time confusion detector. In a social scenario, the set of basic emotions is well suited and can enrich it; facial emotion recognition could extend this effect to the avatar's body to reinforce social synchrony and increase the sense of presence. Finally, this dissertation was premised on ubiquitous devices, but given the rapid evolution of technology, new devices are likely to appear in offices, opening possibilities for other modalities.