
    Robust Modeling of Epistemic Mental States

    This work identifies and advances research challenges in analyzing facial features and their temporal dynamics in relation to epistemic mental states in dyadic conversations. The epistemic states considered are Agreement, Concentration, Thoughtful, Certain, and Interest. In this paper, we perform a number of statistical analyses and simulations to identify the relationship between facial features and epistemic states. Non-linear relations are found to be more prevalent, while temporal features derived from the original facial features show a strong correlation with intensity changes. We then propose a novel prediction framework that takes facial features and their non-linear relation scores as input and predicts the different epistemic states in videos. Prediction of epistemic states is further boosted when the classification of emotion-change regions (rising, falling, or steady-state) is incorporated with the temporal features. The proposed predictive models predict the epistemic states with significantly improved accuracy: the correlation coefficient (CoERR) is 0.827 for Agreement, 0.901 for Concentration, 0.794 for Thoughtful, 0.854 for Certain, and 0.913 for Interest.
    Comment: Accepted for publication in Multimedia Tools and Applications, Special Issue: Socio-Affective Technologies
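    The temporal-feature and emotion-change-region ideas in this abstract can be illustrated with a minimal sketch. The windowed smoothing, the threshold `eps`, and the toy intensity track below are illustrative assumptions, not the authors' actual feature pipeline:

    ```python
    import numpy as np

    def temporal_features(signal, window=3):
        """Smoothed frame-to-frame deltas of a facial-feature track."""
        deltas = np.diff(signal, prepend=signal[0])
        kernel = np.ones(window) / window      # short moving average to damp frame noise
        return np.convolve(deltas, kernel, mode="same")

    def change_regions(signal, eps=0.05):
        """Label each frame as rising (+1), falling (-1), or steady (0)."""
        d = np.diff(signal, prepend=signal[0])
        return np.where(d > eps, 1, np.where(d < -eps, -1, 0))

    # Toy intensity track: ramp up, plateau, ramp down.
    track = np.concatenate([np.linspace(0, 1, 10), np.full(10, 1.0), np.linspace(1, 0, 10)])
    regions = change_regions(track)
    ```

    The region labels produced this way could then be concatenated with the raw features as extra classifier inputs, which is the kind of combination the abstract reports as boosting prediction.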

    Emotion Recognition for Intelligent Tutoring

    Abstract. Individual teaching has been considered the most successful educational form since ancient times. This form continues to exist today within intelligent systems intended to provide adapted tutoring for each student. Although recent research has shown that emotions can affect students' learning, the adaptation skills of tutoring systems are still imperfect due to weak emotional intelligence. To support ongoing research on improving tutoring adaptation based on both a student's knowledge and emotional state, the paper presents an analysis of emotion recognition methods used in recent developments. The study reveals that a sensor-lite approach can serve as a solution to problems related to emotion identification accuracy. To provide ground-truth data for emotional state, we have explored and implemented a self-assessment method.
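    A self-assessment method like the one mentioned typically collects rated emotional state alongside a timestamp so it can be aligned with interaction logs. The schema below is a hypothetical sketch (the paper does not specify its instrument; the 1–9 scale follows the common SAM convention):

    ```python
    import time
    from dataclasses import dataclass, field

    @dataclass
    class SelfReport:
        """One self-assessment sample used as ground truth (hypothetical schema)."""
        valence: int   # 1 (negative) .. 9 (positive)
        arousal: int   # 1 (calm) .. 9 (excited)
        timestamp: float = field(default_factory=time.time)

    def validate(report: SelfReport) -> bool:
        """Reject out-of-range answers before storing them as labels."""
        return 1 <= report.valence <= 9 and 1 <= report.arousal <= 9

    log = [SelfReport(valence=7, arousal=3)]
    ```

    Storing the timestamp with each report is what later allows the self-assessed labels to be matched against the tutoring system's sensor-lite observations.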

    iFocus: A Framework for Non-intrusive Assessment of Student Attention Level in Classrooms

    The process of learning is not merely determined by what the instructor teaches, but also by how the student receives that information. An attentive student will naturally be more open to obtaining knowledge than a bored or frustrated student. In recent years, tools such as skin temperature measurements and body posture calculations have been developed for the purpose of determining a student's affect, or emotional state of mind. However, measuring eye-gaze data is particularly noteworthy in that it can collect measurements non-intrusively, while also being relatively simple to set up and use. This paper details how data obtained from such an eye-tracker can be used to predict a student's attention as a measure of affect over the course of a class. From this research, an accuracy of 77% was achieved using the Extreme Gradient Boosting technique of machine learning. The outcome indicates that eye-gaze can indeed be used as a basis for constructing a predictive model.
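    The gaze-to-attention pipeline can be sketched as follows. The paper names Extreme Gradient Boosting (XGBoost); scikit-learn's `GradientBoostingClassifier` is used here as a readily available stand-in, and the gaze features and synthetic labels are assumptions for illustration only:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 400
    # Hypothetical per-window gaze features: fixation duration (s),
    # saccade rate (1/s), and fraction of gaze time on the lecture material.
    X = rng.uniform([0.1, 0.5, 0.0], [1.5, 5.0, 1.0], size=(n, 3))
    # Synthetic label: attentive when fixations are long and gaze stays on material.
    y = ((X[:, 0] > 0.6) & (X[:, 2] > 0.5)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    ```

    On real eye-tracker data the features would be aggregated over class-time windows, and accuracy in the reported range (77%) would be estimated with held-out students rather than a random split.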

    Affective Computational Model to Extract Natural Affective States of Students with Asperger Syndrome (AS) in Computer-based Learning Environment

    This study was inspired by the central role of emotion in the learning process and its impact on students' performance, as well as by the lack of affective computing models that detect and infer affective-cognitive states in real time for students with and without Asperger Syndrome (AS). This model overcomes gaps in other models designed for people with autism, which required sensors or physiological instrumentation to collect data. The model uses a webcam to capture students' affective-cognitive states of confidence, uncertainty, engagement, anxiety, and boredom. These states have a dominant effect on the learning process. The model was trained and tested on a natural, spontaneous affective dataset for students with and without AS, collected for this purpose. The dataset was collected in an uncontrolled environment and included variations in culture, ethnicity, gender, facial and hair style, head movement, talking, glasses, illumination changes, and background. The model structure used deep learning (DL) techniques, namely a convolutional neural network (CNN) and long short-term memory (LSTM). DL is the state-of-the-art approach used to reduce data dimensionality and capture complex non-linear features from simpler representations. The affective model provides reliable results with an accuracy of 90.06%. It is the first model to detect affective states for adult students with AS without physiological or wearable instruments. For the first time, occlusions such as a hand over the face or head were treated as an important indicator of affective states like boredom, anxiety, and uncertainty; these occlusions have been ignored in most other affective models. The essential information channels in this model are facial expressions, head movement, and eye gaze.
    The model can serve as an assistive technology for tutors to monitor and detect the behavior of all students at the same time and help predict negative affective states during the learning process.
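    The CNN + LSTM combination described here can be sketched in a few lines (assuming PyTorch): a small CNN summarizes each webcam frame, an LSTM models the frame sequence, and a linear head scores the five affective-cognitive states. All layer sizes are illustrative assumptions, not the authors' configuration:

    ```python
    import torch
    import torch.nn as nn

    class AffectCNNLSTM(nn.Module):
        def __init__(self, n_states=5, feat_dim=32):
            super().__init__()
            self.cnn = nn.Sequential(            # per-frame feature extractor
                nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, feat_dim),
            )
            self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
            self.head = nn.Linear(64, n_states)  # confidence .. boredom scores

        def forward(self, clips):                # clips: (batch, time, 3, H, W)
            b, t = clips.shape[:2]
            feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)
            return self.head(out[:, -1])         # classify from the last time step

    logits = AffectCNNLSTM()(torch.zeros(2, 8, 3, 64, 64))
    ```

    This shape reflects the division of labor the abstract describes: the CNN reduces each frame's dimensionality, and the LSTM captures the temporal dynamics (head movement, gaze shifts) across frames.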

    Correlation of personality traits and achievement emotions with engagement and performance in MOOCs

    Massive Open Online Courses (MOOCs) face a variety of challenges, and perhaps one of the most important is that many students do not remain engaged and complete courses after enrollment. This problem may be related to several factors, motivating this research to investigate subjective factors. Therefore, this study aimed to identify the personality traits and achievement emotions of MOOC students and determine whether they correlate with engagement (measured as access to course materials) and academic success (performance and completion). This is a multiple case study in which three MOOCs of the Lúmina platform were investigated. Personality traits were assessed using the “Ten Item Personality Inventory” (TIPI). Achievement emotions were identified in two ways: by using the “Achievement Emotions Questionnaire” (AEQ) and by sentiment analysis algorithms applied to the course forums. The sentiment analysis was supported by a dictionary customized for Portuguese, which had as input the “Achievement Emotions Adjective List” (AEAL). Correlational analyses were performed, and an open questionnaire was administered to the students to elucidate elements not clarified through the quantitative techniques. Agreeableness, emotional stability, and openness to experience showed statistically significant but weak correlations with engagement.
    Positive personality traits (openness to experience and conscientiousness) correlated weakly with positive achievement emotions (pleasure in learning and pride). Although the AEQ results showed a prevalence of positive emotions, no significant correlation was found with course performance and completion. The AEAL identified more achievement emotions (positive and negative) in the forums, proving better able to capture the achievement emotions experienced by students. The open questionnaire revealed that course completion was related to different factors (e.g., liking the topic, interest in new knowledge, the quality of the classes, the teacher's didactics, the certificate), which explains why the emotions collected by the AEQ showed no effect on academic success. Moreover, students reported that their personality traits had a positive effect on course performance and completion.
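    The dictionary-based sentiment analysis described above can be sketched minimally: forum posts are tokenized and matched against an adjective lexicon keyed by achievement emotion and polarity. The lexicon entries below are illustrative English stand-ins, not the actual Portuguese AEAL content:

    ```python
    import re
    from collections import Counter

    # Illustrative lexicon: adjective -> (achievement emotion, polarity).
    LEXICON = {
        "proud": ("pride", "positive"),
        "happy": ("enjoyment", "positive"),
        "anxious": ("anxiety", "negative"),
        "bored": ("boredom", "negative"),
    }

    def score_post(text):
        """Count achievement-emotion hits in one forum post."""
        counts = Counter()
        for token in re.findall(r"[a-zà-ú]+", text.lower()):
            if token in LEXICON:
                emotion, polarity = LEXICON[token]
                counts[(emotion, polarity)] += 1
        return counts

    hits = score_post("I am proud of this module, though a bit anxious about the exam.")
    ```

    Aggregating such counts per student yields the emotion frequencies that the study correlated with engagement and course completion.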

    Presentation adaptation for multimodal interface systems: Three essays on the effectiveness of user-centric content and modality adaptation

    The use of devices is becoming increasingly ubiquitous and the contexts of their users more and more dynamic. This often leads to situations where one communication channel is rather impractical. Text-based communication is particularly inconvenient when the hands are already occupied with another task. Audio messages induce privacy risks and may disturb other people if used in public spaces. Multimodal interfaces thus offer users the flexibility to choose between multiple interaction modalities. While the choice of a suitable input modality lies in the hands of the users, they may also require output in a different modality depending on their situation. To adapt the output of a system to a particular context, rules are needed that specify how information should be presented given the users’ situation and state. Therefore, this thesis tests three adaptation rules that – based on observations from cognitive science – have the potential to improve the interaction with an application by adapting the presented content or its modality. Following modality alignment, the output (audio versus visual) of a smart home display is matched with the user’s input (spoken versus manual) to the system. Experimental evaluations reveal that preferences for an input modality are initially too unstable to infer a clear preference for either interaction modality. Thus, the data shows no clear relation between the users’ modality choice for the first interaction and their attitude towards output in different modalities. To apply multimodal redundancy, information is displayed in multiple modalities. An application of the rule in a video conference reveals that captions can significantly reduce confusion. However, the effect is limited to confusion resulting from language barriers, whereas contradictory auditory reports leave the participants in a state of confusion independent of whether captions are available or not. 
    We therefore suggest activating captions only when the facial expression of a user – captured by action units, expressions of positive or negative affect, and a reduced blink rate – implies that the captions would effectively improve comprehension. Content filtering in movies puts into the spotlight the character that the users prefer, according to the distribution of their gaze across elements of the previous scene. If preferences are predicted with machine learning classifiers, this has the potential to significantly improve the users' involvement compared to scenes featuring elements that the user does not prefer. Focused attention is additionally higher compared to scenes in which multiple characters take a lead role.
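    The gaze-driven content-filtering rule reduces to selecting the scene element with the largest share of the viewer's gaze. The function and dwell-time figures below are an illustrative sketch, not the thesis' implementation:

    ```python
    def preferred_character(dwell_ms):
        """Pick the element with the highest gaze dwell time; None if no gaze data."""
        if sum(dwell_ms.values()) == 0:
            return None  # no gaze recorded: fall back to the default cut
        return max(dwell_ms, key=dwell_ms.get)

    # Hypothetical dwell times (ms) accumulated over the previous scene.
    scene_gaze = {"character_a": 4200, "character_b": 1800, "background": 900}
    lead = preferred_character(scene_gaze)  # "character_a"
    ```

    Replacing this argmax with a trained classifier over richer gaze features is the machine-learning variant the text reports as improving involvement.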