19 research outputs found

    Affect-driven Engagement Measurement from Videos

    In education and intervention programs, a person's engagement has been identified as a major factor in successful program completion. Automatic measurement of engagement provides useful information for instructors to meet program objectives and to individualize program delivery. In this paper, we present a novel approach to video-based engagement measurement in virtual learning programs. We propose to use affect states, continuous values of valence and arousal extracted from consecutive video frames, along with a new latent affective feature vector and behavioral features, for engagement measurement. Deep-learning-based temporal models and traditional machine-learning-based non-temporal models are trained and validated on frame-level and video-level features, respectively. In addition to conventional centralized learning, we also implement the proposed method in a decentralized federated-learning setting and study the effect of model personalization on engagement measurement. We evaluated the performance of the proposed method on the only two publicly available video engagement measurement datasets, DAiSEE and EmotiW, containing videos of students in online learning programs. Our experiments show a state-of-the-art engagement-level classification accuracy of 63.3%, with disengagement videos correctly classified, on the DAiSEE dataset, and a regression mean squared error of 0.0673 on the EmotiW dataset. Our ablation study shows the effectiveness of incorporating affect states in engagement measurement. We interpret the experimental findings in terms of psychological concepts from the field of engagement. Comment: 13 pages, 8 figures, 7 tables
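    The video-level (non-temporal) branch described above can be sketched as follows: frame-wise valence/arousal values are aggregated into a fixed-length per-video descriptor and fed to a conventional classifier. The aggregation statistics, the choice of random-forest classifier, and the synthetic data are illustrative assumptions, not the paper's exact pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def video_features(valence, arousal):
        """Aggregate frame-level affect series into a fixed-length video descriptor."""
        feats = []
        for series in (valence, arousal):
            feats += [series.mean(), series.std(), series.min(), series.max()]
        return np.array(feats)

    # Synthetic stand-in: 200 videos, 300 frames each, 4 engagement levels.
    X = np.stack([
        video_features(rng.uniform(-1, 1, 300), rng.uniform(0, 1, 300))
        for _ in range(200)
    ])
    y = rng.integers(0, 4, size=200)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict(X[:3]))  # per-video engagement levels 0..3
    ```

    The temporal branch would instead consume the raw frame-level valence/arousal sequences directly, e.g. with a recurrent model.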

    iFocus: A Framework for Non-intrusive Assessment of Student Attention Level in Classrooms

    The process of learning is determined not merely by what the instructor teaches, but also by how the student receives that information. An attentive student will naturally be more open to obtaining knowledge than a bored or frustrated one. In recent years, tools such as skin-temperature measurements and body-posture calculations have been developed for determining a student's affect, or emotional state of mind. Measuring eye-gaze data is particularly noteworthy, however, in that it can collect measurements non-intrusively while also being relatively simple to set up and use. This paper details how data obtained from such an eye tracker can be used to predict a student's attention, as a measure of affect, over the course of a class. From this research, an accuracy of 77% was achieved using the Extreme Gradient Boosting technique of machine learning. The outcome indicates that eye gaze can indeed be used as a basis for constructing a predictive model
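    A minimal sketch of this kind of gaze-based attention classifier is shown below. The paper uses Extreme Gradient Boosting; scikit-learn's `GradientBoostingClassifier` serves here as a stand-in, and the window-level gaze features (fixation count, fixation duration, saccade amplitude, pupil diameter) are plausible assumptions rather than the paper's actual feature set.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n = 300  # labeled time windows from class sessions

    # Synthetic gaze features, one row per window.
    X = np.column_stack([
        rng.poisson(12, n),        # fixations per window
        rng.normal(250, 60, n),    # mean fixation duration (ms)
        rng.normal(4.0, 1.2, n),   # mean saccade amplitude (deg)
        rng.normal(3.5, 0.4, n),   # mean pupil diameter (mm)
    ])
    y = rng.integers(0, 2, n)      # attentive (1) vs. inattentive (0)

    scores = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=5)
    print(f"mean CV accuracy: {scores.mean():.2f}")
    ```

    On real labeled gaze data (rather than the random labels here), cross-validated accuracy is the figure comparable to the 77% reported above.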

    What kinds of thoughts does reading fiction evoke? The relationship between thought types during reading and readers' eye movements, and the effect of text emotionality on the occurrence of thought types

    This study examined what types of thoughts arise while reading fiction and whether they are related to reading fluency. It also examined whether particular thought types occur more often after reading neutral or emotional paragraphs. Participants (N = 70) read an excerpt from a work of fiction containing 30 target paragraphs, of which 10 were rated as neutral in emotional valence, 10 as positive, and 10 as negative. Readers' eye movements were recorded, and after each target paragraph they were asked to rate the characteristics of their current thoughts. The responses were analyzed with principal component analysis to identify thought types. Linear and generalized linear mixed models were used to examine whether each thought type, alone or together with word length or word frequency, predicted the probability of skipping a word on first-pass reading, first-fixation duration, first-pass reading time, and total viewing time. Finally, linear mixed models were used to compare the occurrence of thought types after reading neutral versus emotional paragraphs. Five thought types were found, labeled immersion, personal worries, wandering in the future, deliberate and verbal, and others in the past. Immersion, deliberate and verbal, and others in the past predicted greater sensitivity of eye movements to word length and frequency. In addition, immersion was associated with a lower probability of skipping a word on first-pass reading and with shorter total viewing time per word, whereas the deliberate and verbal thought type was associated with longer first-pass reading time per word. Unlike the other thought types, personal worries and wandering in the future were associated with a greater number of words skipped on first-pass reading; personal worries were additionally associated with longer first-fixation durations and first-pass reading times.
    It was also found that immersion occurred more often after reading positive than neutral paragraphs, whereas wandering in the future and deliberate and verbal thoughts occurred most often after neutral paragraphs. Paragraph emotionality had no effect on the occurrence of personal worries or the others-in-the-past thought type. The results support previous findings and hypotheses linking immersion to fluent reading and mind wandering to disrupted reading. They also suggest that text emotionality affects the occurrence of some, but not all, thoughts during reading. However, the results do not allow conclusions about what thought types reading fiction generally evokes, whether thought types are related to the construction of a situation model of the text, or how, for example, more arousing emotional passages affect the occurrence of thought types

    Multimodal Motivation Modelling and Computing towards Motivationally Intelligent ELearning Systems

    Persistent motivation to engage with e-learning systems is essential for users' learning performance. Learners' motivation is traditionally assessed using subjective, self-reported data, which is time-consuming to collect and interrupts the learning process. To address this issue, this paper proposes a novel framework for multimodal assessment of learners' motivation in e-learning environments, to inform intelligent e-learning systems that can deliver dynamic, context-aware, and personalized services or interventions to maintain learners' motivation during use. The applicability of the framework was evaluated in an empirical study in which we combined eye-tracking and electroencephalogram (EEG) sensors to produce a multimodal dataset. The dataset was then processed and used to develop a machine learning classifier for motivation assessment by predicting the levels of a range of motivational factors, representing the multiple dimensions of motivation. We investigated the performance of the classifier and identified the most and least accurately predicted motivational factors. We also assessed the contribution of different EEG and eye-gaze features to motivation assessment. Our study has revealed valuable insights into the role played by brain activity and eye movements in predicting the levels of different motivational factors. Initial results using a logistic regression classifier achieved significant predictive power for all the motivational factors studied, with accuracies between 68.1% and 92.8%. The present work demonstrates the applicability of the proposed framework for multimodal motivation assessment and will inspire future research towards motivationally intelligent e-learning systems
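    The per-factor prediction scheme described above can be sketched as one logistic regression per motivational factor over concatenated EEG and gaze features. The factor names, feature dimensions, and random labels below are illustrative assumptions, not the study's actual protocol.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 120                                    # labeled learning episodes
    eeg = rng.normal(size=(n, 10))             # e.g. band powers per channel/band
    gaze = rng.normal(size=(n, 6))             # e.g. fixation/saccade statistics
    X = np.hstack([eeg, gaze])                 # one multimodal vector per episode

    # Hypothetical motivational factors, each with a binary low/high label.
    factors = ["attention", "relevance", "confidence", "satisfaction"]
    labels = {f: rng.integers(0, 2, n) for f in factors}

    models = {f: LogisticRegression(max_iter=1000).fit(X, labels[f]) for f in factors}
    train_acc = {f: models[f].score(X, labels[f]) for f in factors}
    print(train_acc)
    ```

    Comparing per-factor accuracies in this way is what yields a range such as the 68.1%-92.8% reported above; inspecting each model's coefficients indicates which EEG or gaze features drive each factor.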

    Tutor In-sight: Guiding and Visualizing Students Attention with Mixed Reality Avatar Presentation Tools

    Remote conferencing systems are increasingly used to supplement or even replace in-person teaching. However, prevailing conferencing systems restrict the teacher’s representation to a webcam live-stream, hamper the teacher’s use of body language, and result in students’ decreased sense of co-presence and participation. While Virtual Reality (VR) systems may increase student engagement, the teacher may not have the time or expertise to conduct the lecture in VR. To address this issue and bridge the requirements between students and teachers, we have developed Tutor In-sight, a Mixed Reality (MR) avatar augmented into the student’s workspace based on four design requirements derived from the existing literature, namely: integrated virtual with physical space, improved teacher’s co-presence through the avatar, directed attention with auto-generated body language, and a usable workflow for teachers. Two user studies were conducted from the perspectives of students and teachers to determine the advantages of Tutor In-sight in comparison to two existing conferencing systems, Zoom (video-based) and Mozilla Hubs (VR-based). The participants of both studies favoured Tutor In-sight. Among other things, this main finding indicates that Tutor In-sight satisfied the needs of both teachers and students. In addition, the participants’ feedback was used to empirically determine the four main teacher requirements and the four main student requirements in order to improve the future design of MR educational tools

    Automated Gaze-Based Mind Wandering Detection during Computerized Learning in Classrooms

    We investigate the use of commercial off-the-shelf (COTS) eye-trackers to automatically detect mind wandering—a phenomenon involving a shift in attention from task-related to task-unrelated thoughts—during computerized learning. Study 1 (N = 135 high-school students) tested the feasibility of COTS eye tracking while students learned biology with an intelligent tutoring system called GuruTutor in their classroom. We could successfully track eye gaze in 75% (both eyes tracked) and 95% (one eye tracked) of the cases for the 85% of sessions where gaze was successfully recorded. In Study 2, we used this data to build automated, student-independent detectors of mind wandering, obtaining accuracies (mind wandering F1 = 0.59) substantially better than chance (F1 = 0.24). Study 3 investigated the context-generalizability of mind wandering detectors, finding that models trained on data collected in a controlled laboratory generalized to the classroom more successfully than the reverse. Study 4 investigated gaze- and video-based mind wandering detection, finding that gaze-based detection was superior and that multimodal detection yielded an improvement only in limited circumstances. We tested live mind wandering detection on a new sample of 39 students in Study 5 and found that detection accuracy (mind wandering F1 = 0.40) was considerably above chance (F1 = 0.24), albeit lower than the offline detection accuracy from Study 2 (F1 = 0.59), a finding attributable to the handling of missing data. We discuss our next steps towards developing gaze-based attention-aware learning technologies to increase engagement and learning by combating mind wandering in classroom contexts
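    The evaluation used throughout these studies compares a detector's mind-wandering F1 against a chance baseline that guesses positives at the base rate. A minimal sketch with synthetic labels and predictions (the base rate and detector noise level are assumptions):

    ```python
    import numpy as np
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(7)
    n = 1000
    y_true = (rng.random(n) < 0.30).astype(int)   # ~30% mind-wandering base rate

    # A noisy detector that agrees with the ground truth 70% of the time...
    detector_pred = np.where(rng.random(n) < 0.70, y_true, 1 - y_true)
    # ...versus a chance baseline that guesses positives at the base rate.
    chance_pred = (rng.random(n) < y_true.mean()).astype(int)

    print("detector F1:", round(f1_score(y_true, detector_pred), 2))
    print("chance F1:  ", round(f1_score(y_true, chance_pred), 2))
    ```

    Because mind wandering is the minority class, chance-level F1 sits near the base rate, which is why figures like F1 = 0.40 vs. chance F1 = 0.24 represent a meaningful improvement even though 0.40 looks low in absolute terms.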

    The scientific study of passive thinking: Methods of mind wandering research

    The science of mind wandering has rapidly expanded over the past 20 years. During this boom, mind wandering researchers have relied on self-report methods, where participants rate whether their minds were wandering. This is not an historical quirk. Rather, we argue that self-report is indispensable for researchers who study passive phenomena like mind wandering. We consider purportedly “objective” methods that measure mind wandering with eye tracking and machine learning. These measures are validated in terms of how well they predict self-reports, which means that purportedly objective measures of mind wandering retain a subjective core. Mind wandering science cannot break from the cycle of self-report. Skeptics about self-report might conclude that mind wandering science has methodological foundations of sand. We take a rather more optimistic view. We present empirical and philosophical reasons to be confident in self-reports about mind wandering. Empirically, these self-reports are remarkably consistent in their contents and behavioral and neural correlates. Philosophically, self-reports are consistent with our best theories about the function of mind wandering. We argue that this triangulation gives us reason to trust both theory and method

    Computer detection of spatial visualization in a location-based task

    An untapped area of productivity gains hinges on automatic detection of user cognitive characteristics. One such characteristic, spatial visualization ability, relates to users’ computer performance. In this dissertation, we describe a novel, behavior-based, spatial visualization detection technique. The technique does not depend on sensors or knowledge of the environment and can be adopted on generic computers. In a Census Bureau location-based address verification task, detection rates exceeded 80% and approached 90%
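    A behavior-based, sensor-free detector of this kind would derive features from ordinary interaction logs available on any generic computer. The sketch below computes two such features from a mouse trace; the specific features (path efficiency, pause rate) are illustrative assumptions, not the dissertation's actual feature set.

    ```python
    import math

    def path_features(points):
        """points: [(t, x, y), ...] mouse samples for one task attempt."""
        # Total distance travelled along the cursor path.
        dist = sum(math.hypot(x2 - x1, y2 - y1)
                   for (_, x1, y1), (_, x2, y2) in zip(points, points[1:]))
        # Straight-line distance from start to end.
        (_, x0, y0), (_, xn, yn) = points[0], points[-1]
        straight = math.hypot(xn - x0, yn - y0)
        # Count inter-sample gaps longer than half a second as pauses.
        pauses = sum(1 for (t1, *_), (t2, *_) in zip(points, points[1:])
                     if t2 - t1 > 0.5)
        return {
            "path_efficiency": straight / dist if dist else 1.0,  # 1.0 = perfectly direct
            "pause_rate": pauses / max(len(points) - 1, 1),
        }

    trace = [(0.0, 0, 0), (0.2, 30, 10), (1.0, 60, 15), (1.3, 100, 40)]
    print(path_features(trace))
    ```

    Feature vectors like these, computed per task attempt, could then feed any standard classifier to predict high versus low spatial visualization ability.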