
    FACETEQ: A novel platform for measuring emotion in VR

    FaceTeq prototype v.05 is a wearable technology for measuring facial expressions and biometric responses in experimental Virtual Reality studies. Developed in the Emteq Ltd laboratory, FaceTeq can open new avenues for virtual reality research by combining high-performance patented dry-sensor technology, proprietary algorithms, and real-time data acquisition and streaming. The FaceTeq project was founded with the aim of providing a human-centred tool for emotion expression, affective human-computer interaction, and social virtual environments. The proposed poster will exhibit the hardware and its functionality.
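
    As a rough illustration of the kind of real-time acquisition and streaming described above, the sketch below shows how a client might consume a stream of facial-sensor samples. The FaceTeq protocol is not described in this abstract, so the serial port, baud rate, and CSV sample format here are purely hypothetical assumptions.

        import serial  # pyserial

        PORT = "/dev/ttyUSB0"  # hypothetical device port
        BAUD = 115200          # hypothetical baud rate

        def stream_samples(n_samples=10):
            """Read newline-delimited CSV samples: a timestamp, then one value per sensor."""
            with serial.Serial(PORT, BAUD, timeout=1) as dev:
                for _ in range(n_samples):
                    line = dev.readline().decode("ascii", errors="ignore").strip()
                    if not line:
                        continue  # read timed out; no sample available
                    fields = line.split(",")
                    yield float(fields[0]), [float(v) for v in fields[1:]]

        for timestamp, sensor_values in stream_samples():
            print(f"{timestamp:.3f}s {sensor_values}")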

    Inside Out: Detecting Learners' Confusion to Improve Interactive Digital Learning Environments

    Confusion is an emotion that is likely to occur while learning complex information. This emotion can be beneficial to learners in that it can foster engagement, leading to deeper understanding. However, if learners fail to resolve their confusion, its effect can be detrimental to learning. Such detrimental learning experiences are particularly concerning within digital learning environments (DLEs), where a teacher is not physically present to monitor learner engagement and adapt the learning experience accordingly. With better information about a learner's emotion and behavior, however, it is possible to improve the design of interactive DLEs (IDLEs) not only to promote productive confusion but also to prevent overwhelming confusion. This article reviews different methodological approaches for detecting confusion, such as self-report, behavioral, and physiological measures, and discusses their implications within the theoretical framework of a zone of optimal confusion. The specificities of several methodologies and their potential application in IDLEs are discussed.
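
    The detection problem this abstract describes reduces to supervised classification: per-trial features (physiological or behavioral) paired with confusion labels, e.g. from self-report. The snippet below illustrates that framing on synthetic data; the feature set, labels, and classifier choice are illustrative assumptions, not the article's method.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials = 200

        # Hypothetical per-trial features, e.g. mean skin conductance,
        # heart-rate change, and gaze dwell time (synthetic stand-ins).
        X = rng.normal(size=(n_trials, 3))

        # Synthetic self-report labels: 1 = confused, 0 = not confused.
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_trials) > 0).astype(int)

        clf = LogisticRegression()
        print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())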

    Using Facial EMG to Track Emotion During Language Comprehension: Past, Present, and Future

    Beyond recognizing words, parsing sentences, building situation models, and other cognitive accomplishments, language comprehension always involves some degree of emotion too, with or without awareness. Language excites, bores, or otherwise moves us, and studying how it does so is crucial. This chapter examines the potential of facial electromyography (EMG) to study language-elicited emotion. After discussing the limitations of self-report measures, we examine various other tools to tap into emotion, and then zoom in on the electrophysiological recording of facial muscle activity. Surveying psycholinguistics, communication science, and other fields, we provide an exhaustive qualitative review of the relevant facial EMG research to date, exploring 55 affective comprehension experiments with single words, phrases, sentences, or larger pieces of discourse. We discuss the outcomes of this research and evaluate the various practices, biases, and omissions in the field. We also present the fALC model, a new conceptual model that lays out the various potential sources of facial EMG activity during language comprehension. Our review suggests that facial EMG recording is a powerful tool for exploring the conscious as well as unconscious aspects of affective language comprehension. However, we also think it is time to take on a bit more complexity in this research field: for example, by considering the possibility that multiple active generators can simultaneously contribute to an emotional facial expression, by studying how the communicator’s stance and social intention can give rise to emotion, and by studying facial expressions not just as indexes of inner states, but also as social tools that enrich everyday verbal interactions.
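
    For readers unfamiliar with the measurement itself, facial EMG analyses conventionally band-pass filter the raw signal, rectify it, smooth it to an amplitude envelope, and express responses as change from a pre-stimulus baseline. The sketch below implements that conventional pipeline on synthetic data; the sampling rate and filter cutoffs are common choices, not parameters prescribed by this chapter.

        import numpy as np
        from scipy.signal import butter, filtfilt

        FS = 1000  # sampling rate in Hz (assumed)

        def emg_envelope(raw, fs=FS, band=(20, 400), smooth_hz=10):
            """Band-pass filter, rectify, and low-pass smooth a raw EMG trace."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, raw)      # remove drift and high-frequency noise
            rectified = np.abs(filtered)        # full-wave rectification
            b2, a2 = butter(4, smooth_hz / (fs / 2), btype="low")
            return filtfilt(b2, a2, rectified)  # amplitude envelope

        # Example: corrugator response to a stimulus presented at t = 1 s.
        raw = np.random.default_rng(1).normal(size=3 * FS)  # 3 s of synthetic EMG
        env = emg_envelope(raw)
        baseline = env[:FS].mean()  # mean over the 1 s pre-stimulus window
        print("mean change from baseline:", (env[FS:] - baseline).mean())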

    On the Recognition of Emotion from Physiological Data

    This work encompasses several objectives, but is primarily concerned with an experiment in which 33 participants were shown 32 slides in order to create 'weakly induced emotions'. Recordings of the participants' physiological state were taken, as well as a self-report of their emotional state. We then used an assortment of classifiers to predict emotional state from the recorded physiological signals, a process known as Physiological Pattern Recognition (PPR). We investigated techniques for recording, processing, and extracting features from six different physiological signals: Electrocardiogram (ECG), Blood Volume Pulse (BVP), Galvanic Skin Response (GSR), Electromyography (EMG) of the corrugator muscle, skin temperature of the finger, and respiratory rate. We improved the state of PPR emotion detection by distinguishing 9 different weakly induced emotional states at nearly 65% accuracy, an increase in the number of states readily detectable. The work presents many investigations into numerical feature extraction from physiological signals and includes a chapter dedicated to collating and trialing facial electromyography techniques. We also created a hardware device to collect participants' self-reported emotional states, which yielded several improvements to the experimental procedure.
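
    A minimal sketch of the PPR workflow this abstract describes: extract simple numerical features from each physiological signal, concatenate them per trial, and train a multiclass classifier over the 9 emotional states. The data here are synthetic, and the feature set and classifier are illustrative assumptions, not the thesis's exact pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)

        def extract_features(signal):
            """Simple per-signal statistics: mean, std, min, max, mean absolute difference."""
            return [signal.mean(), signal.std(), signal.min(), signal.max(),
                    np.abs(np.diff(signal)).mean()]

        # Synthetic stand-in for 33 participants x 32 slides, six signals per trial.
        n_trials, n_signals, n_samples = 33 * 32, 6, 256
        trials = rng.normal(size=(n_trials, n_signals, n_samples))
        X = np.array([[f for s in trial for f in extract_features(s)] for trial in trials])
        y = rng.integers(0, 9, size=n_trials)  # 9 weakly induced emotion labels (synthetic)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        print("accuracy on synthetic data:", cross_val_score(clf, X, y, cv=5).mean())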