
    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detection of human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict their ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one: by combining body posture, visual attention and emotion, the multimodal approach reaches up to 93% accuracy when determining a player's chess expertise, whereas the unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
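
    To make the fusion idea concrete, the sketch below shows a minimal late-fusion rule in Python: per-modality expertise probabilities are averaged and thresholded into a single decision. The scores, modality names, equal weights and threshold are all illustrative assumptions, not values or methods taken from the paper.

        import numpy as np

        # Hypothetical per-modality classifier outputs: probability that the
        # player is an expert, as estimated from each signal stream alone.
        # These numbers are illustrative placeholders, not results from the paper.
        unimodal_scores = {
            "posture": 0.74,
            "gaze": 0.81,
            "emotion": 0.62,
        }

        # Equal-weight late fusion: average the per-modality probabilities and
        # threshold the result. The fusion rule is an assumption for illustration.
        weights = {m: 1.0 / len(unimodal_scores) for m in unimodal_scores}
        fused = sum(weights[m] * p for m, p in unimodal_scores.items())

        label = "expert" if fused >= 0.5 else "intermediate"
        print(f"fused score = {fused:.2f} -> {label}")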

    The F@ Framework of Designing Awareness Mechanisms in Instant Messaging

    This paper presents our research on awareness support in Instant Messaging (IM). The paper starts with a brief overview of an empirical study of IM that used an online survey and face-to-face interviews to identify user needs for awareness support. The study identified a need to support four aspects of awareness: awareness of multiple concurrent conversations, conversational awareness, presence awareness of a group conversation, and visibility of moment-to-moment listeners and viewers. Based on the empirical study and existing research on awareness, we have developed the F@ (read as "fat") framework of awareness. F@ comprises an abstract level and a concrete level: the former includes an in-depth description of various awareness aspects in IM, whilst the latter utilises temporal logic to formalise fundamental time-related awareness aspects. F@ helps developers gain a better understanding of awareness and thereby design usable mechanisms to support it. Applying F@, we have designed several mechanisms to support various aspects of awareness in IM.
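
    As a hedged illustration of what a time-related awareness property might look like at the concrete level, the Python sketch below checks an "eventually"-style property over a trace of IM events. The event model and the property are assumptions made for illustration; they are not the paper's actual temporal-logic formulas.

        from dataclasses import dataclass

        @dataclass
        class Event:
            kind: str      # "msg" (message arrives) or "seen" (recipient views it)
            msg_id: int

        def eventually_seen(trace: list[Event]) -> bool:
            """True iff every 'msg' event is followed by a 'seen' event
            with the same msg_id later in the trace."""
            for i, e in enumerate(trace):
                if e.kind == "msg":
                    if not any(f.kind == "seen" and f.msg_id == e.msg_id
                               for f in trace[i + 1:]):
                        return False
            return True

        trace = [Event("msg", 1), Event("msg", 2), Event("seen", 1)]
        print(eventually_seen(trace))  # False: message 2 was never seen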

    Assessing the Effectiveness of Automated Emotion Recognition in Adults and Children for Clinical Investigation

    Recent success stories in automated object or face recognition, partly fuelled by deep learning artificial neural network (ANN) architectures, have led to the advancement of biometric research platforms and, to some extent, the resurrection of Artificial Intelligence (AI). In line with this general trend, inter-disciplinary approaches have been taken to automate the recognition of emotions in adults or children for the benefit of various applications, such as the identification of children's emotions prior to a clinical investigation. Within this context, it turns out that automating emotion recognition is far from straightforward, with several challenges arising for both science (e.g., methodology underpinned by psychology) and technology (e.g., the iMotions biometric research platform). In this paper, we present a methodology, an experiment and interesting findings, which raise the following research questions for the recognition of emotions and attention in humans: a) the adequacy of well-established techniques such as the International Affective Picture System (IAPS), b) the adequacy of state-of-the-art biometric research platforms, and c) the extent to which emotional responses may differ between children and adults. Our findings, and first attempts to answer some of these research questions, are all based on a mixed sample of adults and children who took part in the experiment, resulting in a statistical analysis of numerous variables related to participants' responses, captured both automatically and interactively, to a sample of IAPS pictures.
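
    The sketch below illustrates the kind of group comparison such a statistical analysis might include: a Welch's t-test on valence ratings for one IAPS picture. The data are fabricated placeholders and the choice of test is an assumption; neither is taken from the study itself.

        import numpy as np
        from scipy import stats

        # Hypothetical per-participant valence ratings for one IAPS picture.
        adults = np.array([6.1, 5.8, 6.4, 5.5, 6.0, 5.9])
        children = np.array([7.2, 6.8, 7.5, 6.9, 7.1, 7.4])

        # Welch's t-test (no equal-variance assumption) on the two groups.
        t, p = stats.ttest_ind(adults, children, equal_var=False)
        print(f"t = {t:.2f}, p = {p:.4f}")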

    A Reproducible Study on Remote Heart Rate Measurement

    This paper studies the problem of reproducible research in remote photoplethysmography (rPPG). Most of the work published in this domain is assessed on privately-owned databases, making it difficult to evaluate proposed algorithms in a standard and principled manner. As a consequence, we present a new, publicly available database containing a relatively large number of subjects recorded under two different lighting conditions. In addition, three state-of-the-art rPPG algorithms from the literature were selected, implemented and released as open-source free software. After a thorough, unbiased experimental evaluation in various settings, it is shown that none of the selected algorithms is precise enough to be used in a real-world scenario.
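
    For readers unfamiliar with rPPG, the following minimal Python sketch shows the classic green-channel pipeline (mean green value over a face region, band-pass filtering, spectral peak picking) on simulated data. It is an illustrative baseline under assumed parameters, not one of the three algorithms implemented and released by the authors.

        import numpy as np
        from scipy.signal import butter, filtfilt

        fps = 30.0                       # camera frame rate (assumed)
        t = np.arange(0, 20, 1 / fps)    # 20 s of video
        # Placeholder signal: mean green value of the face ROI per frame,
        # simulated here as a 1.2 Hz (72 bpm) pulse plus noise.
        green = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.randn(t.size) * 0.3

        # Band-pass to the plausible heart-rate band (0.7-4 Hz, i.e. 42-240 bpm).
        b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
        filtered = filtfilt(b, a, green)

        # Dominant spectral peak -> heart-rate estimate in beats per minute.
        spectrum = np.abs(np.fft.rfft(filtered))
        freqs = np.fft.rfftfreq(filtered.size, d=1 / fps)
        bpm = freqs[np.argmax(spectrum)] * 60
        print(f"estimated heart rate: {bpm:.0f} bpm")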

    Online Measuring of Available Resources

    This paper presents a proposal for measuring available mental resources during the accomplishment of a task. Our proposal consists of measuring the emotions provoked by perceived self-efficacy in the execution of the task. Self-efficacy is one of the most important factors affecting the resources that a person puts at the disposal of a task: when people perceive that they are not being effective, they activate more resources to improve their performance. This self-efficacy is reflected in the emotions the person experiences: good efficacy provokes positive emotions, and poor efficacy negative emotions. The results of our study show that poor execution leads to negative emotions and to psychophysiological activation as measured by pupil dilation. Based on these results, we propose that a possible method for measuring available resources during the execution of a task is the online measurement of emotions.
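
    A minimal sketch of the baseline-corrected pupil-dilation measure implied by the abstract is shown below; the sample values, sampling scheme and baseline window length are assumptions, as the abstract does not specify this computation.

        import numpy as np

        samples = np.array([3.1, 3.0, 3.1, 3.2, 3.6, 3.8, 3.9, 3.7])  # pupil diameter (mm)
        baseline_n = 3  # samples recorded before the task event (assumed window)

        baseline = samples[:baseline_n].mean()
        dilation = samples[baseline_n:] - baseline  # change relative to baseline

        print(f"baseline = {baseline:.2f} mm, "
              f"peak dilation = {dilation.max():.2f} mm")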

    A semiotic perspective on webconferencing-supported language teaching.

    In webconferencing-supported teaching, the webcam mediates and organizes the pedagogical interaction. Previous research has provided a mixed picture of the use of the webcam: while it is seen as a useful medium for personalizing the interlocutors' relationship, regulating interaction and facilitating learner comprehension and involvement, the limited access to visual cues that the webcam provides is sometimes felt to be useless or even disruptive. This study examines the meaning-making potential of the webcam in pedagogical interactions from a semiotic perspective by exploring how trainee teachers use the affordances of the webcam to produce non-verbal cues that may be useful for mutual comprehension. The research context is a telecollaborative project in which trainee teachers of French as a foreign language met for online sessions in French with undergraduate Business students at an Irish university. Using multimodal transcriptions of the interaction data from these sessions, screenshot data, and students' post-course interviews, it was found, firstly, that although a head-and-shoulders framing shot was favoured by the trainee teachers, none of the three framing types identified appears to be optimal for desktop videoconferencing. Secondly, there was a loss between the number of gestures performed by the trainee teachers and those that were visible to the students. Thirdly, when trainee teachers were able to coordinate the audio and kinesic modalities, communicative gestures that were framed, and held long enough to be perceived by the learners, were more likely to be valuable for mutual comprehension. The study highlights the need for trainee teachers to develop critical semiotic awareness so as to better perceive the image they project of themselves, actualise the potential of the webcam and give more prominence to their online teacher presence.