11 research outputs found

    More than buttons on controllers: engaging social interactions in narrative VR games through social attitudes detection

    People can understand how human interaction unfolds and can pinpoint social attitudes such as showing interest or social engagement with a conversational partner. However, summarising this understanding as a set of rules is difficult, as our judgement is often subtle and subconscious. Hence, it is challenging to program agents or non-player characters (NPCs) to react appropriately to social signals, which is important for immersive narrative games in Virtual Reality (VR). We present a collaboration between two game studios (Maze Theory and Dream Reality Interactive) and academia to develop an immersive machine learning (ML) pipeline for detecting social engagement. Here we introduce the motivation and methodology of the immersive ML pipeline, then cover the motivation for the industry-academia collaboration, how it progressed, the implications of the joint work for the industry, and reflective insights on the collaboration. Overall, we highlight the industry-academia collaborative work on an immersive ML pipeline for detecting social engagement, and we demonstrate how creatives can use ML and VR to expand their ability to design more engaging commercial games.

    Are You Still With Me? Continuous Engagement Assessment From a Robot's Point of View

    Continuously measuring users' engagement with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning, improved metrics of interaction quality, and better-informed interaction design and behavior optimization. However, engagement is often considered highly multi-faceted and difficult to capture in a workable, generic computational model that can serve as an overall measure of engagement. Building on the intuitive way humans can assess a situation for its degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) that enables robots to compute a single scalar engagement value during interactions with humans, from standard video streams obtained from the point of view of the interacting robot. The model is based on a long-term dataset from an autonomous tour-guide robot deployed in a public museum, with continuous annotation of a numeric engagement assessment by three independent coders. We show that this model not only predicts engagement very well in our own application domain but also transfers successfully to an entirely different dataset (with different tasks, environment, camera, robot and people). The trained model and the software are available to the HRI community, at https://github.com/LCAS/engagement_detector, as a tool to measure engagement in a variety of settings.
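The pipeline this abstract describes (per-frame visual features → CNN score → temporal model → one scalar engagement value per frame) can be illustrated with a toy sketch. The linear scoring function and exponential smoothing below are simplified stand-ins for the actual CNN and LSTM, and every name and weight is illustrative, not taken from the released engagement_detector tool:

```python
import math

def frame_score(features, weights, bias=0.0):
    """Toy stand-in for the CNN: map one frame's feature vector to a raw score."""
    return sum(w * f for w, f in zip(weights, features)) + bias

def engagement_trace(frames, weights, alpha=0.3):
    """Toy stand-in for the LSTM: exponentially smooth the per-frame scores over
    time, squashing each smoothed value into [0, 1] with a sigmoid so the output
    is a continuous scalar engagement trace."""
    state, trace = 0.0, []
    for feats in frames:
        state = (1 - alpha) * state + alpha * frame_score(feats, weights)
        trace.append(1.0 / (1.0 + math.exp(-state)))
    return trace

# Three frames of 2-D features; the (made-up) weights favour the first feature.
trace = engagement_trace([[1.0, 0.0], [1.0, 0.2], [0.0, 0.0]], weights=[2.0, 1.0])
```

The smoothing state mimics what a recurrent network provides: the estimate rises while engaged-looking frames keep arriving and decays once they stop, rather than jumping frame to frame.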

    Fully Automatic Analysis of Engagement and Its Relationship to Personality in Human-Robot Interactions

    Engagement is crucial to designing intelligent systems that can adapt to the characteristics of their users. This paper focuses on automatic analysis and classification of engagement based on humans' and the robot's personality profiles in a triadic human-human-robot interaction setting. More explicitly, we present a study that involves two participants interacting with a humanoid robot, and investigate how participants' personalities can be used together with the robot's personality to predict the engagement state of each participant. The fully automatic system is first trained to predict the Big Five personality traits of each participant by extracting individual and interpersonal features from their nonverbal behavioural cues. Second, the output of the personality prediction system is used as an input to the engagement classification system. Third, we focus on the concept of "group engagement", which we define as the collective engagement of the participants with the robot, and analyse the impact of similar and dissimilar personalities on engagement classification.
Our experimental results show that (i) using the automatically predicted personality labels for engagement classification yields an F-measure on par with using the manually annotated personality labels, demonstrating the effectiveness of the proposed automatic personality prediction module; (ii) using the individual and interpersonal features without personality information is not sufficient for engagement classification, whereas incorporating the participants' and robot's personalities with individual/interpersonal features increases engagement classification performance; and (iii) the best classification performance is achieved when the participants and the robot are extroverted, while the worst results are obtained when all are introverted. This work was performed within the Labex SMART project (ANR-11-LABX-65) supported by French state funds managed by the ANR within the Investissements d'Avenir programme under reference ANR-11-IDEX-0004-02. The work of Oya Celiktutan and Hatice Gunes is also funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref.: EP/L00416X/1). This is the author accepted manuscript. The final version is available from the Institute of Electrical and Electronics Engineers via http://dx.doi.org/10.1109/ACCESS.2016.261452
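The second stage described above, feeding personality traits together with individual/interpersonal features into an engagement classifier, can be sketched in miniature. The nearest-centroid classifier, the single "extraversion" number standing in for a full Big Five vector, and all values below are illustrative simplifications, not the authors' actual features or model:

```python
def make_features(p1_traits, p2_traits, robot_traits, interpersonal):
    """Concatenate trait vectors of both participants and the robot with
    interpersonal cues into one feature vector (feature design is illustrative)."""
    return list(p1_traits) + list(p2_traits) + list(robot_traits) + list(interpersonal)

def nearest_centroid_fit(X, y):
    """Compute one centroid (feature-wise mean) per engagement label."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        s = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def nearest_centroid_predict(centroids, x):
    """Assign x the label of the closest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], x))

# One toy sample per class: [p1 extraversion], [p2 extraversion],
# [robot extraversion], [interpersonal cues].
X = [
    make_features([0.9], [0.8], [0.7], [0.9, 0.8]),
    make_features([0.2], [0.1], [0.7], [0.3, 0.2]),
]
y = ["engaged", "disengaged"]
centroids = nearest_centroid_fit(X, y)
pred = nearest_centroid_predict(centroids, make_features([0.8], [0.9], [0.7], [0.8, 0.7]))
```

The point of the sketch is result (ii) above: the personality entries sit in the same feature vector as the interpersonal cues, so the classifier can exploit both jointly.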

    Automatic Context-Driven Inference of Engagement in HMI: A Survey

    An integral part of seamless human-human communication is engagement, the process by which two or more participants establish, maintain, and end their perceived connection. Therefore, to develop successful human-centered human-machine interaction applications, automatic engagement inference is one of the tasks required to achieve engaging interactions between humans and machines and to make machines attuned to their users, thereby enhancing user satisfaction and technology acceptance. Several factors contribute to engagement state inference, including the interaction context and the interactants' behaviours and identities. Indeed, engagement is a multi-faceted and multi-modal construct that requires high accuracy in the analysis and interpretation of contextual, verbal and non-verbal cues. Thus, the development of an automated and intelligent system that accomplishes this task has so far proven challenging. This paper presents a comprehensive survey of previous work on engagement inference for human-machine interaction, covering interdisciplinary definitions, engagement components and factors, publicly available datasets, ground truth assessment, and the most commonly used features and methods, serving as a guide for the development of future human-machine interaction interfaces with reliable context-aware engagement inference capability. An in-depth review across embodied and disembodied interaction modes, and an emphasis on the interaction context in which engagement perception modules are integrated, set the presented survey apart from existing surveys.

    Interpreter-Assisted Investigative Interviews: Needs, Challenges and Quality

    This thesis researches the yet-to-be-fully-explored dynamic of interpretation service needs and interpretation service optimisation (i.e., interview and interpretation quality assurance). The research began with an exploratory analysis of factors affecting interpreter-assisted investigative interviews, and then examined experimentally the factors affecting the intelligibility and informativeness of translation. Recognition of the importance of the context in which interpretation currently takes place led to the addition of a wider range of approaches. This informed a broad review of relevant literature. The empirical work is presented as follows. First is a Study Space Analysis (SSA) of policy-relevant research, which provides a base for determining the adequacy and depth of the existing body of knowledge. The results show that interpretation service needs and planning appear mildly or infrequently researched, and there exist few or no studies investigating the effects of police diversity on interpretation service needs. Studies investigating the effects of cognitive load, language and gender on interpreting accuracy are also sparse. Finally, this study shows that the literature focuses on interpretation as a service for offenders rather than for victims and witnesses. The implications of this for social harmony are discussed. The second study concerns the optics of a police service that does not resemble the population policed. In a convenience sample of 104 ethnic minority individuals, descriptive and thematic analysis indicates that police diversity tends to improve trust and to affect the need for interpretation services. These findings bring to the fore the benign potency of language education. The third study explores the opinions of 66 International Law Enforcement Agency (ILEA) investigators and 40 interpreters on factors affecting investigative interviews involving the assistance of interpreters.
Using descriptive and thematic analysis, it was shown that investigators only occasionally plan with interpreters. This infrequent planning practice is attributed to investigators' perception of the interpreter's role and to the individualistic culture of investigators. Additionally, interpreter presence is observed to affect rapport building, and the effect of interruption is manageable with the right combination of skills and experience. The fourth study uses a complex design to determine factors relevant to the intelligibility and informativeness of translations of witness accounts from a sample of audio depictions of non-violent offences. The study employed 240 aggregated ratings from 4 volunteer assessors of 60 textual interpretations of 15-minute, 10-minute and 5-minute witness accounts, using Tiselius's (2009) 6-point intelligibility and informativeness scale. Log-linear analysis revealed a surprising lack of consensus across assessors in assessments of intelligibility and informativeness, but judgements of informativeness relative to intelligibility within individual assessors appear coherent and consistent. Length of audio was not associated with intelligibility or informativeness. A small exploratory follow-up to the study investigated what seemed to make translations unintelligible. The next study mapped the opinions of a sample of 51 expert interpreters with a range of experience about the perception of their work and its challenges. The results are consistent with the existing literature and with the studies in this thesis, except for opinions on the role of police diversity, which is found to increase both trust and interpretation service needs.

    Integrating Socially Assistive Robots into Language Tutoring Systems. A Computational Model for Scaffolding Young Children's Foreign Language Learning

    Schodde T. Integrating Socially Assistive Robots into Language Tutoring Systems. A Computational Model for Scaffolding Young Children's Foreign Language Learning. Bielefeld: Universität Bielefeld; 2019. Language education is a global and important issue nowadays, especially for young children, since their later educational success builds on it. But learning a language is a complex task that is known to work best in social interaction, and thus personalized sessions tailored to the individual knowledge and needs of each child are required for teachers to support them optimally. However, this is often costly in terms of time and personnel resources, which is one reason why research in the past decades has investigated the benefits of Intelligent Tutoring Systems (ITSs). But although ITSs can provide individualized one-on-one tutoring interactions, they often lack social support. This dissertation provides new insights into how a Socially Assistive Robot (SAR) can be employed as part of an ITS, forming a so-called "Socially Assistive Robot Tutoring System" (SARTS), to provide social support as well as to personalize and scaffold foreign language learning for young children aged 4-6 years. As the basis for the SARTS, a novel approach called A-BKT is presented, which autonomously adapts the tutoring interaction to the children's individual knowledge and needs. The corresponding evaluation studies show that the A-BKT model can significantly increase students' learning gains and maintain higher engagement during the tutoring interaction. This is partly due to the model's ability to simulate the influence of potential actions on all dimensions of the learning interaction, i.e., the children's learning progress (cognitive learning), affective state, engagement (affective learning) and believed knowledge acquisition (perceived learning).
This is particularly important since all dimensions are strongly interconnected and influence each other; for example, low engagement can cause poor learning results even though the learner is already quite proficient. However, this also yields the necessity not only to focus on the learner's cognitive learning but to support all dimensions equally with appropriate scaffolding actions. Therefore, an extensive literature review, observational video recordings and expert interviews were conducted to find appropriate actions applicable by a SARTS to support each learning dimension. The subsequent evaluation study confirms that the developed scaffolding techniques are able to support young children's learning process, either by re-engaging them or by providing transparency to support their perception of the learning process and to reduce uncertainty. Finally, based on educated guesses derived from the previous studies, all identified strategies are integrated into the A-BKT model. The resulting model, called ProTM, is evaluated by simulating different learner types, which highlights its ability to autonomously adapt the tutoring interactions based on the learner's answers and provided disengagement cues. In summary, this dissertation yields new insights into the field of SARTS for providing personalized foreign language learning interactions for young children, while also raising important new questions to be studied in the future.
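The name A-BKT suggests an adaptive extension of Bayesian Knowledge Tracing (BKT). The standard BKT update that such a model builds on can be sketched as follows; the slip/guess/transition parameter values are illustrative, and the affective and engagement dimensions that A-BKT adds are omitted:

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """One knowledge-tracing step: Bayesian posterior over 'the child knows
    the item' given the observed answer, then the learning transition."""
    if correct:
        num = p_know * (1 - p_slip)                 # knew it and didn't slip
        den = num + (1 - p_know) * p_guess          # ... or guessed correctly
    else:
        num = p_know * p_slip                       # knew it but slipped
        den = num + (1 - p_know) * (1 - p_guess)    # ... or genuinely didn't know
    posterior = num / den
    # Chance of learning the item during this step, even after a wrong answer.
    return posterior + (1 - posterior) * p_transit

# Belief after two correct answers and one mistake, starting from 0.3.
p = 0.3
for answer in [True, True, False]:
    p = bkt_update(p, answer)
```

A tutoring system can then pick the next exercise (or, in A-BKT's case, also a scaffolding action) based on which items have the lowest current mastery estimate.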

    Metrics to Evaluate Human Teaching Engagement From a Robot's Point of View

    This thesis was motivated by a study of how robots can be taught by humans, with an emphasis on allowing persons without programming skills to teach robots. The focus of this thesis was to investigate what criteria could or should be used by a robot to evaluate whether a human teacher is (or potentially could be) a good teacher in robot learning by demonstration; in effect, choosing the teacher that can maximize the benefit to the robot using learning by imitation/demonstration. The study approached this topic by taking a technology snapshot in time to see whether a representative example of research laboratory robot technology is capable of assessing teaching quality. With this snapshot, this study evaluated how humans observe teaching quality, to attempt to establish measurement metrics that can be transferred as rules or algorithms that are beneficial from a robot's point of view. To evaluate teaching quality, the study looked at the teacher-student relationship from a human-human interaction perspective. Two factors were considered important in defining a good teacher: engagement and immediacy. The study reviewed further literature on the detailed elements of engagement and immediacy, and also examined physical effort as a possible metric for measuring the level of engagement of the teachers. An investigatory experiment was conducted to evaluate which modality participants prefer to employ in teaching a robot, given that the robot can be taught using voice, gesture demonstration, or physical manipulation. The findings from this experiment suggested that the participants appeared to have no preference in terms of human effort for completing the task. However, there was a significant difference in human enjoyment preferences of input modality and a marginal difference in the robot's perceived ability to imitate.
A main experiment was conducted to study the detailed elements that might be used by a robot in identifying a "good" teacher. The main experiment was conducted in two sub-experiments: the first part recorded the teacher's activities, and the second part analysed how humans evaluate the perception of engagement when assessing another human teaching a robot. The results from the main experiment suggested that in human teaching of a robot (human-robot interaction), humans (the evaluators) also look for immediacy cues that occur in human-human interaction when evaluating engagement.

    ENGAGEMENT RECOGNITION WITHIN ROBOT-ASSISTED AUTISM THERAPY

    Autism is a neurodevelopmental condition typically diagnosed in early childhood, characterized by challenges in using language, understanding abstract concepts, communicating effectively, and building social relationships. The utilization of social robots in autism therapy represents a significant area of research, and an increasing number of studies explore the use of social robots as mediators between therapists and children diagnosed with autism. Assessing a child's engagement can enhance the effectiveness of robot-assisted interventions while also providing an objective metric for later analysis. The thesis begins with a comprehensive multiple-session study involving 11 children diagnosed with autism and Attention Deficit Hyperactivity Disorder (ADHD). This study employs multi-purpose robot activities designed to target various aspects of autism. The study yields both quantitative and qualitative findings based on four behavioural measures obtained from video recordings of the sessions. Statistical analysis reveals that adaptive therapy provides a longer engagement duration compared to non-adaptive therapy sessions. Engagement is a key element in evaluating autism therapy sessions, being necessary for acquiring knowledge and practising the new skills needed for social and cognitive development. With the aim of creating an engagement recognition model, this research work also involves the manual labelling of the collected videos to generate the QAMQOR dataset. This dataset comprises 194 therapy sessions, spanning over 48 hours of video recordings, and includes demographic information for 34 children diagnosed with ASD. It is important to note that videos of 23 children with autism were collected from previous records. The QAMQOR dataset was evaluated using standard machine learning and deep learning approaches.
However, the development of an accurate engagement recognition model remains challenging due to the unique personal characteristics of each individual with autism. To address this challenge and improve recognition accuracy, this PhD work also explores a data-driven model using transfer learning techniques. Our study contributes to addressing the challenges faced by machine learning in recognizing engagement among children with autism, such as diverse engagement activities, multimodal raw data, and the resources and time required for data collection. This research work contributes to the growing field of social robots in autism therapy by illuminating the importance of adaptive therapy and providing valuable insights into engagement recognition. The findings serve as a foundation for further advancements in personalized and effective robot-assisted interventions for individuals with autism.
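The transfer-learning idea mentioned above, reusing representations learned from other children and retraining only a small part of the model for a new setting, can be sketched in miniature. The frozen two-feature "extractor" and the gradient-descent head below are illustrative stand-ins under stated assumptions, not the thesis's actual networks or data:

```python
def extract(x):
    """Frozen 'pretrained' feature extractor (fixed, illustrative): maps a raw
    input to two features. In real transfer learning these would be the frozen
    layers of a network trained on the source children."""
    return [x, x * x]

def train_head(data, lr=0.05, epochs=500):
    """Fit only the linear head on top of the frozen features by stochastic
    gradient descent on squared error, mimicking the fine-tuning step in which
    the extractor's weights stay unchanged."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            feats = extract(x)
            pred = sum(wi * fi for wi, fi in zip(w, feats))
            err = pred - y
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
    return w

# Toy 'target domain': y = 2x + x^2, exactly representable with the frozen
# features, so only the head needs training.
data = [(x / 4.0, 2 * (x / 4.0) + (x / 4.0) ** 2) for x in range(-4, 5)]
w = train_head(data)
```

Because only the two head weights are updated, the "new individual" needs far less data than training the whole model from scratch, which is the practical motivation for transfer learning in this setting.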