
    Sensing, interpreting, and anticipating human social behaviour in the real world

    Low-level nonverbal social signals like glances, utterances, facial expressions, and body language are central to human communicative situations and have been shown to be connected to important high-level constructs such as emotions, turn-taking, rapport, or leadership. A prerequisite for the creation of social machines that are able to support humans in, e.g., education, psychotherapy, or human resources is the ability to automatically sense, interpret, and anticipate human nonverbal behaviour. While promising results have been shown in controlled settings, automatically analysing unconstrained situations, e.g. in daily-life settings, remains challenging. Furthermore, anticipation of nonverbal behaviour in social situations is still largely unexplored. The goal of this thesis is to move closer to the vision of social machines in the real world. It makes fundamental contributions along the three dimensions of sensing, interpreting, and anticipating nonverbal behaviour in social interactions. First, robust recognition of low-level nonverbal behaviour lays the groundwork for all further analysis steps. Advancing human visual behaviour sensing is especially relevant, as the current state of the art is still not satisfactory in many daily-life situations. While many social interactions take place in groups, current methods for unsupervised eye contact detection can only handle dyadic interactions. We propose a novel unsupervised method for multi-person eye contact detection that exploits the connection between gaze and speaking turns. Furthermore, we make use of mobile device engagement to address the problem of calibration drift that occurs in daily-life usage of mobile eye trackers. Second, we improve the interpretation of social signals in terms of higher-level social behaviours. In particular, we propose the first dataset and method for emotion recognition from bodily expressions of freely moving, unaugmented dyads. Furthermore, we are the first to study low-rapport detection in group interactions, and we investigate a cross-dataset evaluation setting for the emergent leadership detection task. Third, human visual behaviour is special because it functions as a social signal and also determines what a person is seeing at a given moment in time. Being able to anticipate human gaze opens up the possibility for machines to share attention with humans more seamlessly, or to intervene in a timely manner if humans are about to overlook important aspects of the environment. We are the first to propose methods for the anticipation of eye contact in dyadic conversations, as well as in the context of mobile device interactions during daily life, thereby paving the way for interfaces that are able to proactively intervene and support interacting humans.
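    The abstract only names the idea behind the unsupervised multi-person eye contact detector, namely that listeners tend to look at the current speaker, so speaking turns can stand in for manual annotations. A minimal sketch of that idea, assuming gaze is available as a per-frame yaw angle and the current speaker is known per frame (the function name, the 1-D gaze representation, and the use of KMeans are illustrative assumptions, not the thesis method), could look like this:

```python
# Minimal sketch: unsupervised "looking at the speaker" detection that uses
# speaking turns as a weak supervisory signal. Not the thesis method; names,
# the 1-D gaze representation, and KMeans are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def detect_eye_contact(gaze_yaw, speaker_ids, n_participants):
    """gaze_yaw: (T,) yaw angles of one listener; speaker_ids: (T,) current speaker."""
    gaze_yaw = np.asarray(gaze_yaw, dtype=float).reshape(-1, 1)
    speaker_ids = np.asarray(speaker_ids)

    # Cluster the listener's gaze directions; ideally one cluster per interlocutor.
    clusters = KMeans(n_clusters=n_participants, n_init=10).fit_predict(gaze_yaw)

    # Label each gaze cluster with the person who speaks most often while the
    # listener's gaze falls into that cluster (listeners tend to watch the speaker).
    cluster_to_person = {}
    for c in range(n_participants):
        mask = clusters == c
        if mask.any():
            persons, counts = np.unique(speaker_ids[mask], return_counts=True)
            cluster_to_person[c] = persons[np.argmax(counts)]

    # A frame counts as eye contact with the speaker when the gaze cluster of
    # that frame is mapped to whoever is speaking in that frame.
    looked_at = np.array([cluster_to_person.get(c, -1) for c in clusters])
    return looked_at == speaker_ids
```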

    Analysis and enhancement of interpersonal coordination using inertial measurement unit solutions

    Today's mobile communication technologies have increased verbal and text-based communication with other humans, social robots, and intelligent virtual assistants. On the other hand, these technologies reduce face-to-face communication. This social issue is critical because decreasing direct interactions may cause difficulty in reading social and environmental cues, thereby impeding the development of overall social skills. Recently, scientists have studied the importance of nonverbal interpersonal activities for social skills by measuring human behavioral and neurophysiological patterns. These interdisciplinary approaches are in line with the European Union research project “Socializing sensorimotor contingencies” (socSMCs), which aims to improve the capability of social robots and to deal appropriately with autism spectrum disorder (ASD). Therefore, modelling and benchmarking healthy humans' social behavior is fundamental for establishing a foundation for research on the emergence and enhancement of interpersonal coordination. In this research project, two different experimental settings were categorized depending on the interactants' distance: distal and proximal settings, between which the structure of the engaged cognitive systems changes and the level of socSMCs differs. As part of the project, this dissertation work referred to this spatial framework. Additionally, single-sensor solutions were developed to reduce the cost and effort of measuring human behaviors, recognizing social behaviors, and enhancing interpersonal coordination. First, algorithms using a head-worn inertial measurement unit (H-IMU) were developed to measure human kinematics, as a baseline for social behaviors. The results confirmed that the H-IMU can measure individual gait parameters by analyzing only head kinematics. Second, as a distal sensorimotor contingency, interpersonal relationship was considered with respect to a dynamic structure of three interacting components: positivity, mutual attentiveness, and coordination. The H-IMUs monitored social behavioral events based on the kinematics of head orientation and oscillation during walking and talking, which can contribute to estimating the level of rapport. Finally, in a new collaborative task with the proposed IMU-based tablet application, the results verified the effects of different forms of auditory-motor feedback on the enhancement of interpersonal coordination in a proximal setting. This dissertation has a strongly interdisciplinary character: technological development in the areas of sensor and software engineering had to be applied to, and solve, issues directly related to predefined behavioral-science questions in two different settings (distal and proximal). The given frame served as a reference in the development of the methods and settings in this dissertation. The proposed IMU-based solutions are also promising for various future applications due to the widespread availability of wearable devices with IMUs.
    European Commission/HORIZON2020-FETPROACT-2014/641321/E
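    The H-IMU gait algorithms themselves are not spelled out in the abstract. Purely as an illustration of how gait parameters can be read off head kinematics alone, the following sketch estimates step events from the vertical acceleration of a head-worn IMU; the band-pass range, peak spacing, and function name are assumptions for illustration, not the dissertation's algorithms:

```python
# Sketch: step count and cadence from vertical head acceleration of a head-worn
# IMU. Illustrative only; thresholds and filter settings are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def gait_from_head_acc(acc_z, fs, min_step_hz=1.0, max_step_hz=3.0):
    """acc_z: vertical acceleration (m/s^2) sampled at fs (Hz)."""
    acc_z = np.asarray(acc_z, dtype=float)

    # Band-pass around typical step frequencies to isolate gait oscillations.
    b, a = butter(2, [min_step_hz, max_step_hz], btype="band", fs=fs)
    filtered = filtfilt(b, a, acc_z - acc_z.mean())

    # Each peak of the filtered signal is taken as one step event.
    peaks, _ = find_peaks(filtered, distance=int(fs / max_step_hz))
    step_times = peaks / fs

    cadence = 0.0
    if len(peaks) > 1:
        cadence = 60.0 * (len(peaks) - 1) / (step_times[-1] - step_times[0])
    return {"n_steps": len(peaks),
            "cadence_spm": cadence,
            "step_intervals_s": np.diff(step_times)}
```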

    Mirroring to Build Trust in Digital Assistants

    We describe experiments towards building a conversational digital assistant that considers the preferred conversational style of the user. In particular, these experiments are designed to measure whether users prefer and trust an assistant whose conversational style matches their own. To this end we conducted a user study where subjects interacted with a digital assistant that responded in a way that either matched their conversational style or did not. Using self-reported personality attributes and subjects' feedback on the interactions, we built models that can reliably predict a user's preferred conversational style.
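    The abstract does not detail the models built from self-reported personality attributes. Purely as an illustration of that kind of setup, the sketch below fits a simple classifier that predicts a preferred conversational style from personality scores; the trait values, the two style labels, and the use of logistic regression are hypothetical stand-ins, not the paper's features or model:

```python
# Sketch: predicting a preferred conversational style from self-reported
# personality attributes. Data and model choice are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical trait scores per subject (one row per subject) ...
personality = np.array([
    [0.8, 0.3, 0.6, 0.7, 0.2],
    [0.2, 0.7, 0.4, 0.3, 0.6],
    [0.9, 0.4, 0.7, 0.8, 0.1],
    [0.1, 0.8, 0.3, 0.2, 0.7],
    [0.7, 0.2, 0.8, 0.6, 0.3],
    [0.3, 0.6, 0.2, 0.4, 0.8],
])
# ... and the conversational style each subject preferred (0 = terse, 1 = chatty).
preferred_style = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression()
scores = cross_val_score(model, personality, preferred_style, cv=3)
print("Cross-validated accuracy:", scores.mean())
```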

    Behavioural attentiveness patterns analysis – detecting distraction behaviours

    The capacity to remain focused on a task can be crucial in some circumstances. In general, this ability is intrinsic to human social interaction and is naturally used in any social context. Nevertheless, some individuals have difficulty staying concentrated on an activity, resulting in a short attention span. Children with Autism Spectrum Disorder (ASD) are a special example of such individuals. ASD is a group of complex developmental disorders of the brain. Individuals affected by this disorder are characterized by repetitive patterns of behaviour, restricted activities or interests, and impairments in social communication. The use of robots has already been shown to encourage the development of the social interaction skills lacking in children with ASD. However, most of these systems are controlled remotely and cannot adapt automatically to the situation, and even those that are more autonomous still cannot perceive whether or not the user is paying attention to the instructions and actions of the robot. Following this trend, this dissertation is part of a research project that has been under development for some years. In this project, the robot ZECA (Zeno Engaging Children with Autism) from Hanson Robotics is used to promote interaction with children with ASD, helping them to recognize emotions and to acquire new knowledge in order to foster social interaction and communication with others. The main purpose of this dissertation is to determine whether the user is distracted during an activity. In the future, the objective is to interface this system with ZECA so that the robot can adapt its behaviour to the user's affective state during an emotion imitation activity. In order to recognize human distraction behaviours and capture the user's attention, several patterns of distraction, as well as systems to automatically detect them, have been developed. One of the most widely used methods for detecting distraction patterns is based on measuring head pose and eye gaze. The present dissertation proposes a system based on a Red Green Blue (RGB) camera that detects distraction patterns (head pose, eye gaze, blink frequency, and the user's position relative to the camera) during an activity, and then classifies the user's state using a machine learning algorithm. Finally, the proposed system is evaluated in a controlled laboratory environment in order to verify whether it is capable of detecting the patterns of distraction. The results of these preliminary tests made it possible to identify some constraints of the system, as well as to validate its adequacy for later use in an intervention setting.
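    The dissertation's classifier and feature extraction are only named, not specified, in the abstract. The sketch below shows one plausible way to classify attentive versus distracted time windows from the listed cues (head pose, eye gaze, blink frequency, and position relative to the camera); the feature layout, example values, and the choice of an SVM are assumptions for illustration only:

```python
# Sketch: attentive vs. distracted classification from the cues named in the
# abstract. Feature layout, values, and classifier are hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One row per time window: [head_yaw, head_pitch, gaze_yaw, gaze_pitch,
#                           blinks_per_min, distance_to_camera_m]
X = np.array([
    [ 2.0, -3.0,  1.0, -2.0, 12.0, 0.6],   # facing the task
    [35.0, -5.0, 30.0, -4.0, 25.0, 0.9],   # looking away
    [ 1.0,  0.0,  0.5,  1.0, 10.0, 0.6],
    [40.0, 10.0, 28.0, 12.0, 30.0, 1.1],
    [ 3.0, -2.0,  2.0, -1.0, 14.0, 0.7],
    [33.0,  8.0, 35.0,  9.0, 27.0, 1.0],
])
y = np.array([0, 1, 0, 1, 0, 1])            # 0 = attentive, 1 = distracted

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[30.0, 6.0, 25.0, 7.0, 26.0, 0.95]]))  # likely distracted
```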

    How to interact with AR head mounted devices in care work? A study comparing handheld touch (hands-on) and gesture (hands-free) interaction

    In this paper, we describe a study investigating augmented reality (AR) to support caregivers. We implemented a system called Care Lenses that supports various care tasks on AR head-mounted devices. For its application, one question was how caregivers could interact with the system while providing care, that is, while using one or both hands for care tasks. Therefore, we compared two mechanisms for interacting with Care Lenses: handheld touch, similar to touchpads and touchscreens, and head gestures. We found that certain head gestures were difficult to apply in practice, but that, apart from this, head gesture support was as usable and useful as handheld touch interaction, even though the study participants were much more familiar with handheld touch control. We conclude that head gestures can be a good means to enable AR support in care, and we provide design considerations to make them more applicable in practice.
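    The paper compares head gestures with handheld touch but does not describe how the gestures are recognized. As a purely illustrative sketch, the class below maps the orientation stream of an AR head-mounted device to two commands by detecting large pitch swings (nod, confirm) and yaw swings (shake, cancel); the window size, thresholds, and command names are hypothetical, not the paper's design:

```python
# Sketch: threshold-based head gesture detection from an orientation stream.
# Window size, angle thresholds, and command names are hypothetical.
from collections import deque

class HeadGestureDetector:
    def __init__(self, window=15, pitch_range_deg=12.0, yaw_range_deg=15.0):
        self.pitch = deque(maxlen=window)   # recent pitch samples (degrees)
        self.yaw = deque(maxlen=window)     # recent yaw samples (degrees)
        self.pitch_range = pitch_range_deg
        self.yaw_range = yaw_range_deg

    def update(self, pitch_deg, yaw_deg):
        """Feed one orientation sample; return 'confirm', 'cancel', or None."""
        self.pitch.append(pitch_deg)
        self.yaw.append(yaw_deg)
        if len(self.pitch) < self.pitch.maxlen:
            return None
        # A nod shows up as a large pitch swing, a shake as a large yaw swing.
        if max(self.pitch) - min(self.pitch) > self.pitch_range:
            self.pitch.clear(); self.yaw.clear()
            return "confirm"
        if max(self.yaw) - min(self.yaw) > self.yaw_range:
            self.pitch.clear(); self.yaw.clear()
            return "cancel"
        return None
```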

    Bridging the gap between emotion and joint action

    Our daily human life is filled with a myriad of joint action moments, be it children playing, adults working together (e.g., team sports), or strangers navigating through a crowd. Joint action brings individuals (and the embodiment of their emotions) together, in space and in time. Yet little is known about how individual emotions propagate through embodied presence in a group, and how joint action changes individual emotion. In fact, the multi-agent component is largely missing from neuroscience-based approaches to emotion, and, conversely, joint action research has not yet found a way to include emotion as one of the key parameters for modelling socio-motor interaction. In this review, we first identify the gap and then stockpile evidence showing strong entanglement between emotion and acting together from various branches of science. We propose an integrative approach to bridge the gap, highlight five research avenues to do so in behavioral neuroscience and digital sciences, and address some of the key challenges in the area faced by modern societies.

    Control of a Movable Robot Head Using Vision-Based Object Tracking

    This paper presents a visual tracking system that supports the movement of a robot head for detecting objects. Object identification and object position estimation were conducted using image-based processing. The robot head moved in four directions, namely to the right, to the left, up, and down. Objects were placed at varying distances and shifted across many positions to assess the accuracy of the tracking process. The targeted objects are detected through several processing steps, namely normalization of the RGB images, thresholding, and object marking. The robot head tracked objects at 40 different object positions with high accuracy. The farther the object was from the robot, the smaller the resulting angle of head movement compared with tracking a closer object shifted by the same distance. However, for the same object distance and the same shift, the level of accuracy was almost the same. The results also showed that tracking an object located below the robot head produced a larger angular error than head movements in the other directions, even for the same object distance and the same object shift.
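    The abstract names the detection pipeline (normalization of RGB images, thresholding, and object marking) without giving its parameters. A minimal sketch along those lines finds a colour-segmented object's centroid and derives pan and tilt directions for the robot head from its offset to the image centre; the colour threshold, the deadband, and the function name are assumptions for illustration, not the paper's implementation:

```python
# Minimal sketch of the named pipeline (RGB normalisation, thresholding, object
# marking) plus head pan/tilt commands. Thresholds and names are illustrative.
import numpy as np

def track_object(frame_rgb, r_min=0.5, deadband_px=20):
    """Return (dx, dy, pan, tilt) for a reddish object, or None if none is found.
    pan/tilt are -1, 0, or +1 and indicate the head movement direction."""
    rgb = frame_rgb.astype(np.float32)
    chroma = rgb / (rgb.sum(axis=2, keepdims=True) + 1e-6)   # normalised RGB
    mask = chroma[:, :, 0] > r_min                           # threshold red share

    ys, xs = np.nonzero(mask)                                # marked object pixels
    if xs.size == 0:
        return None
    cx, cy = xs.mean(), ys.mean()                            # object centroid

    h, w = mask.shape
    dx, dy = cx - w / 2.0, cy - h / 2.0
    # Move the head only when the centroid leaves a small deadband around the
    # image centre: pan follows the horizontal offset, tilt the vertical one.
    pan = 0 if abs(dx) < deadband_px else int(np.sign(dx))
    tilt = 0 if abs(dy) < deadband_px else int(np.sign(dy))
    return dx, dy, pan, tilt
```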