
    A Comprehensive Review of Data-Driven Co-Speech Gesture Generation

    Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology for film, games, virtual social spaces, and interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. Gesture generation has seen surging interest recently, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text, and non-linguistic input. We also chronicle the evolution of the related training datasets in terms of size, diversity, motion quality, and collection method. Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech, in interaction with other speakers, and in the environment; performing gesture evaluation; and integrating gesture synthesis into applications. We highlight recent approaches to tackling these key challenges, as well as their limitations, and point toward areas of future development.
    Comment: Accepted for EUROGRAPHICS 202
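    The review above surveys models that map speech audio to gesture motion. As a purely illustrative sketch of the audio-driven setting (not any system from the review; all weights, dimensions, and the linear map are invented), an autoregressive generator predicts each pose frame from the current audio features and the previously generated pose:

```python
import numpy as np

def generate_gestures(audio_features, seed_pose, w_audio, w_pose):
    """Autoregressive sketch: each pose frame depends on the current
    audio feature vector and the previously generated pose."""
    poses = [seed_pose]
    for a in audio_features:
        # a linear map plus tanh stands in for a learned deep generative model
        poses.append(np.tanh(w_audio @ a + w_pose @ poses[-1]))
    return np.stack(poses[1:])

rng = np.random.default_rng(0)
audio = rng.normal(size=(50, 16))      # 50 frames of 16-dim audio features
w_a = 0.1 * rng.normal(size=(8, 16))   # audio-to-pose weights (invented)
w_p = 0.1 * rng.normal(size=(8, 8))    # pose-to-pose recurrence weights
gestures = generate_gestures(audio, np.zeros(8), w_a, w_p)
print(gestures.shape)                  # one 8-dim pose vector per audio frame
```

    Real systems replace the linear map with deep architectures (RNNs, transformers, diffusion models) trained on the motion datasets the review chronicles.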

    Affective reactions towards socially interactive agents and their computational modeling

    Over the past 30 years, researchers have studied human reactions towards machines applying the Computers Are Social Actors paradigm, which contrasts reactions towards computers with reactions towards humans. The last 30 years have also seen improvements in technology that have led to tremendous changes in computer interfaces and the development of Socially Interactive Agents. This raises the question of how humans react to Socially Interactive Agents. To answer this question, knowledge from several disciplines is required, which is why this interdisciplinary dissertation is positioned within psychology and computer science. It aims to investigate affective reactions to Socially Interactive Agents and how these can be modeled computationally. After a general introduction and background, this thesis therefore first provides an overview of the Socially Interactive Agent system used in this work. Second, it presents a study comparing a human and a virtual job interviewer, which shows that both interviewers induce shame in participants to the same extent. Third, it reports on a study investigating obedience towards Socially Interactive Agents. The results indicate that participants obey human and virtual instructors in similar ways. Furthermore, both types of instructors evoke feelings of stress and shame to the same extent. Fourth, a stress management training using biofeedback with a Socially Interactive Agent is presented. The study shows that a virtual trainer can teach coping techniques for emotionally challenging social situations. Fifth, it introduces MARSSI, a computational model of user affect. The evaluation of the model shows that it is possible to relate sequences of social signals to affective reactions, taking into account emotion regulation processes. Finally, the Deep method is proposed as a starting point for deeper computational modeling of internal emotions.
    The method combines social signals, verbalized introspection information, context information, and theory-driven knowledge. An exemplary application to the emotion shame and a schematic dynamic Bayesian network for its modeling are illustrated. Overall, this thesis provides evidence that human reactions towards Socially Interactive Agents are very similar to those towards humans, and that it is possible to model these reactions computationally.
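    MARSSI, as described above, relates sequences of social signals to affective reactions. A dynamic Bayesian network of the kind sketched for the emotion shame can be illustrated with a toy two-state forward filter; the states, signals, and all probabilities below are invented for illustration and are not taken from MARSSI:

```python
# Minimal forward-filtering sketch for a two-state affect model
# ("shame" vs. "neutral"), in the spirit of a dynamic Bayesian network.
# All probability values are illustrative assumptions.

TRANSITION = {            # P(state_t | state_{t-1})
    "neutral": {"neutral": 0.8, "shame": 0.2},
    "shame":   {"neutral": 0.3, "shame": 0.7},
}
EMISSION = {              # P(observed social signal | state)
    "neutral": {"smile": 0.5, "gaze_down": 0.1, "silence": 0.4},
    "shame":   {"smile": 0.1, "gaze_down": 0.6, "silence": 0.3},
}

def filter_affect(signals, prior=None):
    """Update a belief over affective states from a signal sequence."""
    belief = dict(prior or {"neutral": 0.5, "shame": 0.5})
    for sig in signals:
        # predict: propagate the belief through the transition model
        pred = {s: sum(belief[p] * TRANSITION[p][s] for p in belief)
                for s in belief}
        # update: weight by the likelihood of the observed signal
        upd = {s: pred[s] * EMISSION[s][sig] for s in pred}
        z = sum(upd.values())
        belief = {s: v / z for s, v in upd.items()}
    return belief

b = filter_affect(["gaze_down", "silence", "gaze_down"])
print(b["shame"])
```

    Repeated shame-typical signals shift the belief towards "shame"; a full model would add the regulation processes and context variables the thesis describes.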

    Towards Video Transformers for Automatic Human Analysis

    With the aim of creating artificial systems capable of mirroring the nuanced understanding and interpretative powers inherent to human cognition, this thesis explores the intersection between human analysis and Video Transformers. The objective is to harness the potential of Transformers, a promising architectural paradigm, to comprehend the intricacies of human interaction, thus paving the way for the development of empathetic and context-aware intelligent systems. To do so, we cover the whole Computer Vision pipeline, from data gathering through model design and experimentation to an in-depth analysis of recent developments. Central to this study is the creation of UDIVA, an expansive multi-modal, multi-view dataset capturing dyadic face-to-face human interactions. Comprising 147 participants across 188 sessions, UDIVA integrates audio-visual recordings, heart-rate measurements, personality assessments, socio-demographic metadata, and conversational transcripts, establishing itself as the largest dataset for dyadic human interaction analysis to date. This dataset provides a rich context for probing the capabilities of Transformers within complex environments. To validate its utility, as well as to elucidate Transformers' ability to assimilate diverse contextual cues, we focus on the challenge of personality regression within interaction scenarios. We first adapt an existing Video Transformer to handle multiple contextual sources and conduct rigorous experimentation. We empirically observe a progressive enhancement in model performance as more context is added, reinforcing the potential of Transformers to decode intricate human dynamics. Building upon these findings, the Dyadformer emerges as a novel architecture, adept at long-range modeling of dyadic interactions.
    By jointly modeling both participants in the interaction, as well as embedding multi-modal integration into the model itself, the Dyadformer surpasses the baseline and other concurrent approaches, underscoring Transformers' aptitude in deciphering multifaceted, noisy, and challenging tasks such as the analysis of human personality in interaction. Nonetheless, these experiments unveil the ubiquitous challenges of training Transformers, particularly in managing overfitting due to their demand for extensive datasets. Consequently, we conclude this thesis with a comprehensive investigation into Video Transformers, analyzing topics ranging from architectural designs and training strategies to input embedding and tokenization, traversing multi-modality and specific applications. Across these, we highlight trends that optimally harness spatio-temporal representations to handle video redundancy and high dimensionality. A culminating performance comparison is conducted in the realm of video action classification, spotlighting strategies that exhibit superior efficacy, even compared to traditional CNN-based methods.
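    The core operation a Video Transformer applies to a sequence of frame tokens is scaled dot-product self-attention. The numpy sketch below is a generic single-head illustration of that operation (shapes and weights are invented and unrelated to the actual UDIVA or Dyadformer models):

```python
import numpy as np

def self_attention(tokens, wq, wk, wv):
    """Single-head scaled dot-product self-attention over frame tokens."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # scaled dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over frames
    return weights @ v   # each frame token mixes information from all frames

rng = np.random.default_rng(0)
frames = rng.normal(size=(16, 32))   # 16 frame tokens, 32-dim embeddings
wq, wk, wv = (0.1 * rng.normal(size=(32, 32)) for _ in range(3))
out = self_attention(frames, wq, wk, wv)
print(out.shape)                     # same token count and dimension
```

    Because every token attends to every other, cost grows quadratically with sequence length, which is why the thesis highlights strategies for handling video redundancy and high dimensionality.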

    ProDial – an annotated proactive dialogue act corpus for conversational assistants using crowdsourcing

    Proactive behaviour is an integral interaction concept in both human-human and human-computer cooperation. However, modelling proactive systems and appropriate interaction strategies remains an open question. In this work, a parameterised and annotated dialogue corpus has been created. The corpus is based on human interactions with an autonomous agent embedded in a serious game setting. For modelling proactive dialogue behaviour, the agent was capable of selecting from four different proactive actions (None, Notification, Suggestion, Intervention) in order to serve as the user's personal advisor in a sequential planning task. Data was collected online using crowdsourcing (308 participants), resulting in a total of 3696 system-user exchanges. Data was annotated with objective features as well as subjectively self-reported features to capture the interplay between proactive behaviour and situational as well as user-dependent characteristics. The corpus is intended for building a user model for developing trustworthy proactive interaction strategies.
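    The four proactive act types named in the abstract lend themselves to a simple escalation policy. The sketch below is purely illustrative: the thresholds and input features are invented, not ProDial's annotation scheme or any model trained on the corpus:

```python
# Hypothetical rule-based policy over the corpus's four proactive acts.
# Inputs: an estimated likelihood the user is about to make a planning
# error, and an estimated user trust level (both in [0, 1], invented).

PROACTIVE_ACTS = ["None", "Notification", "Suggestion", "Intervention"]

def select_proactive_act(error_likelihood, user_trust):
    """Escalate intrusiveness with estimated need for help; low trust
    caps escalation below full intervention."""
    if error_likelihood < 0.25:
        return "None"
    if error_likelihood < 0.5:
        return "Notification"
    if error_likelihood < 0.75 or user_trust < 0.5:
        return "Suggestion"
    return "Intervention"

print(select_proactive_act(0.9, 0.8))   # high risk, high trust
print(select_proactive_act(0.1, 0.9))   # low risk: stay silent
```

    A user model learned from the corpus would replace these hand-set thresholds with predictions conditioned on situational and user-dependent features.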

    Internet and Biometric Web Based Business Management Decision Support

    MICROBE MOOC material prepared under IO1/A5, "Development of the MICROBE personalized MOOCs content and teaching materials". Prepared by: A. Kaklauskas, A. Banaitis, I. Ubarte (Vilnius Gediminas Technical University, Lithuania). Project No: 2020-1-LT01-KA203-07810

    Real-time generation and adaptation of social companion robot behaviors

    Social robots will be part of our future homes. They will assist us in everyday tasks, entertain us, and provide helpful advice. However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate. An essential skill of every social robot is verbal and non-verbal communication. In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine. Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors. In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot. However, robots do not have human intuition - they must be equipped with the corresponding algorithmic solutions to these problems. This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences. Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence. The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning. Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning. It provides a higher-level view from the system designer's perspective and guidance from the start to the end. 
    It illustrates the process of modeling, simulating, and evaluating such adaptation processes. Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness. The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes. They are evaluated in the lab and in in-situ studies. These approaches address typical activities in domestic environments, focusing on the robot's expression of personality, persona, politeness, and humor. Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
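    The reinforcement-learning adaptation described above can be illustrated, in heavily simplified form, as a multi-armed bandit that shifts towards the behavior variant earning the best user feedback. Everything concrete here (the style arms, reward values, hyperparameters, the deterministic feedback function) is an invented stand-in, not the thesis's actual setup:

```python
import random

def adapt_behavior(feedback_fn, arms, steps=200, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: try behavior variants (arms), keep a running
    mean of the feedback each earns, and mostly exploit the best one."""
    rng = random.Random(seed)
    counts = {a: 1 for a in arms}
    values = {a: feedback_fn(a) for a in arms}   # try each variant once
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.choice(arms)               # explore
        else:
            arm = max(arms, key=values.get)      # exploit current best
        reward = feedback_fn(arm)
        counts[arm] += 1
        # incremental mean update of the arm's estimated value
        values[arm] += (reward - values[arm]) / counts[arm]
    return max(arms, key=values.get)

def simulated_feedback(style):
    # deterministic stand-in for implicit user feedback signals
    return {"formal": 0.3, "neutral": 0.5, "humorous": 0.8}[style]

best = adapt_behavior(simulated_feedback, ["formal", "neutral", "humorous"])
print(best)   # converges on the style the simulated user rewards most
```

    A real deployment would derive the reward from the social signals and explicit or implicit feedback the thesis integrates, and would typically use contextual state rather than a stateless bandit.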

    Analytics and Intuition in the Process of Selecting Talent

    In management, decisions are expected to be based on rational analytics rather than intuition. But intuition, as a human evolutionary achievement, offers wisdom that, despite all the advances in rational analytics and AI, should be used constructively when recruiting and attracting personnel. Integrating these inner experiential competencies with rational-analytical procedures leads to smart recruiting decisions.

    Building Embodied Conversational Agents: Observations on human nonverbal behaviour as a resource for the development of artificial characters

    "Wow this is so cool!" This is what I most probably yelled, back in the 90s, when my first computer program on our MSX computer turned out to do exactly what I wanted it to do. The program contained the following instruction: COLOR 10 (1.1). After hitting enter, it would change the screen color from light blue to dark yellow. A few years after that experience, Microsoft Windows was introduced. Windows came with an intuitive graphical user interface designed to allow all people, including those who would not consider themselves experienced computer users, to interact with the computer. This was a major step forward in human-computer interaction, as from that point onward no complex programming skills were required to perform such actions as adapting the screen color. Changing the background was just a matter of pointing the mouse at the desired color on a color palette. "Wow this is so cool!" This is what I shouted, again, 20 years later. This time my new smartphone successfully skipped to the next song on Spotify because I literally told my smartphone, with my voice, to do so. Being able to operate your smartphone with natural language through voice control can be extremely handy, for instance when listening to music while showering. Again, the option to control a computer with voice instructions turned out to be a significant improvement in human-computer interaction. From then on, computers could be instructed without a screen, mouse or keyboard, simply by telling the machine what to do. In other words, I have personally witnessed how, within only a few decades, the way people interact with computers has changed drastically, starting as a rather technical and abstract enterprise and becoming something both natural and intuitive that did not require any advanced computer background.
    Accordingly, while computers used to be machines that could only be operated by technically-oriented individuals, they have gradually changed into devices that are part of many people's households, just as much as a television, a vacuum cleaner or a microwave oven. The introduction of voice control is a significant feature of the newer generation of interfaces in the sense that these have become more "anthropomorphic" and try to mimic the way people interact in daily life, where the voice is indeed a universal device that humans exploit in their exchanges with others. The question then arises whether it would be possible to go even one step further, where people, like in science-fiction movies, interact with avatars or humanoid robots, and can have a proper conversation with a computer-simulated human that is indistinguishable from a real human. An interaction with a human-like representation of a computer that behaves, talks and reacts like a real person would imply that the computer is able not only to produce and understand messages transmitted auditorily through the voice, but also to rely on the perception and generation of different forms of body language, such as facial expressions, gestures or body posture. At the time of writing, developments of this next step in human-computer interaction are in full swing, but such interactions are still rather constrained when compared to the way humans have their exchanges with other humans. It is interesting to reflect on what such future human-machine interactions may look like. When we consider other products that have been created in history, it is sometimes striking to see that some of them have been inspired by things observable in our environment, yet at the same time do not have to be exact copies of those phenomena. For instance, an airplane has wings just as birds do, yet the wings of an airplane do not make the typical movements a bird would produce to fly.
    Moreover, an airplane has wheels, whereas a bird has legs. At the same time, the airplane has made it possible for humans to cover long distances in a fast and smooth manner that was unthinkable before it was invented. The example of the airplane shows how new technologies can have "unnatural" properties, but can nonetheless be very beneficial and impactful for human beings. This dissertation centers on the practical question of how virtual humans can be programmed to act more human-like. The four studies presented in this dissertation all share the underlying question of how parts of human behavior can be captured, such that computers can use them to become more human-like. Each study differs in method, perspective and specific questions, but all aim to gain insights and directions that help push forward the development of human-like computer behavior and investigate (the simulation of) human conversational behavior. The rest of this introductory chapter gives a general overview of virtual humans (also known as embodied conversational agents), their potential uses and the engineering challenges, followed by an overview of the four studies.

    Social robots as communication partners to support emotional well-being

    Interpersonal communication behaviors play a significant role in maintaining emotional well-being. Self-disclosure is one such behavior that can have a meaningful impact on our emotional state. When we engage in self-disclosure, we can receive and provide support, improve our mood, and regulate our emotions. It also creates a comfortable space to share our feelings and emotions, which can have a positive impact on our overall mental and physical health. Social robots are gradually being introduced in a range of social and health settings. These autonomous machines can take on various forms and shapes and interact with humans using social behaviors and rules. They are being studied and introduced in psychosocial health interventions, including mental health and rehabilitation settings, to provide much-needed physical and social support to individuals. In my doctoral thesis, I aimed to explore how humans self-disclose and express their emotions to social robots and how this behavior can affect our perception of these agents. By studying speech-based communication interactions between humans and social robots, I wanted to investigate how social robots can support human emotional well-being. While social robots show great promise in offering social support, there are still many questions to consider before deploying them in actual care contexts. It is important to carefully evaluate their utility and scope in interpersonal communication settings, especially since social robots do not yet offer the same opportunities as humans for social interactions. My dissertation consists of three empirical chapters that investigate the underlying psychological mechanisms of perception and behaviour within human–robot communication and their potential deployment as interventions for emotional well-being. Chapter 1 offers a comprehensive introduction to the topic of emotional well-being and self-disclosure from a psychological perspective.
    I begin by providing an overview of the existing literature and theory in this field. Next, I delve into the social perception of social robots, presenting a theoretical framework to help readers understand how people view these machines. To illustrate this, I review some of the latest studies on social robots in care settings, as well as those exploring how robots can encourage people to self-disclose more about themselves. Finally, I explore the key concepts of self-disclosure, including how it is defined, operationalized, and measured in experimental psychology and human–robot interaction research. In my first empirical chapter, Chapter 2, I explore how a social robot's embodiment influences people's disclosures in measurable terms, and how these disclosures differ from disclosures made to humans and disembodied agents. Chapter 3 studies how prolonged and intensive long-term interactions with a social robot affect people's self-disclosure behavior towards the robot and perceptions of the robot, and how they affect factors related to well-being. Additionally, I examine the role of the interaction's discussion theme. In Chapter 4, the final empirical chapter, I test a long-term and intensive social robot intervention with informal caregivers, people living in considerably difficult life situations. I investigate the potential of employing a social robot for eliciting self-disclosure among informal caregivers over time, supporting their emotional well-being, and implicitly encouraging them to adopt emotion regulation skills. In the final discussion chapter, Chapter 5, I summarise the current findings and discuss the contributions, implications and limitations of my work. I reflect on the contribution and challenges of this research approach and provide some future directions for researchers in the relevant fields.
    The results of these studies provide meaningful evidence on user experience, acceptance, and trust in social robots across different settings, including care, and demonstrate the unique psychological nature of these dynamic social interactions with social robots. Overall, this thesis contributes to the development of social robots that can support emotional well-being through self-disclosure interactions and provides insights into how social robots can be used as mental health interventions for individuals coping with emotional distress.