
    The Effect of Social Chatbot Avatar Presentation on User Self-disclosure

    The emergence of artificial intelligence has boosted the development and utilization of chatbots that can satisfy both users' task-oriented needs, such as information search for purchases, and their social needs, such as self-disclosure for rapport-building. While much research has focused on their usage in commercial contexts, little attention has been paid to social chatbots for psychotherapy, where facilitating relationship formation is crucial in chatbot design. Inspired by prevalent chatbot applications and drawing on the literature on visual cues and self-disclosure, this paper aims to 1) explore the effects of different presentations of social chatbot avatars (text, profile, and background) on users' self-disclosure, along with the mediating role of self-awareness, and 2) understand the moderating role of chatbot gaze direction (direct gaze and averted gaze). The proposed studies will contribute theoretically to the literature on human-robot interaction. The findings will also provide substantial practical implications for chatbot design.

    Tell me, what are you most afraid of? Exploring the Effects of Agent Representation on Information Disclosure in Human-Chatbot Interaction

    Self-disclosure counts as a key factor influencing successful health treatment, particularly when it comes to building a functioning patient-therapist connection. To this end, the use of chatbots may be considered a promising puzzle piece that helps foster such information provision. Several studies have shown that people disclose more information when they are interacting with a chatbot than when they are interacting with another human being. If and how the chatbot is embodied, however, seems to play an important role in influencing the extent to which information is disclosed. Here, research shows that people disclose less if the chatbot is embodied with a human avatar than if it has no embodiment at all. Still, little information is available as to whether it is the embodiment with a human face that inhibits disclosure, or whether any type of face will reduce the amount of shared information. The study presented in this paper thus aims to investigate how the type of chatbot embodiment influences self-disclosure in human-chatbot interaction. We conducted a quasi-experimental study in which n=178 participants were asked to interact with one of three settings of a chatbot app. In each setting, the humanness of the chatbot embodiment differed (i.e., human vs. robot vs. disembodied). A subsequent discourse analysis explored differences in the breadth and depth of self-disclosure. Results show that non-human embodiment seems to have little effect on self-disclosure. Yet our data also show that, contradicting previous work, human embodiment may have a positive effect on the breadth and depth of self-disclosure.

    Theory of Robot Communication: II. Befriending a Robot over Time

    Building on theories of Computer-Mediated Communication (CMC), Human-Robot Interaction, and Media Psychology (i.e., the Theory of Affective Bonding), the current paper proposes an explanation of how, over time, people experience the mediated or simulated aspects of interaction with a social robot. In two simultaneously running loops, a more reflective process is balanced with a more affective process. If human interference is detected behind the machine, Robot-Mediated Communication commences, which basically follows CMC assumptions; if human interference remains undetected, Human-Robot Communication comes into play, treating the robot as an autonomous social actor. The more emotionally aroused a robot user is, the more likely they are to develop an affective relationship with what actually is a machine. The main contribution of this paper is an integration of Computer-Mediated Communication, Human-Robot Communication, and Media Psychology, outlining a full-blown theory of robot communication connected to friendship formation, accounting for communicative features, modes of processing, and psychophysiology.

    Social robots as communication partners to support emotional well-being

    Interpersonal communication behaviors play a significant role in maintaining emotional well-being. Self-disclosure is one such behavior that can have a meaningful impact on our emotional state. When we engage in self-disclosure, we can receive and provide support, improve our mood, and regulate our emotions. It also creates a comfortable space to share our feelings and emotions, which can have a positive impact on our overall mental and physical health.

    Social robots are gradually being introduced in a range of social and health settings. These autonomous machines can take on various forms and shapes and interact with humans using social behaviors and rules. They are being studied and introduced in psychosocial health interventions, including mental health and rehabilitation settings, to provide much-needed physical and social support to individuals. In my doctoral thesis, I aimed to explore how humans self-disclose and express their emotions to social robots and how this behavior can affect our perception of these agents. By studying speech-based communication between humans and social robots, I wanted to investigate how social robots can support human emotional well-being. While social robots show great promise in offering social support, there are still many questions to consider before deploying them in actual care contexts. It is important to carefully evaluate their utility and scope in interpersonal communication settings, especially since social robots do not yet offer the same opportunities as humans for social interaction.

    My dissertation consists of three empirical chapters that investigate the underlying psychological mechanisms of perception and behaviour within human–robot communication and their potential deployment as interventions for emotional well-being. Chapter 1 offers a comprehensive introduction to the topic of emotional well-being and self-disclosure from a psychological perspective. I begin by providing an overview of the existing literature and theory in this field. Next, I delve into the social perception of social robots, presenting a theoretical framework to help readers understand how people view these machines. To illustrate this, I review some of the latest studies on social robots in care settings, as well as those exploring how robots can encourage people to self-disclose more about themselves. Finally, I explore the key concepts of self-disclosure, including how it is defined, operationalized, and measured in experimental psychology and human–robot interaction research.

    In my first empirical chapter, Chapter 2, I explore how a social robot's embodiment influences people's disclosures in measurable terms, and how these disclosures differ from disclosures made to humans and to disembodied agents. Chapter 3 studies how prolonged and intensive long-term interactions with a social robot affect people's self-disclosure behavior towards the robot and their perceptions of the robot, as well as factors related to well-being. Additionally, I examine the role of the interaction's discussion theme. In Chapter 4, the final empirical chapter, I test a long-term and intensive social robot intervention with informal caregivers, people living in considerably difficult life situations. I investigate the potential of employing a social robot to elicit self-disclosure among informal caregivers over time, support their emotional well-being, and implicitly encourage them to adopt emotion regulation skills.
In the final discussion chapter, Chapter 5, I summarise the current findings and discuss the contributions, implications, and limitations of my work. I reflect on the contributions and challenges of this research approach and provide future directions for researchers in the relevant fields. The results of these studies provide meaningful evidence on user experience, acceptance, and trust of social robots in different settings, including care, and demonstrate the unique psychological nature of these dynamic social interactions. Overall, this thesis contributes to the development of social robots that can support emotional well-being through self-disclosure interactions and provides insights into how social robots can be used as mental health interventions for individuals coping with emotional distress.

    Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making

    The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter whether humans appear to be in the loop of decision-making, independent of whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making and whether humans merely seem, from the outside, to have such authority? Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating “behind the curtain.” Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates the collective governance of automation.

    Would You Trust a (Faulty) Robot? : Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust

    How do mistakes made by a robot affect its trustworthiness and acceptance in human-robot collaboration? We investigate how the perception of erroneous robot behavior may influence human interaction choices and the willingness to cooperate with the robot by following a number of its unusual requests. For this purpose, we conducted an experiment in which participants interacted with a home companion robot in one of two experimental conditions: (1) the correct mode or (2) the faulty mode. Our findings reveal that, while significantly affecting subjective perceptions of the robot and assessments of its reliability and trustworthiness, the robot's performance does not seem to substantially influence participants' decisions to (not) comply with its requests. However, our results further suggest that the nature of the task requested by the robot, e.g. whether its effects are revocable as opposed to irrevocable, has a significant impact on participants' willingness to follow its instructions.

    The impact of people's personal dispositions and personalities on their trust of robots in an emergency scenario

    Humans should be able to trust that they can safely interact with their home companion robot. However, robots can exhibit occasional mechanical, programming, or functional errors. We hypothesise that the severity of the consequences and the timing of a robot's different types of erroneous behaviours during an interaction may have different impacts on users' attitudes towards a domestic robot. First, we investigated human users' perceptions of the severity of various categories of potential errors that are likely to be exhibited by a domestic robot. Second, we used an interactive storyboard to evaluate participants' degree of trust in the robot after it performed tasks either correctly, or with 'small' or 'big' errors. Finally, we analysed the correlations between participants' responses regarding their personality, their predisposition to trust other humans, their perceptions of robots, and their interaction with the robot. We conclude that there is a correlation between the magnitude of an error performed by a robot and the corresponding loss of trust by the human towards the robot. Moreover, we observed that some traits of participants' personalities (conscientiousness and agreeableness) and their disposition to trust other humans (benevolence) significantly increased their tendency to trust a robot during an emergency scenario.

    Cyborgs as Frontline Service Employees: A Research Agenda

    Purpose: This paper identifies and explores potential applications of cyborgian technologies within service contexts and how service providers may leverage the integration of cyborgian service actors into their service propositions. In doing so, the paper proposes a new category of ‘melded’ frontline service employees (FLEs), where advanced technologies become embodied within human actors. The paper presents potential opportunities and challenges that may arise through cyborg technological advancements and proposes a future research agenda related to these.
    Design/methodology: This study draws on literature in the fields of services management, Artificial Intelligence [AI], robotics, Intelligence Augmentation [IA], and Human Intelligence [HI] to conceptualise potential cyborgian applications.
    Findings: The paper examines how cyborg bio- and psychophysical characteristics may significantly differentiate the nature of service interactions from traditional ‘unenhanced’ service interactions. In doing so, we propose ‘melding’ as a conceptual category of technological impact on FLEs, reflecting the embodiment of emergent technologies not previously captured within the existing literature on cyborgs. We examine how traditional FLE roles will potentially be impacted by the integration of emergent cyborg technologies, such as neural interfaces and implants, into service contexts, before outlining future research directions and highlighting the range of ethical considerations.
    Originality/Value: Service interactions with cyborg FLEs represent a new context for examining the potential impact of cyborgs. This paper explores how technological advancements will alter the individual capacities of humans, enabling such employees to intuitively and empathetically create solutions to complex service challenges. In doing so, we augment the extant literature on cyborgs, such as that on the body hacking movement. The paper also outlines a research agenda to address the potential consequences of cyborgian integration.

    Sharing Stress With a Robot: What Would a Robot Say?

    With the prevalence of mental health problems today, designing human-robot interaction for mental health intervention is not only possible, but critical. The current experiment examined how three types of robot disclosure (emotional, technical, and by-proxy) affect robot perception and human disclosure behavior during a stress-sharing activity. Emotional robot disclosure resulted in the lowest robot perceived safety. Post-hoc analysis revealed that increased perceived stress predicted reduced human disclosure, user satisfaction, robot likability, and future robot use. Negative attitudes toward robots also predicted reduced intention for future robot use. This work informs the design of robot disclosure, as well as how individual attributes, such as perceived stress, can impact human-robot interaction in a mental health context.