582 research outputs found

    Towards the improvement of self-service systems via emotional virtual agents

    Affective computing and emotional agents have been found to have a positive effect on human-computer interactions. To develop an acceptable emotional agent for use in a self-service interaction, two stages of research were carried out: the first to determine which facial expressions are present in such an interaction, and the second to determine which emotional agent behaviours are perceived as appropriate during a problematic self-service shopping task. In the first stage, facial expressions associated with negative affect were found to occur during self-service shopping interactions, indicating that facial expression detection is suitable for detecting negative affective states during self-service interactions. In the second stage, user perceptions of the emotional facial expressions displayed by an emotional agent during a problematic self-service interaction were gathered. Overall, the expression of disgust was perceived as inappropriate and emotionally neutral behaviour as appropriate; however, gender differences emerged, with female participants also perceiving surprise as inappropriate. The results suggest that agents should adapt their behaviour and appearance to user characteristics such as gender.
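
    The first-stage finding, that negative-affect expressions can be detected during self-service sessions, lends itself to a small illustration. The sketch below is hypothetical (plain Python, no particular FER library assumed): the negative label set, the probability threshold, and the minimum frame count are illustrative assumptions, not the study's apparatus.

        # Hypothetical sketch: flag sustained negative affect in a self-service
        # session from per-frame expression scores. Assumes some upstream FER
        # classifier already yields a {label: probability} dict per video frame.
        NEGATIVE = {"anger", "disgust", "fear", "sadness"}

        def negative_affect_detected(frames, threshold=0.6, min_frames=5):
            """frames: iterable of {label: probability} dicts, one per frame."""
            hits = 0
            for scores in frames:
                label = max(scores, key=scores.get)  # top-scoring expression
                if label in NEGATIVE and scores[label] >= threshold:
                    hits += 1
            return hits >= min_frames  # require sustained affect, not a single blip

    A self-service kiosk could, for example, trigger an assistance prompt or shift the agent to neutral behaviour when this flag fires.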

    On the profoundness and preconditions of social responses towards social robots: experimental investigations using indirect measurement techniques

    Riether N. On the profoundness and preconditions of social responses towards social robots: experimental investigations using indirect measurement techniques. Bielefeld: Universität Bielefeld; 2013.

    Human-Machine Communication: Complete Volume. Volume 6

    This is the complete volume of HMC Volume 6.

    Understanding the neural mechanisms of empathy toward robots to shape future applications

    This article provides an overview of how modern neuroscience evaluations link to robot empathy. It reviews the brain correlates of empathy and caregiving, and how they may relate to higher cognitive functions, with an emphasis on women. We discuss how an understanding of these brain correlates can inform the development of social robots with enhanced empathy and caregiving abilities. We propose that the availability of such robots will benefit many aspects of society, including the transition to parenthood and parenting, in which women are deeply involved both in everyday life and in scientific research. We conclude with some of the barriers facing women in the field and how robotics and robot-empathy research benefit from a broad representation of researchers.

    Developing an Affect-Aware Rear-Projected Robotic Agent

    Social (or sociable) robots are designed to interact with people in a natural and interpersonal manner. They are becoming an integrated part of our daily lives and have achieved positive outcomes in several applications such as education, health care, quality of life, and entertainment. Despite significant progress towards the development of realistic social robotic agents, a number of problems remain to be solved. First, current social robots either lack the ability to engage in deep social interaction with humans, or they are very expensive to build and maintain. Second, current social robots have yet to reach the full emotional and social capabilities necessary for rich and robust interaction with human beings. To address these problems, this dissertation presents the development of a low-cost, flexible, affect-aware, rear-projected robotic agent (called ExpressionBot), designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The developed robotic platform uses state-of-the-art character animation technologies to create an animated human face (aka avatar) capable of showing facial expressions, realistic eye movement, and accurate visual speech, and then projects this avatar onto a face-shaped translucent mask. The mask and the projector are then rigged onto a neck mechanism that can move like a human head. Since an animation is projected onto a mask, the robotic face is a highly flexible research tool, mechanically simple, and low-cost to design, build, and maintain compared with mechatronic and android faces. The results of our comprehensive Human-Robot Interaction (HRI) studies illustrate the benefits and value of the proposed rear-projected robotic platform over a virtual agent with the same animation displayed on a 2D computer screen. The results indicate that ExpressionBot is well accepted by users, with some advantages in expressing facial expressions more accurately and in conveying mutual eye-gaze contact. To improve the social capabilities of the robot and create an expressive, empathic, affect-aware social agent capable of interpreting users' emotional facial expressions, we developed a new Deep Neural Network (DNN) architecture for Facial Expression Recognition (FER). The proposed DNN was initially trained on seven well-known publicly available databases and performed significantly better than, or comparably to, traditional convolutional neural networks and other state-of-the-art methods in both accuracy and learning time. Since the performance of an automated FER system depends heavily on its training data, and the eventual goal of the proposed robotic platform is to interact with users in an uncontrolled environment, a database of facial expressions in the wild (called AffectNet) was created by querying emotion-related keywords in different search engines. AffectNet contains more than 1M images with faces, of which 440,000 are manually annotated for facial expression, valence, and arousal. Two DNNs were trained on AffectNet, one to classify the facial expression images and one to predict valence and arousal values. Various evaluation metrics show that our deep neural networks trained on AffectNet perform better than conventional machine learning methods and available off-the-shelf FER systems.
    We then integrated this automated FER system into the spoken dialog of our robotic platform to extend and enrich the capabilities of ExpressionBot beyond spoken dialog and create an affect-aware robotic agent that can measure and infer users' affect and cognition. Three social/interaction aspects (task engagement, empathy, and likability of the robot) were measured in an experiment with the affect-aware robotic agent. The results indicate that users rated our affect-aware agent as empathic and likable as a robot in which the user's affect is recognized by a human operator (Wizard-of-Oz, WoZ). In summary, this dissertation presents the development and HRI studies of a perceptive, expressive, conversational, rear-projected, life-like robotic agent (aka ExpressionBot, or Ryan) that models natural face-to-face communication between a human and an empathic agent. The results of our in-depth human-robot interaction studies show that this robotic agent can serve as a model for creating the next generation of empathic social robots.
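
    The FER pipeline described above invites a brief illustration. The PyTorch sketch below is hypothetical: the dissertation trained two separate DNNs on AffectNet, one classifying the discrete expression categories and one regressing valence and arousal, whereas this sketch folds both tasks into a single small multi-head network purely for brevity. The layer sizes, the eight-class label set, and the equal loss weighting are assumptions, not the author's architecture.

        # Minimal multi-head FER sketch (illustrative, not the dissertation's DNN).
        import torch
        import torch.nn as nn

        class FERNet(nn.Module):
            def __init__(self, num_classes: int = 8):  # AffectNet's 8 expression categories
                super().__init__()
                self.trunk = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.expr_head = nn.Linear(128, num_classes)  # expression logits
                self.va_head = nn.Sequential(nn.Linear(128, 2), nn.Tanh())  # valence, arousal in [-1, 1]

            def forward(self, x):
                h = self.trunk(x)
                return self.expr_head(h), self.va_head(h)

        model = FERNet()
        imgs = torch.randn(4, 3, 96, 96)          # a batch of face crops
        expr_labels = torch.randint(0, 8, (4,))   # categorical expression labels
        va_labels = torch.rand(4, 2) * 2 - 1      # valence/arousal targets in [-1, 1]

        logits, va = model(imgs)
        loss = nn.functional.cross_entropy(logits, expr_labels) \
             + nn.functional.mse_loss(va, va_labels)  # assumed equal task weighting
        loss.backward()

    An affect-aware agent of this kind would feed camera face crops through such a model and route the predicted affect into its dialog policy.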

    The Effect of Apology and Empathy on a Chatbot's Recovery from Trust Violation

    Master's thesis, Department of Psychology, College of Social Sciences, Seoul National University Graduate School, August 2022. Advisor: Sowon Hahn. In the present study, we investigated how chatbots can recover user trust after making errors. In two experiments, participants had a conversation with a chatbot about their daily lives and personal goals. After giving an inadequate response to the user's negative sentiments, the chatbot apologized using internal or external error attribution and various levels of empathy. Study 1 showed that the type of apology did not affect users' trust or the chatbot's perceived competence, warmth, or discomfort. Study 2 showed that short apologies increased trust and the perceived competence of the chatbot compared to long apologies. In addition, apologies with internal attribution increased the perceived competence of the chatbot. The perceived comfort of the chatbot increased when apologies with internal attribution were longer, and when apologies with external attribution were shorter. However, in both Study 1 and Study 2, the apology conditions did not significantly increase users' trust or positively affect their perception of the chatbot in comparison to the no-apology condition. Our research provides practical guidelines for designing error recovery strategies for chatbots. The findings demonstrate that human-robot interaction may require an approach to trust recovery that differs from human-human interaction.

    Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition

    This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice both to the robustness of the social responses robots solicit in humans and to the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots and their unsophisticated social capabilities prevent any attribution of rights to robots, which are devoid of intrinsic moral dignity and personal status. On the other hand, we argue that another form of moral consideration, not based on rights attribution, can and must be granted to robots. The reason is that relationships with robots offer human agents important opportunities to cultivate both vices and virtues, as social interaction with other human beings does. Our argument appeals to social recognition to explain why social robots, unlike other technological artifacts, are capable of establishing quasi-social relationships with their human users as pseudo-persons. This recognition dynamic justifies seeing robots as worthy of moral consideration from a virtue-ethical standpoint, as it predicts the pre-reflective formation of persistent affective dispositions and behavioral habits capable of corrupting the human user's character. We conclude by drawing attention to a potential paradox raised by our analysis and by examining the main conceptual conundrums that our approach has to face.

    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become part of our daily lives. It is therefore increasingly important to understand how these social interactions emerge and why users appear to be influenced by them. For this reason, I explore questions about the antecedents and consequences of this phenomenon, known as anthropomorphism, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze both the empirical studies that directly measure anthropomorphism and those that refer to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models of the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users' self-perceptions, their perceptions of the technology, how they interact with the technology, and their performance. Examples include changes in a user's trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users' perceived agency and in their self- and social identity, much as in interactions between humans. I then critically examine current theories of anthropomorphism and present propositions about its nature based on the results of the empirical literature. Subsequently, I introduce a two-factor model of anthropomorphism which proposes that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic) and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared-agency effects or changing the user's social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.