213 research outputs found

    INTRODUCTION

    Full text link
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/70110/2/PFLDAS-12-5-iii-1.pd

    ์ฑ—๋ด‡์ด ์‹ ๋ขฐ ์œ„๋ฐ˜์œผ๋กœ๋ถ€ํ„ฐ ํšŒ๋ณตํ•˜๋Š” ๋ฐ ์‚ฌ๊ณผ์™€ ๊ณต๊ฐ์ด ๋ฏธ์น˜๋Š” ์˜ํ–ฅ

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ์‚ฌํšŒ๊ณผํ•™๋Œ€ํ•™ ์‹ฌ๋ฆฌํ•™๊ณผ, 2022. 8. ํ•œ์†Œ์›.In the present study, we investigated how chatbots can recover user trust after making errors. In two experiments, participants had a conversation with a chatbot about their daily lives and personal goals. After giving an inadequate response to the userโ€™s negative sentiments, the chatbot apologized using internal or external error attribution and various levels of empathy. Study 1 showed that the type of apology did not affect usersโ€™ trust or the chatbotโ€™s perceived competence, warmth, or discomfort. Study 2 showed that short apologies increased trust and perceived competence of the chatbot compared to long apologies. In addition, apologies with internal attribution increased the perceived competence of the chatbot. The perceived comfort of the chatbot increased when apologies with internal attribution were longer as well as when apologies with external attribution were shorter. However, in both Study 1 and Study 2, the apology conditions did not significantly increase usersโ€™ trust or positively affect their perception of the chatbot in comparison to the no-apology condition. Our research provides practical guidelines for designing error recovery strategies for chatbots. The findings demonstrate that Human-Robot Interaction may require an approach to trust recovery that differs from Human-Human Interaction.๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ์ฑ—๋ด‡์ด ๋Œ€ํ™” ์ค‘ ์˜ค๋ฅ˜๊ฐ€ ์žˆ์—ˆ์„ ๋•Œ ์‚ฌ์šฉ์ž์˜ ์‹ ๋ขฐ๋ฅผ ํšŒ๋ณตํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•˜์—ฌ ํƒ์ƒ‰ํ•˜์˜€๋‹ค. ๋‘ ๋ฒˆ์˜ ์‹คํ—˜์—์„œ ์ฐธ์—ฌ์ž๋“ค์€ ์ผ์ƒ์ƒํ™œ๊ณผ ์ž์‹ ์˜ ๋ชฉํ‘œ์— ๊ด€ํ•˜์—ฌ ์ฑ—๋ด‡๊ณผ ๋Œ€ํ™”๋ฅผ ๋‚˜๋ˆ„์—ˆ๋‹ค. ์ฑ—๋ด‡์€ ์ฐธ์—ฌ์ž์˜ ๋ถ€์ •์  ๊ฐ์ •์— ๋Œ€ํ•ด ๋ถ€์ ์ ˆํ•œ ์‘๋‹ต์„ ํ•œ ํ›„, ๊ณต๊ฐ ์ˆ˜์ค€์„ ๋‹ฌ๋ฆฌํ•˜๋ฉฐ ๋‚ด์  ๊ท€์ธ ํ˜น์€ ์™ธ์  ๊ท€์ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ๊ณผํ–ˆ๋‹ค. ์—ฐ๊ตฌ 1์— ๋”ฐ๋ฅด๋ฉด ์‚ฌ๊ณผ์˜ ์ข…๋ฅ˜๋Š” ์‚ฌ์šฉ์ž์˜ ์‹ ๋ขฐ๋‚˜ ์ฑ—๋ด‡์˜ ์ง€๊ฐ๋œ ์œ ๋Šฅํ•จ, ๋”ฐ๋œปํ•จ, ๋ถˆํŽธ๊ฐ์— ์œ ์˜๋ฏธํ•œ ์˜ํ–ฅ์„ ์ฃผ์ง€ ์•Š์•˜๋‹ค. ์—ฐ๊ตฌ 2 ๊ฒฐ๊ณผ ์งง์€ ์‚ฌ๊ณผ๋Š” ๊ธด ์‚ฌ๊ณผ๋ณด๋‹ค ์ฑ—๋ด‡์— ๋Œ€ํ•œ ์‚ฌ์šฉ์ž์˜ ์‹ ๋ขฐ์™€ ์ง€๊ฐ๋œ ์œ ๋Šฅํ•จ์„ ๋” ํฌ๊ฒŒ ๋†’์˜€๋‹ค. ๋˜ํ•œ, ๋‚ด์  ๊ท€์ธ์„ ์‚ฌ์šฉํ•˜๋Š” ์‚ฌ๊ณผ๊ฐ€ ์ฑ—๋ด‡์˜ ์ง€๊ฐ๋œ ์œ ๋Šฅํ•จ์„ ๋” ํฌ๊ฒŒ ํ–ฅ์ƒ์‹œ์ผฐ๋‹ค. ๋‚ด์  ๊ท€์ธ์„ ์‚ฌ์šฉํ•˜๋Š” ์‚ฌ๊ณผ์˜ ๊ฒฝ์šฐ ๊ธธ์ด๊ฐ€ ๊ธธ ๋•Œ, ์™ธ์  ๊ท€์ธ์„ ์‚ฌ์šฉํ•˜๋Š” ์‚ฌ๊ณผ์˜ ๊ฒฝ์šฐ ๊ธธ์ด๊ฐ€ ์งง์„ ๋•Œ ์‚ฌ์šฉ์ž๋“ค์—๊ฒŒ ๋” ํŽธ์•ˆํ•˜๊ฒŒ ๋Š๊ปด์กŒ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์—ฐ๊ตฌ 1๊ณผ ์—ฐ๊ตฌ 2 ๋ชจ๋‘์—์„œ ์‚ฌ๊ณผ ์กฐ๊ฑด์€ ์‚ฌ์šฉ์ž์˜ ์‹ ๋ขฐ๋ฅผ ์œ ์˜๋ฏธํ•˜๊ฒŒ ์ฆ๊ฐ€์‹œํ‚ค๊ฑฐ๋‚˜ ์ฑ—๋ด‡์˜ ์ธ์‹์— ์œ ์˜๋ฏธํ•˜๊ฒŒ ๊ธ์ •์ ์ธ ์˜ํ–ฅ์„ ๋ฏธ์น˜์ง€ ์•Š์•˜๋‹ค. ๋ณธ ์—ฐ๊ตฌ๋Š” ์ฑ—๋ด‡ ์˜ค๋ฅ˜๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•œ ์‹ ๋ขฐ ํšŒ๋ณต ์ „๋žต์„ ์ˆ˜๋ฆฝํ•˜๊ธฐ ์œ„ํ•œ ์‹ค์šฉ์ ์ธ ์ง€์นจ์„ ์ œ๊ณตํ•œ๋‹ค. ๋˜ํ•œ, ๋ณธ ์—ฐ๊ตฌ ๊ฒฐ๊ณผ๋Š” ์ธ๊ฐ„-๋กœ๋ด‡ ์ƒํ˜ธ์ž‘์šฉ์—์„œ ์š”๊ตฌ๋˜๋Š” ์‹ ๋ขฐ ํšŒ๋ณต ์ „๋žต์€ ์ธ๊ฐ„-์ธ๊ฐ„ ์ƒํ˜ธ ์ž‘์šฉ์—์„œ ์‚ฌ์šฉ๋˜๋Š” ์ „๋žต๊ณผ๋Š” ์ƒ์ดํ•  ์ˆ˜ ์žˆ์Œ์„ ๋ณด์—ฌ์ค€๋‹ค.Abstract i Table of Contents ii List of Tables iii List of Figures iii Chapter 1. Introduction 1 1. Motivation 1 2. Previous Research 2 3. Purpose of Study 11 Chapter 2. Study 1 12 1. Hypotheses 12 2. Methods 12 3. Results 18 4. Discussion 23 Chapter 3. Study 2 25 1. Hypotheses 25 2. Methods 26 3. Results 30 4. Discussion 38 Chapter 4. Conclusion 40 Chapter 5. General Discussion 42 References 46 Appendix 54 ๊ตญ๋ฌธ์ดˆ๋ก 65์„
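    The Study 2 design described above crosses apology attribution (internal vs. external) with apology length (short vs. long), plus a no-apology control. The sketch below shows one way such conditions might be encoded; it is a minimal illustration, and all message texts and names (APOLOGY_TEMPLATES, apology_for) are hypothetical, not the stimuli or code from the thesis.

    # Hypothetical apology templates per (attribution, length) cell;
    # placeholder wording, not the thesis's actual stimuli.
    APOLOGY_TEMPLATES = {
        ("internal", "short"): "I'm sorry, that was my mistake.",
        ("internal", "long"): (
            "I'm sorry, that was my mistake. I misread how you were feeling, "
            "and I should have responded to you more carefully."
        ),
        ("external", "short"): "Sorry, a system error garbled my reply.",
        ("external", "long"): (
            "Sorry, a system error garbled my reply. Part of your message was "
            "lost in transmission, so my response did not match what you said."
        ),
    }

    def apology_for(condition):
        """Return the apology for an (attribution, length) cell, or None
        for the no-apology control condition."""
        return None if condition is None else APOLOGY_TEMPLATES[condition]

    # Example: the short, internally attributed apology.
    print(apology_for(("internal", "short")))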

    Having The Right Attitude: How Attitude Impacts Trust Repair in Human-Robot Interaction

    Full text link
    Robot co-workers, like human co-workers, make mistakes that undermine trust. Yet, trust is just as important in promoting human–robot collaboration as it is in promoting human–human collaboration. In addition, individuals can significantly differ in their attitudes toward robots, which can also impact or hinder their trust in robots. To better understand how individual attitude can influence trust repair strategies, we propose a theoretical model that draws from the theory of cognitive dissonance. To empirically verify this model, we conducted a between-subjects experiment with 100 participants assigned to one of four repair strategies (apologies, denials, explanations, or promises) over three trust violations. Individual attitudes did moderate the efficacy of repair strategies, and this effect differed over successive trust violations. Specifically, repair strategies were most effective relative to individual attitude during the second of the three trust violations, and promises were the trust repair strategy most impacted by an individual's attitude.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/171268/1/Esterwood and Roboert 2022 HRI.pdf (Preprint)
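    As a minimal sketch of the between-subjects design reported above (100 participants, four repair strategies, three successive trust violations), the following Python shows one plausible balanced assignment; the variable names and the model formula in the closing comment are my assumptions, not the authors' materials.

    import random

    STRATEGIES = ("apology", "denial", "explanation", "promise")
    N_PARTICIPANTS = 100   # between-subjects: one strategy per participant
    N_VIOLATIONS = 3       # each participant experiences three violations

    random.seed(0)  # reproducible assignment for this sketch

    # Balanced assignment: 25 participants per repair strategy.
    assignment = [s for s in STRATEGIES
                  for _ in range(N_PARTICIPANTS // len(STRATEGIES))]
    random.shuffle(assignment)

    trials = [{"participant": p, "strategy": s, "violation": v}
              for p, s in enumerate(assignment)
              for v in range(1, N_VIOLATIONS + 1)]

    # The reported moderation could then be tested with a mixed model such as
    #   trust ~ strategy * attitude * violation + (1 | participant)
    # where `attitude` is each participant's measured attitude toward robots.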

    Human-Machine Communication: Complete Volume. Volume 6

    Get PDF
    This is the complete volume of HMC Volume 6.

    Designing Embodied Interactive Software Agents for E-Learning: Principles, Components, and Roles

    Get PDF
    Embodied interactive software agents are complex autonomous, adaptive, and social software systems with a digital embodiment that enables them to act on and react to other entities (users, objects, and other agents) in their environment through bodily actions, which include the use of verbal and non-verbal communicative behaviors in face-to-face interactions with the user. These agents have been developed for various roles in different application domains, in which they perform tasks that have been assigned to them by their developers or delegated to them by their users or by other agents. In computer-assisted learning, embodied interactive pedagogical software agents have the general task of promoting human learning by working with students (and other agents) in computer-based learning environments, among them e-learning platforms based on Internet technologies, such as the Virtual Linguistics Campus (www.linguistics-online.com). In these environments, pedagogical agents provide contextualized, qualified, personalized, and timely assistance, cooperation, instruction, motivation, and services for both individual learners and groups of learners. This thesis develops a comprehensive, multidisciplinary, and user-oriented view of the design of embodied interactive pedagogical software agents, which integrates theoretical and practical insights from various academic and other fields. The research intends to contribute to the scientific understanding of issues, methods, theories, and technologies that are involved in the design, implementation, and evaluation of embodied interactive software agents for different roles in e-learning and other areas. For developers, the thesis provides sixteen basic principles (Added Value, Perceptible Qualities, Balanced Design, Coherence, Consistency, Completeness, Comprehensibility, Individuality, Variability, Communicative Ability, Modularity, Teamwork, Participatory Design, Role Awareness, Cultural Awareness, and Relationship Building) plus a large number of specific guidelines for the design of embodied interactive software agents and their components. Furthermore, it offers critical reviews of theories, concepts, approaches, and technologies from different areas and disciplines that are relevant to agent design. Finally, it discusses three pedagogical agent roles (virtual native speaker, coach, and peer) in the scenario of the linguistic fieldwork classes on the Virtual Linguistics Campus and presents detailed considerations for the design of an agent for one of these roles (the virtual native speaker).
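    As a rough illustration of the component structure this abstract discusses, the sketch below models the three pedagogical agent roles (virtual native speaker, coach, peer) with verbal and non-verbal behaviour kept separate. The class and method names, and all reply texts, are hypothetical, not taken from the thesis.

    from dataclasses import dataclass
    from enum import Enum

    class Role(Enum):
        NATIVE_SPEAKER = "virtual native speaker"
        COACH = "coach"
        PEER = "peer"

    @dataclass
    class PedagogicalAgent:
        name: str
        role: Role

        def respond(self, utterance: str) -> tuple[str, str]:
            """Pair a verbal reply with a non-verbal behaviour, reflecting
            the face-to-face turn-taking the abstract describes."""
            if self.role is Role.NATIVE_SPEAKER:
                return (f"Listen and repeat: {utterance}", "articulate slowly")
            if self.role is Role.COACH:
                return ("Good attempt. Let's refine that answer together.", "nod")
            return ("I was not sure about that one either!", "smile")  # peer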

    Determining the effect of human cognitive biases in social robots for human-robot interactions

    Get PDF
    The research presented in this thesis describes a model for aiding human-robot interactions based on the principle of a robot showing behaviours created from 'human' cognitive biases during human-robot interactions. The aim of this work is to study how cognitive biases can affect human-robot interactions in the long term. Currently, most human-robot interactions are based on a set of well-ordered and structured rules, which repeat regardless of the person or social situation. This trend tends to produce an unrealistic interaction, which can make it difficult for humans to relate 'naturally' to the social robot after a number of interactions. The main problem with these interactions is that the social robot shows a very structured set of behaviours and, as such, acts unnaturally and mechanically in terms of social interaction. On the other hand, fallible behaviours (e.g. forgetfulness, inability to understand others' emotions, bragging, blaming others) are common in humans and can be seen in everyday social interactions. Some of these fallible behaviours are caused by various cognitive biases. Researchers have studied and developed various humanlike skills (e.g. personality, emotional expression, traits) in social robots to make their behaviour more humanlike; as a result, social robots can perform various humanlike actions, such as walking, talking, gazing, or expressing emotion. But common human behaviours such as forgetfulness, inability to understand others' emotions, bragging, or blaming are not present in current social robots; such behaviours, which exist in and influence people, have not been explored in social robots. The study presented in this thesis implemented five cognitive biases in three different robots across four separate experiments to understand the influence of such cognitive biases on human-robot interactions. The results show that participants initially preferred to interact with a robot showing cognitively biased behaviours over a robot without such behaviours. In my first two experiments, the robots (ERWIN and MyKeepon) each interacted with participants using a single cognitive bias (misattribution and the empathy gap, respectively), and participants enjoyed the interactions that used such bias effects: for example, forgetfulness, source confusion, or always showing exaggerated happiness or sadness. In my later experiments, participants interacted with a robot (MARC) three times, with a time interval between interactions, and the results show that liking of the interactions in which the robot showed biased behaviours declined less than liking of the interactions in which it did not. In this thesis, I describe the investigation of these traits of forgetfulness, inability to understand others' emotions, and bragging and blaming behaviours, all influenced by cognitive biases, and I analyse people's responses to robots displaying such biased behaviours in human-robot interactions.
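    One of the biased behaviours described above, misattribution (source confusion), lends itself to a compact sketch: the robot stores facts with their sources but occasionally recalls the wrong source. This is my own illustration under assumed names (BiasedMemory, confusion_rate), not the thesis's implementation.

    import random

    class BiasedMemory:
        """Stores (fact, source) pairs and recalls sources with occasional
        misattribution, mimicking the source-confusion bias."""

        def __init__(self, confusion_rate=0.2, seed=None):
            self.confusion_rate = confusion_rate  # P(source confusion)
            self.rng = random.Random(seed)
            self.facts = []  # list of (fact, source) pairs

        def remember(self, fact, source):
            self.facts.append((fact, source))

        def recall_source(self, fact):
            """Who told the robot this fact? Occasionally gets it wrong."""
            sources = [s for f, s in self.facts if f == fact]
            if not sources:
                return None  # forgetfulness: nothing stored for this fact
            if len(self.facts) > 1 and self.rng.random() < self.confusion_rate:
                return self.rng.choice([s for _, s in self.facts])  # confused
            return sources[0]

    memory = BiasedMemory(confusion_rate=0.3, seed=1)
    memory.remember("likes tea", "Alice")
    memory.remember("plays chess", "Bob")
    print(memory.recall_source("likes tea"))  # occasionally misattributed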

    Service Failure Management in High-End Hospitality Resorts

    Get PDF
    The purpose of this study was to better understand the interactions that occur at high-end resorts during service failures that guests sometimes experience during their stay. Both service-failure managers and guests who had experienced service failures during a stay at a high-end resort were interviewed to examine the service recovery techniques and timing strategies (such as the ex-ante crisis timing strategy) that are perceived to be the best methods for correcting service failures during a guest's stay. In comparing the responses from service recovery managers and guests, commonalities were found regarding best practices for treating guests during service failures and the pre-existing understandings that service recovery managers have about techniques that have effectively resolved service failures in the past.
    • โ€ฆ
    corecore