213 research outputs found
INTRODUCTION
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/70110/2/PFLDAS-12-5-iii-1.pd
The Effect of Apology and Empathy on a Chatbot's Recovery from Trust Violation
Thesis (M.A.) -- Graduate School of Seoul National University: College of Social Sciences, Department of Psychology, August 2022.
In the present study, we investigated how chatbots can recover user trust after making errors. In two experiments, participants had a conversation with a chatbot about their daily lives and personal goals. After giving an inadequate response to the user's negative sentiments, the chatbot apologized using internal or external error attribution and varying levels of empathy. Study 1 showed that the type of apology did not affect users' trust or the chatbot's perceived competence, warmth, or discomfort. Study 2 showed that short apologies increased trust in and the perceived competence of the chatbot compared to long apologies. In addition, apologies with internal attribution increased the perceived competence of the chatbot. The perceived comfort of the chatbot increased when apologies with internal attribution were longer, as well as when apologies with external attribution were shorter. However, in both Study 1 and Study 2, the apology conditions did not significantly increase users' trust or positively affect their perception of the chatbot in comparison to the no-apology condition.
Our research provides practical guidelines for designing error recovery strategies for chatbots. The findings demonstrate that Human-Robot Interaction may require an approach to trust recovery that differs from Human-Human Interaction.
Abstract
Table of Contents
List of Tables
List of Figures
Chapter 1. Introduction
1. Motivation
2. Previous Research
3. Purpose of Study
Chapter 2. Study 1
1. Hypotheses
2. Methods
3. Results
4. Discussion
Chapter 3. Study 2
1. Hypotheses
2. Methods
3. Results
4. Discussion
Chapter 4. Conclusion
Chapter 5. General Discussion
References
Appendix
Abstract in Korean
Having The Right Attitude: How Attitude Impacts Trust Repair in Human-Robot Interaction
Robot co-workers, like human co-workers, make mistakes that undermine trust. Yet, trust is just as important in promoting human-robot collaboration as it is in promoting human-human collaboration. In addition, individuals can differ significantly in their attitudes toward robots, which can also impact or hinder their trust in robots. To better understand how individual attitude can influence trust repair strategies, we propose a theoretical model that draws from the theory of cognitive dissonance. To empirically verify this model, we conducted a between-subjects experiment with 100 participants assigned to one of four repair strategies (apologies, denials, explanations, or promises) over three trust violations. Individual attitudes did moderate the efficacy of repair strategies, and this effect differed over successive trust violations. Specifically, repair strategies were most effective relative to individual attitude during the second of the three trust violations, and promises were the trust repair strategy most impacted by an individual's attitude.
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/171268/1/Esterwood and Roboert 2022 HRI.pdf
Description of Esterwood and Roboert 2022 HRI.pdf : Preprint
Human-Machine Communication: Complete Volume. Volume 6
This is the complete volume of HMC Volume 6.
Achieving Transparency in Human-Collective Systems
Collective robotic systems are biologically inspired and exhibit behaviors found in spatial swarms (e.g., fish), colonies (e.g., ants), or a combination of both (e.g., bees). The popularity of collective robotic systems continues to increase due to their apparent global intelligence and emergent behaviors. Many applications can benefit from the incorporation of collectives, including environmental monitoring, disaster response missions, and infrastructure support. Human-collective system designers continue to debate how best to achieve transparency in human-collective systems in order to attain meaningful and insightful information exchanges between the operator and collective, enable positive operator influence on collectives, and improve the human-collective's performance.
Few human-collective evaluations have been conducted, and many of those have only assessed how embedding transparency into one system design element (e.g., models, visualizations, or control mechanisms) may impact human-collective behaviors, such as human-collective performance. This dissertation developed a transparency definition for collective systems that was leveraged to assess how to achieve transparency in a single human-collective system. Multiple models and visualizations were evaluated for a sequential best-of-n decision-making task with four collectives. Transparency was evaluated with respect to how the model and visualization impacted human operators who possess different capabilities, operator comprehension, system usability, and human-collective performance. Transparency design guidance was created in order to aid the design of future human-collective systems. One set of guidelines was inspired by the results and discussions of the single human-collective analyses, and another set was based on a review of the biological literature.
This dissertation can be used to help designers achieve transparency in human-collective systems. The primary contributions are:
1. A transparency definition for human-collective systems that describes the process of identifying what factors affect and are influenced by transparency, why those factors are important, and how to design a system to achieve transparency.
2. An expansive set of metrics that successfully evaluated how transparency influenced operators with different individual capabilities, operator comprehension, system usability, and human-collective performance.
3. The recommendation that system transparency quantification requires evaluating the transparency embedded into the various system design elements in order to determine how they interact with one another and influence the human-collective interactions and performance.
4. Design guidance recommendations with respect to models, visualizations, and control mechanisms in order to inform designers how transparency can be achieved for human-collective systems.
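The sequential best-of-n decision-making task mentioned above can be illustrated with a minimal sketch. The voter-style recruitment rule, the quorum threshold, and all parameters below are illustrative assumptions for exposition only, not the dissertation's actual collective models:

```python
import random

def best_of_n(site_qualities, n_agents=50, quorum=0.8, noise=0.05,
              max_steps=20000, seed=0):
    """Toy voter-model sketch of a collective best-of-n decision.

    Each agent favours one candidate site. At every step two agents are
    paired; each forms a noisy estimate of its own site's quality, and
    the agent with the lower estimate is recruited to the other's site.
    The collective 'decides' once a quorum fraction of agents favours
    the same site.
    """
    rng = random.Random(seed)
    n_sites = len(site_qualities)
    opinions = [rng.randrange(n_sites) for _ in range(n_agents)]
    for _ in range(max_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        est_i = site_qualities[opinions[i]] + rng.gauss(0, noise)
        est_j = site_qualities[opinions[j]] + rng.gauss(0, noise)
        if est_j > est_i:
            opinions[i] = opinions[j]  # i adopts j's preferred site
        else:
            opinions[j] = opinions[i]  # j adopts i's preferred site
        counts = [opinions.count(s) for s in range(n_sites)]
        winner = max(range(n_sites), key=counts.__getitem__)
        if counts[winner] >= quorum * n_agents:
            return winner
    return None  # no quorum reached within max_steps

# Four candidate sites; the collective converges on the highest-quality one.
print(best_of_n([0.2, 0.4, 0.6, 0.9]))
```

Real collective best-of-n models add site discovery, interaction topology, and operator influence; this sketch captures only the recruitment-and-quorum dynamic that such decisions share.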
Designing Embodied Interactive Software Agents for E-Learning: Principles, Components, and Roles
Embodied interactive software agents are complex autonomous, adaptive, and social software systems with a digital embodiment that enables them to act on and react to other entities (users, objects, and other agents) in their environment through bodily actions, which include the use of verbal and non-verbal communicative behaviors in face-to-face interactions with the user. These agents have been developed for various roles in different application domains, in which they perform tasks that have been assigned to them by their developers or delegated to them by their users or by other agents. In computer-assisted learning, embodied interactive pedagogical software agents have the general task to promote human learning by working with students (and other agents) in computer-based learning environments, among them e-learning platforms based on Internet technologies, such as the Virtual Linguistics Campus (www.linguistics-online.com). In these environments, pedagogical agents provide contextualized, qualified, personalized, and timely assistance, cooperation, instruction, motivation, and services for both individual learners and groups of learners.
This thesis develops a comprehensive, multidisciplinary, and user-oriented view of the design of embodied interactive pedagogical software agents, which integrates theoretical and practical insights from various academic and other fields. The research intends to contribute to the scientific understanding of issues, methods, theories, and technologies that are involved in the design, implementation, and evaluation of embodied interactive software agents for different roles in e-learning and other areas. For developers, the thesis provides sixteen basic principles (Added Value, Perceptible Qualities, Balanced Design, Coherence, Consistency, Completeness, Comprehensibility, Individuality, Variability, Communicative Ability, Modularity, Teamwork, Participatory Design, Role Awareness, Cultural Awareness, and Relationship Building) plus a large number of specific guidelines for the design of embodied interactive software agents and their components. Furthermore, it offers critical reviews of theories, concepts, approaches, and technologies from different areas and disciplines that are relevant to agent design. Finally, it discusses three pedagogical agent roles (virtual native speaker, coach, and peer) in the scenario of the linguistic fieldwork classes on the Virtual Linguistics Campus and presents detailed considerations for the design of an agent for one of these roles (the virtual native speaker).
Determining the effect of human cognitive biases in social robots for human-robot interactions
The research presented in this thesis describes a model for aiding human-robot interactions based on the principle of a robot showing behaviours created from 'human' cognitive biases in human-robot interactions. The aim of this work is to study how cognitive biases can affect human-robot interactions in the long term.
Currently, most human-robot interactions are based on a set of well-ordered and structured rules, which repeat regardless of the person or social situation. This trend tends to produce an unrealistic interaction, which can make it difficult for humans to relate 'naturally' to the social robot after a number of encounters. The main issue with these interactions is that the social robot shows a very structured set of behaviours and, as such, acts unnaturally and mechanically in terms of social interactions. On the other hand, fallible behaviours (e.g. forgetfulness, inability to understand others' emotions, bragging, blaming others) are common in humans and can be seen in regular social interactions. Some of these fallible behaviours are caused by various cognitive biases. Researchers have studied and developed various humanlike skills (e.g. personality, emotional expressions, traits) in social robots to make their behaviours more humanlike, and as a result, social robots can perform various humanlike actions, such as walking, talking, gazing, or emotional expression. But common human behaviours such as forgetfulness, inability to understand others' emotions, bragging, or blaming are not present in current social robots; such behaviours, which exist in and influence people, have not been explored in social robots.
The study presented in this thesis developed five cognitive biases in three different robots across four separate experiments to understand the influence of such cognitive biases in human-robot interactions. The results show that participants initially preferred interacting with the robot exhibiting cognitively biased behaviours over the robot without such behaviours. In my first two experiments, the robots (ERWIN and MyKeepon) each interacted with the participants using a single cognitive bias (misattribution and the empathy gap, respectively), and participants enjoyed the interactions involving such bias effects: for example, forgetfulness, source confusion, or always showing exaggerated happiness or sadness in the robots. In my later experiments, participants interacted with the robot (MARC) three times, with a time interval between interactions, and the results show that liking of the interactions in which the robot showed biased behaviours decreased less than liking of the interactions in which the robot did not show any biased behaviours.
In the current thesis, I describe the investigation of these traits of forgetfulness, the inability to understand others' emotions, and bragging and blaming behaviours, which are influenced by cognitive biases, and I also analyse people's responses to robots displaying such biased behaviours in human-robot interactions.
Service Failure Management in High-End Hospitality Resorts
The purpose of this study was to better understand the interactions that occur at high-end resorts during service failures that guests sometimes experience during their stay. Both service-failure managers and guests who had experienced service failures during their stay at a high-end resort were interviewed to examine the service recovery techniques and timing strategies (such as the ex-ante crisis timing strategy) that are perceived to be the best methods for correcting service failures during a guest's stay. In comparing the responses from service recovery managers and guests, commonalities were found regarding best practices for treating guests during service failures, and regarding the pre-existing understandings that service recovery managers have about techniques that have effectively resolved service failures in the past.
- …