
    ๋กœ๋ด‡์˜ ๊ณ ๊ฐœ๋ฅผ ์›€์ง์ด๋Š” ๋™์ž‘๊ณผ ํƒ€์ด๋ฐ์ด ์ธ๊ฐ„๊ณผ ๋กœ๋ด‡์˜ ์ƒํ˜ธ์ž‘์šฉ์— ๋ฏธ์น˜๋Š” ํšจ๊ณผ

    Get PDF
    Thesis (Master's) -- Seoul National University Graduate School: Interdisciplinary Program in Cognitive Science, College of Humanities, February 2023. Advisor: Sowon Hahn.
    In recent years, robots with artificial intelligence capabilities have become ubiquitous in our daily lives. As intelligent robots interact closely with humans, their social abilities become increasingly important. In particular, nonverbal communication can enhance efficient social interaction between human users and robots, but robots are limited in the behaviors they can express. In this study, we investigated how minimal head movements of a robot influence human-robot interaction. We designed a new robot with a simply shaped body and a minimal head movement mechanism, and conducted an experiment to examine participants' perception of the robot's different head movements and their timing. Participants were randomly assigned to one of three movement conditions: head nodding (A), head shaking (B), or head tilting (C). Each movement condition included two timing variables: head movement prior to the utterance and head movement simultaneous with the utterance. In all head movement conditions, participants rated the robot higher on anthropomorphism, animacy, likeability, and perceived intelligence than in the non-movement (utterance only) condition. Regarding timing, head movement prior to the utterance was perceived as more natural than head movement simultaneous with the utterance. The findings demonstrate that a robot's head movements positively affect user perception of the robot, and that head movement preceding the utterance can make human-robot conversation more natural. By implementing head movement and movement timing, simply shaped robots can achieve better social interaction with humans.
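    A minimal sketch of how the 3 x 2 design described above could be driven in software, assuming a hypothetical robot API with nod(), shake(), tilt(), and say() primitives (none of these names come from the thesis):

    import random
    import threading

    # Hypothetical gesture primitives mapped to the three movement conditions.
    MOVEMENTS = {"A": "nod", "B": "shake", "C": "tilt"}

    def run_trial(robot, movement_key, timing, utterance):
        """Perform one head movement, timed either before or with speech."""
        gesture = getattr(robot, MOVEMENTS[movement_key])
        if timing == "prior":
            gesture()             # complete the head movement first,
            robot.say(utterance)  # then deliver the utterance
        else:  # "simultaneous"
            t = threading.Thread(target=gesture)  # overlap movement and speech
            t.start()
            robot.say(utterance)
            t.join()

    # Between-subjects assignment: each participant sees exactly one
    # movement condition and, within it, both timing variants.
    movement_key = random.choice(sorted(MOVEMENTS))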

    How Transparency Measures Can Attenuate Initial Failures of Intelligent Decision Support Systems

    Get PDF
    Owing to their high functional complexity, trust plays a critical role in the adoption of intelligent decision support systems (DSS). Failures in initial usage phases are especially likely to endanger trust, since users have yet to assess the system's capabilities over time. Because such initial failures are unavoidable, it is crucial to understand how providers can inform users about system capabilities in order to rebuild user trust. Using an online experiment, we evaluate the effects of recurring explanations and initial tutorials as transparency measures on trust. We find that recurring explanations are superior to initial tutorials in establishing trust in intelligent DSS. However, recurring explanations are only as effective as tutorials, or as the combination of both, in rebuilding trust after initial failures occur. Our results provide empirical insights for the design of transparency mechanisms for intelligent DSS, especially those with high underlying algorithmic complexity or potentially high damage.

    Having The Right Attitude: How Attitude Impacts Trust Repair in Human-Robot Interaction

    Full text link
    Robot co-workers, like human co-workers, make mistakes that undermine trust. Yet trust is just as important in promoting human-robot collaboration as it is in promoting human-human collaboration. In addition, individuals can differ significantly in their attitudes toward robots, which can also impact or hinder their trust in robots. To better understand how individual attitude can influence trust repair strategies, we propose a theoretical model that draws from the theory of cognitive dissonance. To empirically verify this model, we conducted a between-subjects experiment with 100 participants assigned to one of four repair strategies (apologies, denials, explanations, or promises) over three trust violations. Individual attitudes did moderate the efficacy of repair strategies, and this effect differed over successive trust violations. Specifically, repair strategies were most effective relative to individual attitude during the second of the three trust violations, and promises were the trust repair strategy most impacted by an individual's attitude.
    Peer reviewed. Preprint: http://deepblue.lib.umich.edu/bitstream/2027.42/171268/1/Esterwood and Roboert 2022 HRI.pdf

    From Artificial Intelligence (AI) to Intelligence Augmentation (IA): Design Principles, Potential Risks, and Emerging Issues

    Get PDF
    We typically think of artificial intelligence (AI) as focusing on empowering machines with human capabilities so that they can function on their own, but in truth much of AI focuses on intelligence augmentation (IA), that is, on augmenting human capabilities. We propose a framework for designing IA systems that addresses six central questions about IA: why, what, who/whom, how, when, and where. To address the how aspect, we introduce four guiding principles: simplification, interpretability, human-centeredness, and ethics. The what aspect includes an IA architecture that goes beyond the direct interactions between humans and machines by introducing their indirect relationships through data and domain. The architecture also points to directions for operationalizing the simplification design principle. We further identify potential risks and emerging issues in IA design and development to suggest new questions for future IA research and to foster its positive impact on humanity.

    Are human-like robots trusted like humans? An investigation into the effect of anthropomorphism on trust in robots measured by expected value as reflected by feedback related negativity and P300

    Get PDF
    Robots are becoming more prevalent in industry and society. However, to ensure their effective use, trust must be calibrated correctly. Anthropomorphism is one factor that is important for trust in robots (Hancock et al., 2011). Questionnaires and investment games have been used to investigate the impact of anthropomorphism on trust, but these methods have led to disparate findings. Neurophysiological methods have also been used as an implicit measure of trust. Feedback related negativity (FRN) and P300 are event related potential (ERP) components that have been associated with processes involved in trust, such as outcome evaluation. This study uses the trust game (Berg et al., 1995), along with questionnaires and ERP data, to investigate trust and expectations toward three agents varying in anthropomorphism: a human, an anthropomorphic robot, and a computer. The behavioural and self-reported findings suggest that the human is perceived as the most trustworthy, with no difference between the robot and the computer. The ERP data revealed a robot-driven difference in FRN and P300 activation, which suggests that the robot violated expectations more than the human or the computer did. The present findings are explained in terms of the perfect automation schema and perceptions of trustworthiness and dominance. Future research into the impact of voice pitch on dominance and trustworthiness, and into the impact of trust violations, is suggested in order to gain a more holistic picture of the impact of anthropomorphism on trust.
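    For readers unfamiliar with the paradigm, here is a minimal sketch of one round of the Berg et al. (1995) trust game; the tripling multiplier and the endowment of 10 follow the standard setup, while the concrete numbers in the example are purely illustrative:

    def trust_game_round(endowment, sent, return_fraction):
        """One round of the Berg et al. (1995) trust game.

        The investor sends part of an endowment, the experimenter triples
        it, and the trustee returns some fraction of the tripled amount.
        The amount sent is the standard behavioural index of trust.
        """
        assert 0 <= sent <= endowment
        tripled = 3 * sent
        returned = return_fraction * tripled
        investor_payoff = endowment - sent + returned
        trustee_payoff = tripled - returned
        return investor_payoff, trustee_payoff

    # Example: send 5 of 10; the trustee (human, robot, or computer)
    # returns half of the tripled amount.
    print(trust_game_round(10, 5, 0.5))  # -> (12.5, 7.5)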

    A Broad View on Robot Self-Defense: Rapid Scoping Review and Cultural Comparison

    Get PDF
    With power comes responsibility: as robots become more advanced and prevalent, the role they play in human society becomes increasingly important. Given that violence is an important problem, the question emerges of whether robots could defend people, even if doing so might cause harm to someone. The current study explores the broad context of how people perceive the acceptability of such robot self-defense (RSD) in terms of (1) theory, via a rapid scoping review, and (2) public opinion in two countries. As a result, we summarize and discuss: the increasing use of robots capable of wielding force by law enforcement and the military, negativity toward robots, ethical and legal questions (including differences from the well-known trolley problem), control in the presence of potential failures, and the practical capabilities such robots might require. Furthermore, a survey was conducted, indicating that participants accepted the idea of RSD, with some cultural differences. We believe that, while substantial obstacles will need to be overcome to realize RSD, society stands to gain from exploring its possibilities over the longer term, toward supporting human well-being in difficult times.