3,148 research outputs found

    Persuasiveness of social robot 'Nao' based on gaze and proximity

    Social robots have widely infiltrated retail and public spaces. They are being utilised across a wide range of scenarios to influence decision making, disseminate information, and act as a signage mechanism, under the umbrella of persuasive robots or persuasive technology. While there have been several studies in the aforementioned area, the effect of non-verbal behaviour on persuasive abilities is largely unexplored. Therefore, in this research, we examine whether two key non-verbal attributes, namely proximity and gaze, can elicit persuasion, compliance, and specific personality appeals. For this, we conducted a 2 (eye gaze) x 2 (proximity) between-subjects experiment in which participants viewed a video-based scenario of the Nao robot. Our initial analysis did not reveal any significant effects of the non-verbal attributes. However, perceived compliance and persuasion were significantly correlated with knowledge, responsiveness, and trustworthiness. In conclusion, we discuss how the design of a robot could make it more convincing, as marketing and brand promotion companies could use robots to enhance their advertisement operations.
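
    As a purely illustrative aside, the sketch below shows how ratings from a 2 (eye gaze) x 2 (proximity) between-subjects design might be analysed with a two-way ANOVA plus a correlation check. The column names, the simulated data, and the choice of Python with statsmodels and scipy are assumptions for illustration, not the study's actual variables or analysis code.

```python
# Hypothetical analysis sketch for a 2 (eye gaze) x 2 (proximity) between-subjects design.
# The data frame, column names, and simulated ratings are illustrative placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# One row per participant: condition labels plus simulated 7-point ratings.
df = pd.DataFrame({
    "gaze":       ["direct", "direct", "averted", "averted"] * 10,
    "proximity":  ["near", "far"] * 20,
    "persuasion": stats.norm.rvs(loc=4.0, scale=1.0, size=40, random_state=1),
    "trust":      stats.norm.rvs(loc=4.2, scale=0.9, size=40, random_state=2),
})

# Two-way ANOVA: main effects of gaze and proximity plus their interaction.
model = smf.ols("persuasion ~ C(gaze) * C(proximity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Correlation between perceived persuasion and perceived trustworthiness.
r, p = stats.pearsonr(df["persuasion"], df["trust"])
print(f"persuasion-trust correlation: r = {r:.2f}, p = {p:.3f}")
```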

    A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction

    Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods for disambiguating natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions -- mixed reality, augmented reality, and a monitor as the baseline -- using objective measures such as time and accuracy, and subjective measures like engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the mixed reality condition, participants found that modality more engaging than the other two, but overall showed a preference for the augmented reality condition over the monitor and mixed reality conditions.

    Final Report for the DARPA/NSF Interdisciplinary Study on Human-Robot Interaction

    As part of a Defense Advanced Research Projects Agency/National Science Foundation study on human-robot interaction (HRI), over sixty representatives from academia, government, and industry participated in an interdisciplinary workshop, which allowed roboticists to interact with psychologists, sociologists, cognitive scientists, communication experts, and human-computer interaction specialists to discuss common interests in the field of HRI, and to establish a dialogue across the disciplines for future collaborations. We include initial work that was done in preparation for the workshop, links to keynote and other presentations, and a summary of the findings, outcomes, and recommendations that were generated by the participants. Findings of the study include: the need for more extensive interdisciplinary interaction, identification of basic taxonomies and research issues, social informatics, establishment of a small number of common application domains, and field experience for members of the HRI community. An overall conclusion of the workshop was expressed as follows: HRI is a cross-disciplinary area, which poses barriers to meaningful research, synthesis, and technology transfer. The vocabularies, experiences, methodologies, and metrics of the communities are sufficiently different that cross-disciplinary research is unlikely to happen without sustained funding and an infrastructure to establish a new HRI community.

    Challenges in Developing Applications for Aging Populations

    Elderly individuals can greatly benefit from the use of computer applications, which can assist in monitoring health conditions, staying in contact with friends and family, and even learning new things. However, developing accessible applications for elderly users can be a daunting task for developers. Since the advent of the personal computer, the benefits and challenges of developing applications for older adults have been a hot topic of discussion. In this chapter, the authors discuss the various challenges faced by developers who wish to create applications for elderly computer users, including age-related impairments, generational differences in computer use, and the hardware constraints that mobile devices pose for application developers. Although these challenges are concerning, each can be overcome once properly identified.

    How to Interact with Augmented Reality Head Mounted Devices in Care Work? A Study Comparing Handheld Touch (Hands-on) and Gesture (Hands-free) Interaction

    In this paper, we investigate augmented reality (AR) to support caregivers. We implemented a system called Care Lenses that supported various care tasks on AR head-mounted devices. One question concerning its application was how caregivers could interact with the system while providing care (i.e., while using one or both hands for care tasks). Therefore, we compared two mechanisms for interacting with the Care Lenses: handheld touch (similar to touchpads and touchscreens) and head gestures. We found that head gestures were difficult to apply in practice, but apart from that, the head gesture support was as usable and useful as handheld touch interaction, even though the study participants were much more familiar with handheld touch control. We conclude that head gestures can be a good means of enabling AR support in care, and we provide design considerations to make them more applicable in practice.

    The Effect of a Robot's Head Movements and Their Timing on Human-Robot Interaction

    Master's thesis -- Seoul National University Graduate School: Interdisciplinary Program in Cognitive Science, College of Humanities, February 2023. Advisor: Sowon Hahn. In recent years, robots with artificial intelligence capabilities have become ubiquitous in our daily lives. As intelligent robots interact closely with humans, the social abilities of robots are increasingly important. In particular, nonverbal communication can enhance efficient social interaction between human users and robots, but robots are limited in how they can express such behaviour. In this study, we investigated how minimal head movements of a robot influence human-robot interaction. We designed a new robot with a simple-shaped body and a minimal head-movement mechanism. We conducted an experiment to examine participants' perceptions of the robot's different head movements and their timing. Participants were randomly assigned to one of three movement conditions: head nodding (A), head shaking (B), and head tilting (C). Each movement condition included two timing variables: head movement prior to the utterance and head movement simultaneous with the utterance. For all head movement conditions, participants' perceptions of anthropomorphism, animacy, likeability, and intelligence were higher compared to the non-movement (utterance only) condition. In terms of timing, when the robot performed the head movement prior to the utterance, perceived naturalness was rated higher than for head movement simultaneous with the utterance. The findings demonstrate that head movements of the robot positively affect user perception of the robot, and that head movement prior to the utterance can make human-robot conversation more natural. By implementing head movements and movement timing, simple-shaped robots can have better social interaction with humans.
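
    As a rough illustration of the two timing conditions described above (head movement prior to the utterance versus simultaneous with it), the sketch below sequences a head-movement command and a text-to-speech call either back-to-back or in parallel. The functions move_head and speak are hypothetical stubs, not the control code used in the thesis.

```python
# Illustrative timing sketch: head movement *prior to* vs *simultaneous with* speech.
# move_head() and speak() are hypothetical stubs standing in for the robot's
# servo and TTS interfaces; they are not the thesis's actual control code.
import threading
import time

def move_head(gesture: str, duration: float = 1.0) -> None:
    print(f"[servo] performing {gesture}")
    time.sleep(duration)          # stand-in for the servo trajectory

def speak(text: str) -> None:
    print(f"[tts] {text}")
    time.sleep(1.5)               # stand-in for speech playback

def prior_timing(gesture: str, text: str) -> None:
    # Condition 1: the head movement finishes before the utterance starts.
    move_head(gesture)
    speak(text)

def simultaneous_timing(gesture: str, text: str) -> None:
    # Condition 2: head movement and utterance start at the same time.
    t = threading.Thread(target=move_head, args=(gesture,))
    t.start()
    speak(text)
    t.join()

if __name__ == "__main__":
    prior_timing("nod", "Yes, I can help with that.")
    simultaneous_timing("head_shake", "I'm afraid I don't know.")
```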

    Facing with Collaborative Robots: The Subjective Experience in Senior and Younger Workers

    In the past few years, collaborative robots (i.e., cobots) have been widely adopted within industrial manufacturing. Although robots can support companies and workers in carrying out complex activities and improving productivity, human factors related to cobot operators have not yet been thoroughly investigated. The present study aims to understand the subjective experience of younger and senior workers interacting with an industrial collaborative robot. Results show that workers' acceptance of cobots is high, regardless of age and the control modality used. Interesting differences between senior and younger adults emerged in the evaluations of user experience, usability, and perceived workload, which are detailed and commented on in the last part of the work.

    Emotion Transfer from Frontline Social Robots to Human Customers During Service Encounters: Testing an Artificial Emotional Contagion Model

    This research examines mood transitions during human-robot interactions (HRI) compared with human-human interactions (HHI) during service encounters. Based on emotional contagion and social identity theory, we argue that emotion transmission within HRI (e.g., between a frontline service robot and a human customer) may occur through the customer's imitation of the robot's verbal and bodily expressions, and may be stronger for negative than for positive emotions. The customer's positive attitude and anxiety toward robots will further be examined as contingencies that strengthen or weaken the emotion transfer during the HRI. We have already identified the five most important emotions during service encounters (critical incident study with 131 frontline employees). The corresponding output behavior was programmed into a Nao robot and validated (ratings from 234 students). In the next step, we attempt to manipulate the emotional expressions of a frontline social robot and a customer within an experimental study.
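
    For context on how verbal and bodily expressions are commonly scripted on a Nao robot, the sketch below uses the NAOqi Python SDK's ALAnimatedSpeech service to pair an utterance with a gesture animation. The robot address, the chosen animation, and the utterance are illustrative assumptions, not the validated stimuli from this study.

```python
# Hedged sketch: scripting a combined verbal + bodily expression on a Nao robot
# with the NAOqi Python SDK. The IP address, the animation path, and the text
# are illustrative assumptions, not the stimuli validated in the study.
import qi

session = qi.Session()
session.connect("tcp://192.168.1.10:9559")   # assumed robot address

animated_speech = session.service("ALAnimatedSpeech")

# Annotated text pairs the utterance with a gesture animation, so the verbal
# and bodily expression of a (here: positive) emotion start together.
animated_speech.say(
    "^start(animations/Stand/Gestures/Hey_1) "
    "I am really glad I could help you today! "
    "^wait(animations/Stand/Gestures/Hey_1)"
)
```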