6,953 research outputs found

    λ‘œλ΄‡μ˜ 신체 μ–Έμ–΄κ°€ μ‚¬νšŒμ  νŠΉμ„±κ³Ό 인간 μœ μ‚¬μ„±μ— λ―ΈμΉ˜λŠ” 영ν–₯

    Thesis (Master's) -- Seoul National University Graduate School: Department of Psychology, College of Social Sciences, 2021. 2. Sowon Hahn.
    The present study investigated the role of robots' body language in perceptions of social qualities and human-likeness. In experiment 1, videos of a robot's body language varying in expansiveness were used to evaluate the two aspects. In experiment 2, videos of social interactions containing the body language from experiment 1 were used to further examine the effects of robots' body language on these aspects. Results suggest that a robot conveying open body language is evaluated higher on perceptions of social characteristics and human-likeness than a robot conveying closed body language. These effects were not found in the videos of social interactions (experiment 2), which suggests that other features present in those videos, such as the robot's voice, play significant roles in evaluations of a robot. Nonetheless, the current research provides evidence of the importance of robots' body language in judgments of social characteristics and human-likeness. While measures of social qualities and human-likeness favor robots that convey open body language, post-experiment interviews revealed that participants expect robots to alleviate feelings of loneliness and to empathize with them, which requires more diverse body language in addition to open body language. Thus, robot designers are encouraged to develop robots capable of expressing a wider range of motion. By enabling complex movements, more natural communication between humans and robots becomes possible, which allows humans to consider robots as social partners.

    Affect Recognition in Autism: a single case study on integrating a humanoid robot in a standard therapy.

    Autism Spectrum Disorder (ASD) is a multifaceted developmental disorder that comprises a mixture of social impairments, with deficits in many areas including theory of mind, imitation, and communication. Moreover, people with autism have difficulty in recognising and understanding emotional expressions. We are currently working on integrating a humanoid robot within the standard clinical treatment offered to children with ASD to support the therapists. In this article, using an A-B-A' single case design, we propose a robot-assisted affect recognition training and present the results on the child's progress during the five months of clinical experimentation. In the investigation, we tested the generalization of learning and the long-term maintenance of new skills via the NEPSY-II affect recognition sub-test. The results of this single case study suggest the feasibility and effectiveness of using a humanoid robot to assist with emotion recognition training in children with ASD.

    Acceptability of the transitional wearable companion β€œ+me” in typical children: a pilot study

    This work presents the results of the first experimentation with +me, the first prototype of a Transitional Wearable Companion, run on 15 typically developed (TD) children aged between 8 and 34 months. +me is an interactive device that looks like a teddy bear, can be worn around the neck, has touch sensors, can emit appealing lights and sounds, and has input-output contingencies that can be regulated with a tablet via Bluetooth. The participants were engaged in social play activities involving both the device and an adult experimenter. +me was designed with the objective of exploiting its intrinsic allure as an attractive toy to stimulate social interactions (e.g., eye contact, turn taking, imitation, social smiles), an aspect potentially helpful in the therapy of Autism Spectrum Disorders (ASD) and other Pervasive Developmental Disorders (PDD). The main purpose of this preliminary study is to evaluate the general acceptability of the toy by TD children, observing the elicited behaviors in preparation for future experiments involving children with ASD and other PDD. First observations, based on video recording and scoring, show that +me stimulates good social engagement in TD children, especially in children older than 24 months.
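
    The tablet-regulated input-output contingencies described above can be pictured, very roughly, as a lookup table that maps touch sensors to light and sound responses and that can be rewritten at runtime. The Python sketch below illustrates the idea only; the sensor names, colours, sound files, and update mechanism are hypothetical and are not taken from the +me prototype.

    # Minimal sketch of tablet-configurable input-output contingencies for a
    # wearable toy: each touch sensor is mapped to a light colour and a sound,
    # and the mapping can be updated at runtime (e.g. from a tablet over Bluetooth).
    # Sensor names, colours, and sound files are illustrative placeholders.

    from dataclasses import dataclass

    @dataclass
    class Contingency:
        light_rgb: tuple   # colour shown when the sensor is touched
        sound_file: str    # sound played when the sensor is touched

    # Default mapping; a tablet interface could overwrite entries in this table.
    contingencies = {
        "left_paw":  Contingency((255, 0, 0), "chime.wav"),
        "right_paw": Contingency((0, 0, 255), "giggle.wav"),
        "belly":     Contingency((0, 255, 0), "purr.wav"),
    }

    def on_touch(sensor_id: str) -> None:
        """Trigger the light/sound response associated with a touched sensor."""
        rule = contingencies.get(sensor_id)
        if rule is None:
            return  # unmapped sensor: no response
        print(f"LED -> {rule.light_rgb}, play {rule.sound_file}")

    def update_contingency(sensor_id: str, light_rgb: tuple, sound_file: str) -> None:
        """Apply a new rule received from the tablet interface."""
        contingencies[sensor_id] = Contingency(light_rgb, sound_file)

    if __name__ == "__main__":
        on_touch("belly")
        update_contingency("belly", (255, 255, 0), "laugh.wav")
        on_touch("belly")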

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
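
    To make the fuzzy-logic appraisal idea concrete, the following Python sketch shows a minimal FLAME-style step that maps the desirability of a game event and the importance of the affected goal to fuzzy degrees of joy and distress. The membership functions, rule set, and variable names are illustrative assumptions, not the model used in the play tests.

    # Minimal sketch of a FLAME-style fuzzy appraisal step: map the desirability of a
    # game event and the player's current goal importance to emotion intensities.
    # Membership functions and rules are illustrative, not those of the actual study.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def appraise(desirability, importance):
        """Return fuzzy degrees of 'joy' and 'distress' for one game event.

        desirability: -1 (very bad for the player) .. +1 (very good)
        importance:    0 (irrelevant goal) .. 1 (critical goal)
        """
        good = tri(desirability, 0.0, 1.0, 2.0)    # degree the event is desirable
        bad = tri(desirability, -2.0, -1.0, 0.0)   # degree the event is undesirable
        high = tri(importance, 0.0, 1.0, 2.0)      # degree the affected goal matters

        # Mamdani-style rules with min for AND; rule strengths become intensities.
        joy = min(good, high)
        distress = min(bad, high)
        return {"joy": joy, "distress": distress}

    if __name__ == "__main__":
        # Player picks up a health pack while low on health: desirable, important.
        print(appraise(desirability=0.8, importance=0.9))
        # Player takes minor fall damage: slightly undesirable, low importance.
        print(appraise(desirability=-0.3, importance=0.2))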

    Are future psychologists willing to accept and use a humanoid robot in their practice? Italian and English students' perspective.

    Despite general scepticism from care professionals, social robotics research is providing evidence of successful application in education and rehabilitation in clinical psychology practice. In this article, we investigate cultural influences on English and Italian psychology students' perception of the usefulness of, and intention to use, a robot as an instrument for future clinical practice and, secondly, the modality of presentation of the robot, comparing oral versus video presentation. To this end, we surveyed 158 Italian and British-English psychology students after an interactive demonstration using a humanoid robot to evaluate the social robot's acceptance and use. Italian students were positive, while English students were negative, regarding the perceived usefulness of and intention to use the robot in psychological practice in the near future. However, most English and Italian respondents felt they did not have the necessary abilities to make good use of the robot. We concluded that it is necessary to provide psychology students with further knowledge and practical skills regarding social robotics, which could facilitate the adoption and use of this technology in clinical settings.

    Design of a Huggable Social Robot with Affective Expressions Using Projected Images

    We introduce Pepita, a caricatured huggable robot capable of sensing and conveying affective expressions by means of tangible gesture recognition and projected avatars. This study covers the design criteria, implementation, and performance evaluation of the different characteristics of the form and function of this robot. The evaluation involves: (1) an exploratory study of the different features of the device, (2) design and performance evaluation of sensors for affective interaction employing touch, and (3) design and implementation of affective feedback using projected avatars. Results showed that the hug detection worked well for the intended application and that the affective expressions made with projected avatars were appropriate for this robot. Questionnaires analyzing users' perception provide insights to guide future designs of similar interfaces.
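
    As a rough illustration of the tangible gesture recognition side, the Python sketch below detects a sustained, two-sided squeeze from normalized pressure readings. The sensor layout, thresholds, and timing constants are hypothetical and are not Pepita's actual parameters.

    # Minimal sketch of hug detection from front/back touch-pressure sensors.
    # A hug is treated as a sustained squeeze pressing sensors on both sides.

    from collections import deque

    PRESSURE_THRESHOLD = 0.4   # normalized pressure above which a sensor counts as "pressed"
    MIN_SENSORS = 3            # a hug presses several sensors at once
    MIN_DURATION = 5           # consecutive frames the pattern must persist

    class HugDetector:
        def __init__(self):
            self.history = deque(maxlen=MIN_DURATION)

        def update(self, front_sensors, back_sensors) -> bool:
            """front_sensors/back_sensors: lists of normalized pressure readings (0..1).

            Returns True when a sustained, two-sided squeeze is detected.
            """
            pressed_front = sum(p > PRESSURE_THRESHOLD for p in front_sensors)
            pressed_back = sum(p > PRESSURE_THRESHOLD for p in back_sensors)
            frame_is_hug = (pressed_front + pressed_back >= MIN_SENSORS
                            and pressed_front > 0 and pressed_back > 0)
            self.history.append(frame_is_hug)
            return len(self.history) == MIN_DURATION and all(self.history)

    if __name__ == "__main__":
        detector = HugDetector()
        for _ in range(6):  # simulate a sustained squeeze
            if detector.update([0.7, 0.6, 0.1], [0.5, 0.2]):
                print("hug detected -> show affectionate projected avatar")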

    Developing an Affect-Aware Rear-Projected Robotic Agent

    Social (or sociable) robots are designed to interact with people in a natural and interpersonal manner. They are becoming an integrated part of our daily lives and have achieved positive outcomes in several applications such as education, health care, quality of life, and entertainment. Despite significant progress towards the development of realistic social robotic agents, a number of problems remain to be solved. First, current social robots either lack the ability to engage in deep social interaction with humans, or they are very expensive to build and maintain. Second, current social robots have yet to reach the full emotional and social capabilities necessary for rich and robust interaction with human beings.
    To address these problems, this dissertation presents the development of a low-cost, flexible, affect-aware, rear-projected robotic agent (called ExpressionBot) that is designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The developed robotic platform uses state-of-the-art character animation technologies to create an animated human face (aka avatar) that is capable of showing facial expressions, realistic eye movement, and accurate visual speech, and then projects this avatar onto a face-shaped translucent mask. The mask and the projector are rigged onto a neck mechanism that can move like a human head. Since an animation is projected onto a mask, the robotic face is a highly flexible research tool, mechanically simple, and low-cost to design, build, and maintain compared with mechatronic and android faces. The results of our comprehensive Human-Robot Interaction (HRI) studies illustrate the benefits and value of the proposed rear-projected robotic platform over a virtual agent with the same animation displayed on a 2D computer screen. The results indicate that ExpressionBot is well accepted by users, with some advantages in expressing facial expressions more accurately and perceiving mutual eye-gaze contact.
    To improve the social capabilities of the robot and create an expressive and empathic (affect-aware) social agent capable of interpreting users' emotional facial expressions, we developed a new Deep Neural Network (DNN) architecture for Facial Expression Recognition (FER). The proposed DNN was initially trained on seven well-known publicly available databases and achieved results significantly better than, or comparable to, those of traditional convolutional neural networks and other state-of-the-art methods in both accuracy and learning time. Since the performance of an automated FER system highly depends on its training data, and the eventual goal of the proposed robotic platform is to interact with users in an uncontrolled environment, a database of facial expressions in the wild (called AffectNet) was created by querying emotion-related keywords from different search engines. AffectNet contains more than 1M images with faces and 440,000 manually annotated images with facial expressions, valence, and arousal. Two DNNs were trained on AffectNet to classify the facial expression images and to predict the values of valence and arousal. Various evaluation metrics show that our deep neural network approaches trained on AffectNet can perform better than conventional machine learning methods and available off-the-shelf FER systems.
    We then integrated this automated FER system into the spoken dialog of our robotic platform to extend and enrich the capabilities of ExpressionBot beyond spoken dialog and create an affect-aware robotic agent that can measure and infer users' affect and cognition. Three social/interaction aspects (task engagement, being empathic, and likability of the robot) were measured in an experiment with the affect-aware robotic agent. The results indicate that users rated our affect-aware agent as empathic and likable as a robot in which the user's affect is recognized by a human (Wizard of Oz, WoZ). In summary, this dissertation presents the development and HRI studies of a perceptive, expressive, conversational, rear-projected, life-like robotic agent (aka ExpressionBot or Ryan) that models natural face-to-face communication between humans and an empathic agent. The results of our in-depth human-robot interaction studies show that this robotic agent can serve as a model for creating the next generation of empathic social robots.
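
    The two-headed facial expression network described above (discrete expression classification plus valence/arousal regression, as in AffectNet-style labels) can be sketched in PyTorch as below. The architecture shown is a deliberately small illustration, not the dissertation's actual DNN.

    # Illustrative sketch (not the dissertation's architecture) of a facial-expression
    # network with two heads: one classifying discrete expressions and one regressing
    # valence and arousal, as one might train on AffectNet-style annotations.

    import torch
    import torch.nn as nn

    class FERNet(nn.Module):
        def __init__(self, num_expressions: int = 8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, num_expressions)  # discrete expression logits
            self.regressor = nn.Linear(64, 2)                 # valence, arousal in [-1, 1]

        def forward(self, x):
            h = self.features(x).flatten(1)
            return self.classifier(h), torch.tanh(self.regressor(h))

    if __name__ == "__main__":
        model = FERNet()
        images = torch.randn(4, 3, 224, 224)   # a batch of face crops
        logits, va = model(images)
        # Typical joint loss: cross-entropy on expressions + MSE on valence/arousal.
        targets = torch.randint(0, 8, (4,))
        va_targets = torch.zeros(4, 2)
        loss = (nn.functional.cross_entropy(logits, targets)
                + nn.functional.mse_loss(va, va_targets))
        loss.backward()
        print(logits.shape, va.shape, float(loss))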

    Would You Trust a (Faulty) Robot? : Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust

    How do mistakes made by a robot affect its trustworthiness and acceptance in human-robot collaboration? We investigate how the perception of erroneous robot behavior may influence human interaction choices and the willingness to cooperate with the robot by following a number of its unusual requests. For this purpose, we conducted an experiment in which participants interacted with a home companion robot in one of two experimental conditions: (1) the correct mode or (2) the faulty mode. Our findings reveal that, while significantly affecting subjective perceptions of the robot and assessments of its reliability and trustworthiness, the robot's performance does not seem to substantially influence participants' decisions to (not) comply with its requests. However, our results further suggest that the nature of the task requested by the robot, e.g. whether its effects are revocable as opposed to irrevocable, has a significant impact on participants' willingness to follow its instructions.

    Understanding Large-Language Model (LLM)-powered Human-Robot Interaction

    Large-language models (LLMs) hold significant promise in improving human-robot interaction, offering advanced conversational skills and versatility in managing diverse, open-ended user requests in various tasks and domains. Despite the potential to transform human-robot interaction, very little is known about the distinctive design requirements for utilizing LLMs in robots, which may differ from text and voice interaction and vary by task and context. To better understand these requirements, we conducted a user study (n = 32) comparing an LLM-powered social robot against text- and voice-based agents, analyzing task-based requirements in conversational tasks, including choose, generate, execute, and negotiate. Our findings show that LLM-powered robots elevate expectations for sophisticated non-verbal cues and excel in connection-building and deliberation, but fall short in logical communication and may induce anxiety. We provide design implications both for robots integrating LLMs and for fine-tuning LLMs for use with robots.
    Comment: 10 pages, 4 figures. Callie Y. Kim and Christine P. Lee contributed equally to the work. To be published in Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24), March 11-14, 2024, Boulder, CO, US
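
    One turn of such an LLM-powered robot interaction can be pictured as follows: the LLM is prompted to return both a spoken reply and a non-verbal cue, which the robot then renders. In the Python sketch below, query_llm, the cue vocabulary, and the JSON format are hypothetical placeholders, not the system evaluated in the study.

    # Minimal sketch of one turn of an LLM-powered robot dialog loop: the model is
    # asked for a spoken reply plus a non-verbal cue (gesture/gaze) for the robot to render.
    # query_llm is a placeholder stub, not a real LLM client.

    import json

    CUES = {"nod", "tilt_head", "lean_forward", "neutral"}

    def query_llm(prompt: str) -> str:
        """Placeholder for a call to a large language model; returns a JSON string."""
        return json.dumps({"speech": "Sure, let's plan that together.", "cue": "lean_forward"})

    def robot_turn(user_utterance: str) -> None:
        prompt = (
            "You are a social robot. Reply to the user and pick one non-verbal cue "
            f"from {sorted(CUES)}. Respond as JSON with keys 'speech' and 'cue'.\n"
            f"User: {user_utterance}"
        )
        reply = json.loads(query_llm(prompt))
        cue = reply.get("cue") if reply.get("cue") in CUES else "neutral"  # guard against bad output
        print(f"[gesture] {cue}")
        print(f"[speech]  {reply.get('speech', '')}")

    if __name__ == "__main__":
        robot_turn("Can you help me decide what to cook tonight?")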