6,953 research outputs found
The Effects of Robots' Body Language on Perceptions of Social Characteristics and Human-likeness
Thesis (Master's) -- Seoul National University Graduate School: Department of Psychology, College of Social Sciences, February 2021. Sowon Hahn.
The present study investigated the role of robots' body language in perceptions of robots' social qualities and human-likeness. In Experiment 1, videos of a robot's body language varying in expansiveness were used to evaluate the two aspects. In Experiment 2, videos of social interactions containing the body language from Experiment 1 were used to further examine the effects of robots' body language on these aspects. Results suggest that a robot conveying open body language is evaluated higher on perceptions of social characteristics and human-likeness than a robot with closed body language. These effects were not found in the videos of social interactions (Experiment 2), which suggests that other features play significant roles in evaluations of a robot. Nonetheless, the current research provides evidence of the importance of robots' body language in judgments of social characteristics and human-likeness. While measures of social qualities and human-likeness favor robots that convey open body language, post-experiment interviews revealed that participants expect robots to alleviate feelings of loneliness and to empathize with them, which requires more diverse body language in addition to open body language. Thus, robot designers are encouraged to develop robots capable of expressing a wider range of motion. By enabling complex movements, more natural communication between humans and robots becomes possible, allowing humans to consider robots as social partners.
Chapter 1. Introduction
1. Motivation
2. Theoretical Background and Previous Research
3. Purpose of Study
Chapter 2. Experiment 1
1. Objective and Hypotheses
2. Methods
3. Results
4. Discussion
Chapter 3. Experiment 2
1. Objective and Hypotheses
2. Methods
3. Results
4. Discussion
Chapter 4. Conclusion
Chapter 5. General Discussion
References
Appendix
Abstract in Korean
Affect Recognition in Autism: a single case study on integrating a humanoid robot in a standard therapy.
Autism Spectrum Disorder (ASD) is a multifaceted developmental disorder that comprises a mixture of social impairments, with deficits in many areas including theory of mind, imitation, and communication. Moreover, people with autism have difficulty recognising and understanding emotional expressions. We are currently working on integrating a humanoid robot within the standard clinical treatment offered to children with ASD to support the therapists. In this article, using an A-B-A' single case design, we propose a robot-assisted affect recognition training and present the results on the child's progress during the five months of clinical experimentation. In the investigation, we tested the generalization of learning and the long-term maintenance of new skills via the NEPSY-II affect recognition sub-test. The results of this single case study suggest the feasibility and effectiveness of using a humanoid robot to assist with emotion recognition training in children with ASD.
Acceptability of the transitional wearable companion "+me" in typical children: a pilot study
This work presents the results of the first experimentation of +me, the first
prototype of a Transitional Wearable Companion, run on 15 typically developed (TD)
children aged between 8 and 34 months. +me is an interactive device that looks
like a teddy bear, can be worn around the neck, has touch sensors, can emit
appealing lights and sounds, and has input-output contingencies that can be
regulated with a tablet via Bluetooth.
The participants were engaged in social play activities involving both the device and
an adult experimenter. +me was designed with the objective of exploiting its intrinsic
allure as an attractive toy to stimulate social interactions (e.g., eye contact, turn taking,
imitation, social smiles), an aspect potentially helpful in the therapy of Autism Spectrum
Disorders (ASD) and other Pervasive Developmental Disorders (PDD). The main purpose
of this preliminary study is to evaluate the general acceptability of the toy by TD children,
observing the elicited behaviors in preparation for future experiments involving children
with ASD and other PDD. First observations, based on video recording and scoring,
show that +me stimulates good social engagement in TD children, especially when
they are older than 24 months.
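To illustrate what such regulable input-output contingencies might look like in software, here is a minimal sketch. The sensor names, feedback values, and update function are hypothetical, invented for illustration; the abstract does not describe the actual +me protocol:

```python
# Hypothetical sketch of adjustable input-output contingencies of the kind
# the +me abstract describes (touch input -> light/sound output).
from dataclasses import dataclass

@dataclass
class Contingency:
    sound: str   # audio clip to play
    color: str   # LED color to show

# Default mapping from touch sensors to feedback, editable at runtime
# (in the real device, edits would arrive from the tablet over Bluetooth).
contingencies = {
    "paw_left":  Contingency(sound="chime.wav",  color="blue"),
    "paw_right": Contingency(sound="giggle.wav", color="yellow"),
    "belly":     Contingency(sound="purr.wav",   color="green"),
}

def on_touch(sensor: str) -> None:
    """React to a touch event according to the current contingency table."""
    c = contingencies.get(sensor)
    if c is not None:
        print(f"play {c.sound}, light up {c.color}")

def update_contingency(sensor: str, sound: str, color: str) -> None:
    """Apply a new rule, e.g. one received from the experimenter's tablet."""
    contingencies[sensor] = Contingency(sound=sound, color=color)

on_touch("belly")                        # play purr.wav, light up green
update_contingency("belly", "bell.wav", "red")
on_touch("belly")                        # play bell.wav, light up red
```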
Virtual Reality Games for Motor Rehabilitation
This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to provide a tailored environment for each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
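As background for the approach described above, here is a minimal sketch of how a FLAME-style fuzzy appraisal can map in-game events to emotion intensities. The membership functions, rule pairings, and emotion labels are illustrative assumptions, not the rule base actually used in the paper:

```python
# Minimal sketch of FLAME-style fuzzy emotion estimation.
# Membership functions and rules below are hypothetical.

def triangular(x, a, b, c):
    """Triangular fuzzy membership with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_desirability(d):
    """d in [-1, 1]: how desirable the latest game event is for the player."""
    return {
        "undesirable": triangular(d, -1.5, -1.0, 0.0),
        "neutral":     triangular(d, -0.5,  0.0, 0.5),
        "desirable":   triangular(d,  0.0,  1.0, 1.5),
    }

def estimate_emotion(desirability, expectation):
    """Mamdani-style min-rules mapping an OCC-like appraisal to emotion
    intensities; expectation in [0, 1] is how anticipated the event was."""
    d = fuzzify_desirability(desirability)
    return {  # illustrative pairings only
        "joy":     min(d["desirable"],   expectation),
        "relief":  min(d["desirable"],   1.0 - expectation),
        "sadness": min(d["undesirable"], expectation),
        "fear":    min(d["undesirable"], 1.0 - expectation),
    }

print(estimate_emotion(desirability=0.8, expectation=0.3))
# approximately {'joy': 0.3, 'relief': 0.7, 'sadness': 0.0, 'fear': 0.0}
```

In a game setting, desirability and expectation would be derived from observable events (damage taken, objectives completed) rather than physiological sensors, which is what makes a software-only estimate possible.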
Are future psychologists willing to accept and use a humanoid robot in their practice? Italian and English students' perspective.
Despite general scepticism from care professionals, social robotics research is providing evidence of successful application in education and rehabilitation in clinical psychology practice.
In this article, we investigate the cultural influences of English and Italian psychology students on the perceived usefulness of and intention to use a robot as an instrument for future clinical practice and, secondly, the modality of presentation of the robot, comparing oral versus video presentation. To this end, we surveyed 158 Italian and British psychology students after an interactive demonstration using a humanoid robot to evaluate the social robot's acceptance and use. The Italian students were positive, while the English students were negative, about the perceived usefulness of and intention to use the robot in psychological practice in the near future. However, most English and Italian respondents felt they did not have the necessary abilities to make good use of the robot. We concluded that it is necessary to provide psychology students with further knowledge and practical skills regarding social robotics, which could facilitate the adoption and use of this technology in clinical settings.
Design of a Huggable Social Robot with Affective Expressions Using Projected Images
We introduce Pepita, a caricatured huggable robot capable of sensing and conveying affective expressions by means of tangible gesture recognition and projected avatars. This study covers the design criteria, implementation, and performance evaluation of the different characteristics of the form and function of this robot. The evaluation involves: (1) an exploratory study of the different features of the device, (2) design and performance evaluation of sensors for affective interaction employing touch, and (3) design and implementation of affective feedback using projected avatars. Results showed that the hug detection worked well for the intended application and that the affective expressions made with projected avatars were appropriate for this robot. The questionnaires analyzing users' perception provide insights to guide future designs of similar interfaces.
Developing an Affect-Aware Rear-Projected Robotic Agent
Social (or sociable) robots are designed to interact with people in a natural and interpersonal manner. They are becoming an integrated part of our daily lives and have achieved positive outcomes in several applications such as education, health care, quality of life, and entertainment. Despite significant progress towards the development of realistic social robotic agents, a number of problems remain to be solved. First, current social robots either lack the ability to have deep social interaction with humans, or they are very expensive to build and maintain. Second, current social robots have yet to reach the full emotional and social capabilities necessary for rich and robust interaction with human beings. To address these problems, this dissertation presents the development of a low-cost, flexible, affect-aware, rear-projected robotic agent (called ExpressionBot) that is designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication.
The developed robotic platform uses state-of-the-art character animation technologies to create an animated human face (aka avatar) that is capable of showing facial expressions, realistic eye movement, and accurate visual speech, and then projects this avatar onto a face-shaped translucent mask. The mask and the projector are then rigged onto a neck mechanism that can move like a human head. Since an animation is projected onto a mask, the robotic face is a highly flexible research tool, mechanically simple, and low-cost to design, build, and maintain compared with mechatronic and android faces. The results of our comprehensive Human-Robot Interaction (HRI) studies illustrate the benefits and value of the proposed rear-projected robotic platform over a virtual agent with the same animation displayed on a 2D computer screen. The results indicate that ExpressionBot is well accepted by users, with some advantages in expressing facial expressions more accurately and perceiving mutual eye-gaze contact.
To improve the social capabilities of the robot and create an expressive and empathic (affect-aware) social agent capable of interpreting users' emotional facial expressions, we developed a new Deep Neural Network (DNN) architecture for Facial Expression Recognition (FER). The proposed DNN was initially trained on seven well-known publicly available databases and performed significantly better than, or comparably to, traditional convolutional neural networks and other state-of-the-art methods in both accuracy and learning time. Since the performance of an automated FER system highly depends on its training data, and the eventual goal of the proposed robotic platform is to interact with users in an uncontrolled environment, a database of facial expressions in the wild (called AffectNet) was created by querying emotion-related keywords from different search engines. AffectNet contains more than 1M images with faces and 440,000 manually annotated images with facial expressions, valence, and arousal. Two DNNs were trained on AffectNet to classify the facial expression images and to predict the values of valence and arousal. Various evaluation metrics show that our deep neural network approaches trained on AffectNet perform better than conventional machine learning methods and available off-the-shelf FER systems.
We then integrated this automated FER system into the spoken dialog of our robotic platform to extend and enrich the capabilities of ExpressionBot beyond spoken dialog and create an affect-aware robotic agent that can measure and infer users' affect and cognition. Three social/interaction aspects (task engagement, being empathic, and likability of the robot) were measured in an experiment with the affect-aware robotic agent. The results indicate that users rated our affect-aware agent as empathic and likable as a Wizard-of-Oz (WoZ) robot in which the user's affect is recognized by a human.
In summary, this dissertation presents the development and HRI studies of a perceptive, expressive, conversational, rear-projected, life-like robotic agent (aka ExpressionBot or Ryan) that models natural face-to-face communication between a human and an empathic agent. The results of our in-depth human-robot interaction studies show that this robotic agent can serve as a model for creating the next generation of empathic social robots.
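To make the FER setup described above concrete, here is a minimal PyTorch sketch of the two tasks: categorical expression classification and valence/arousal regression. The tiny architecture, input size, and single shared backbone are simplifying assumptions; the dissertation trained two separate, far larger DNNs on AffectNet:

```python
# Minimal sketch of the two AffectNet FER tasks: 7-class expression
# classification plus valence/arousal regression (hypothetical network).
import torch
import torch.nn as nn

class TinyFERNet(nn.Module):
    def __init__(self, num_expressions=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.expression_head = nn.Linear(64, num_expressions)  # categorical labels
        self.va_head = nn.Linear(64, 2)                        # valence, arousal

    def forward(self, x):
        h = self.features(x)
        # tanh keeps valence/arousal predictions in [-1, 1]
        return self.expression_head(h), torch.tanh(self.va_head(h))

model = TinyFERNet()
faces = torch.randn(8, 3, 96, 96)          # a batch of face crops
logits, va = model(faces)
# Dummy targets stand in for AffectNet's annotations.
loss = nn.functional.cross_entropy(logits, torch.randint(0, 7, (8,))) + \
       nn.functional.mse_loss(va, torch.empty(8, 2).uniform_(-1.0, 1.0))
loss.backward()
```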
Would You Trust a (Faulty) Robot? : Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust
How do mistakes made by a robot affect its trustworthiness and acceptance in human-robot collaboration? We investigate how the perception of erroneous robot behavior may influence human interaction choices and the willingness to cooperate with the robot by following a number of its unusual requests. For this purpose, we conducted an experiment in which participants interacted with a home companion robot in one of two experimental conditions: (1) the correct mode or (2) the faulty mode. Our findings reveal that, while significantly affecting subjective perceptions of the robot and assessments of its reliability and trustworthiness, the robot's performance does not seem to substantially influence participants' decisions to (not) comply with its requests. However, our results further suggest that the nature of the task requested by the robot, e.g. whether its effects are revocable as opposed to irrevocable, has a significant impact on participants' willingness to follow its instructions.
Understanding Large-Language Model (LLM)-powered Human-Robot Interaction
Large-language models (LLMs) hold significant promise in improving
human-robot interaction, offering advanced conversational skills and
versatility in managing diverse, open-ended user requests in various tasks and
domains. Despite the potential to transform human-robot interaction, very
little is known about the distinctive design requirements for utilizing LLMs in
robots, which may differ from text and voice interaction and vary by task and
context. To better understand these requirements, we conducted a user study (n
= 32) comparing an LLM-powered social robot against text- and voice-based
agents, analyzing task-based requirements in conversational tasks, including
choose, generate, execute, and negotiate. Our findings show that LLM-powered
robots elevate expectations for sophisticated non-verbal cues and excel in
connection-building and deliberation, but fall short in logical communication
and may induce anxiety. We provide design implications both for robots
integrating LLMs and for fine-tuning LLMs for use with robots.

Comment: 10 pages, 4 figures. Callie Y. Kim and Christine P. Lee contributed
equally to the work. To be published in Proceedings of the 2024 ACM/IEEE
International Conference on Human-Robot Interaction (HRI '24), March 11-14,
2024, Boulder, CO, US.
- …