19 research outputs found

    A Model for Synthesizing a Combined Verbal and Nonverbal Behavior Based on Personality Traits in Human-Robot Interaction

    Get PDF
In Human-Robot Interaction (HRI) scenarios, an intelligent robot should be able to synthesize behavior adapted to the human's profile (i.e., personality). Recent studies have discussed the effect of personality traits on human verbal and nonverbal behavior: the dynamic characteristics of the gestures and postures generated during nonverbal communication can differ according to personality traits, which can similarly influence the verbal content of speech. This research maps human verbal behavior to a corresponding combined verbal and nonverbal robot behavior based on the extraversion-introversion personality dimension. We explore human-robot personality matching and the similarity attraction principle, as well as how interaction is affected by the adapted combined robot behavior (expressed through speech and gestures) compared with an adapted speech-only robot behavior. Experiments with the humanoid NAO robot are reported.
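
    As a rough illustration of the kind of mapping the abstract describes, the sketch below interpolates a few behavior parameters along the extraversion-introversion dimension. It is a minimal sketch under our own assumptions: the parameter names, value ranges, and linear interpolation are illustrative placeholders, not the paper's model.

```python
# Illustrative sketch (not the paper's implementation): map an
# extraversion score in [0, 1] to combined verbal and nonverbal
# behavior parameters, following the general intuition that
# extraverted behavior is faster, more varied, and more expansive.
from dataclasses import dataclass

@dataclass
class RobotBehaviorParams:
    speech_rate: float        # words per minute
    pitch_variation: float    # relative pitch range, 0..1
    gesture_amplitude: float  # fraction of maximum arm extension
    gesture_frequency: float  # gestures per utterance

def behavior_from_extraversion(extraversion: float) -> RobotBehaviorParams:
    """Linearly interpolate between an introverted and an
    extraverted profile (hypothetical endpoint values)."""
    e = max(0.0, min(1.0, extraversion))
    return RobotBehaviorParams(
        speech_rate=140 + 60 * e,       # 140 (introvert) .. 200 (extravert)
        pitch_variation=0.2 + 0.6 * e,
        gesture_amplitude=0.3 + 0.6 * e,
        gesture_frequency=0.5 + 1.5 * e,
    )

# Similarity attraction: match the robot's profile to the user's.
user_extraversion = 0.8
print(behavior_from_extraversion(user_extraversion))
```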

    Learning robot policies using a high-level abstraction persona-behaviour simulator

    Get PDF
Collecting data for training learning agents in Human-Robot Interaction can be a hard task. This is especially true when the target users are older adults with dementia, since data collection usually requires hours of interaction and places a considerable workload on the user. This paper imports the Personas technique from user-centered design into HRI to create fictional patient profiles. We propose a Persona-Behaviour Simulator tool that provides, at a high level of abstraction, the user's actions during an HRI task, and we apply it to cognitive training exercises for older adults with dementia. It consists of a Persona Definition that characterizes a patient along four dimensions and a Task Engine that provides information about task complexity. We build a simulated environment in which the high-level user actions are provided by the simulator and the robot's initial policy is learned with a Q-learning algorithm. The results show that the simulator provides a reasonable initial policy for a defined Persona profile. Moreover, the learned robot assistance proved robust to potential changes in the user's behaviour. In this way, we can speed up the fine-tuning of the rough policy during real interactions to tailor the assistance to the given user. We believe the presented approach can easily be extended to other types of HRI tasks, for example whenever input data are required to train a learning algorithm but data collection is very expensive or unfeasible. We advocate that simulation is a convenient tool in these cases.
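
    The training loop might look roughly as follows. This is a bandit-style, one-step simplification of the Q-learning setup under our own assumptions: the state (task difficulty), the three assistance actions, the reward shape, and the persona response model are all hypothetical placeholders, not the authors' simulator.

```python
# Sketch: learn an initial assistance policy against a simulated persona.
import random
from collections import defaultdict

ASSISTANCE_LEVELS = ["no_help", "verbal_hint", "full_demo"]  # robot actions

def simulated_persona_step(difficulty, assistance):
    """Hypothetical persona model: probability of a correct user action
    rises with assistance and falls with task difficulty."""
    p_correct = max(0.05, min(0.95, 0.8 - 0.15 * difficulty
                              + 0.2 * ASSISTANCE_LEVELS.index(assistance)))
    return random.random() < p_correct

q = defaultdict(float)   # q[(difficulty, action)] -> estimated value
alpha, epsilon = 0.1, 0.2

for episode in range(5000):
    difficulty = random.randint(0, 3)  # stand-in for the Task Engine output
    if random.random() < epsilon:      # epsilon-greedy exploration
        action = random.choice(ASSISTANCE_LEVELS)
    else:
        action = max(ASSISTANCE_LEVELS, key=lambda a: q[(difficulty, a)])
    correct = simulated_persona_step(difficulty, action)
    # Reward success, penalize over-assistance; one-step update
    # (full Q-learning would add a discounted successor-state term).
    reward = (1.0 if correct else -1.0) - 0.3 * ASSISTANCE_LEVELS.index(action)
    q[(difficulty, action)] += alpha * (reward - q[(difficulty, action)])

for d in range(4):
    best = max(ASSISTANCE_LEVELS, key=lambda a: q[(d, a)])
    print(f"difficulty {d}: prefer {best}")
```

    The resulting table would serve only as the rough initial policy; per the abstract, it is then fine-tuned during real interactions with the user.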

    Multimodal Adapted Robot Behavior Synthesis within a Narrative Human-Robot Interaction

    Get PDF
In human-human interaction, three modalities of communication (verbal, nonverbal, and paraverbal) are naturally coordinated so as to enhance the meaning of the conveyed message. In this paper, we create a similar coordination between these modalities so that the robot behaves as naturally as possible. The proposed system uses a group of videos to elicit specific target emotions in a human user, after which interactive narratives begin (i.e., interactive discussions between the participant and the robot around each video's content). During each interaction experiment, the expressive humanoid ALICE robot engages the participant and generates a multimodal behavior adapted to the emotional content of the projected video, using speech, head-arm metaphoric gestures, and/or facial expressions. The robot's interactive speech is synthesized using MaryTTS (a text-to-speech toolkit), which is used in parallel to generate adapted head-arm gestures [1]. This synthesized multimodal robot behavior is evaluated by the interacting human at the end of each emotion-eliciting experiment. The obtained results validate the positive effect of the multimodality of the generated robot behavior on the interaction.
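
    One plausible way to run speech and gesture in parallel is to schedule gestures against word-level timings from the TTS engine. The sketch below is a toy illustration under our own assumptions: `fake_tts_alignments`, the gesture trigger table, and the print-based playback are placeholders, not the MaryTTS or ALICE APIs.

```python
# Sketch: fire gestures at the onsets of trigger words while "speech" plays.
import threading
import time

def fake_tts_alignments(text, words_per_second=2.5):
    """Stand-in for TTS word alignment: evenly spaced word onsets."""
    return [(w, i / words_per_second) for i, w in enumerate(text.split())]

GESTURE_TRIGGERS = {"happy": "open_arms", "sad": "head_down"}  # emotion-keyed

def play_gesture(name, t0):
    print(f"[gesture] {name} at t={time.monotonic() - t0:.2f}s")

text = "I felt happy watching that video"
alignments = fake_tts_alignments(text)
t0 = time.monotonic()

timers = []
for word, onset in alignments:
    gesture = GESTURE_TRIGGERS.get(word.lower().strip(".,"))
    if gesture:  # schedule the gesture to start with its word
        timer = threading.Timer(onset, play_gesture, args=(gesture, t0))
        timers.append(timer)
        timer.start()

for word, onset in alignments:  # stand-in for audio playback
    time.sleep(max(0.0, onset - (time.monotonic() - t0)))
    print(f"[speech] {word}")
for timer in timers:
    timer.join()
```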

    Modification of Gesture-Determined-Dynamic Function with Consideration of Margins for Motion Planning of Humanoid Robots

    Full text link
The gesture-determined-dynamic function (GDDF) offers an effective way to handle control problems of humanoid robots. Specifically, the GDDF is used to constrain the movements of a humanoid robot's dual arms and to steer specific gestures for demanding tasks under certain conditions. However, the scheme has a deficiency: through experiments, we found that the joints of the dual arms, which can be regarded as redundant manipulators, could slightly exceed their limits at the joint-angle level. The performance depends directly on the GDDF parameters designed beforehand, which leaves the method with little adaptability in practical applications. In this paper, a modified GDDF scheme with consideration of margins (MGDDF) is proposed. The MGDDF scheme is based on the quadratic programming (QP) framework, which is widely applied to redundancy-resolution problems of robot arms. Moreover, three margins are introduced in the proposed MGDDF scheme to avoid joint limits. With these margins, the manipulator joints of the humanoid robot no longer exceed their limits, and the potential damage that exceeding them might cause is avoided entirely. Computer simulations conducted in MATLAB further verify the feasibility and superiority of the proposed MGDDF scheme.
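
    For context, a generic velocity-level redundancy-resolution QP with a joint-limit margin might look as follows. This is a sketch reconstructed from the abstract, not the paper's exact formulation; the weight matrix W, Jacobian J, task velocity, joint bounds, and margin delta are our notation.

```latex
\begin{aligned}
\min_{\dot{\theta}} \quad & \tfrac{1}{2}\,\dot{\theta}^{\top} W \dot{\theta} \\
\text{subject to} \quad & J(\theta)\,\dot{\theta} = \dot{r}, \\
& \dot{\theta}^{-} \le \dot{\theta} \le \dot{\theta}^{+}, \\
& \theta^{-} + \delta \;\le\; \theta \;\le\; \theta^{+} - \delta,
\end{aligned}
```

    Here W is positive definite, J(θ) is the manipulator Jacobian, ṙ is the desired end-effector velocity, and δ > 0 shrinks the admissible joint range so the solver steers joints away from their hard limits [θ⁻, θ⁺] before they are reached, matching the abstract's claim that limits are never exceeded.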

    Efficiency of speech and iconic gesture integration for robotic and human communicators - a direct comparison

    Get PDF
Co-verbal gestures are an important part of human communication, improving its efficiency for information conveyance. A key component of this improvement is the observer's ability to integrate information from the two communication channels, speech and gesture. Whether such integration also occurs when the multimodal communication is produced by a humanoid robot, and whether it is as efficient as for a human communicator, is an open question. Here, we present an experiment which, using a fully within-subjects design, shows that for a range of iconic gestures, speech and gesture integration occurs with similar efficiency for human and robot communicators. The gestures for this study were produced on an Aldebaran Robotics NAO robot platform with a Kinect-based tele-operation system. We also show that our system can produce a range of iconic gestures that are understood by participants in unimodal (gesture-only) communication, as well as being efficiently integrated with speech. Hence, we demonstrate the utility of iconic gestures for robotic communicators.

    An Online Fuzzy-Based Approach for Human Emotions Detection: An Overview on the Human Cognitive Model of Understanding and Generating Multimodal Actions

    Get PDF
An intelligent robot needs to be able to understand human emotions, and to understand and generate actions through cognitive systems that operate similarly to human cognition. In this chapter, we focus mainly on developing an online incremental learning system for emotions using the Takagi-Sugeno (TS) fuzzy model. Additionally, we present a general overview of understanding and generating multimodal actions from the cognitive point of view. The main objective of this system is to detect whether an observed emotion constitutes a new emotion cluster not learned before, requiring a new corresponding multimodal action to be generated, or whether it belongs to an existing cluster and can be attributed to one of the actions already in memory.
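
    The core decision (join an existing cluster or found a new one) can be sketched with a generic distance-threshold rule. This is our simplification under stated assumptions: the fixed radius, running-mean update, and 2-D feature vectors are placeholders, not the chapter's Takagi-Sugeno update.

```python
# Sketch: incremental emotion clustering; a new cluster implies
# generating a new multimodal action, a match reuses a stored one.
import math

class IncrementalEmotionClusters:
    def __init__(self, radius=1.0):
        self.radius = radius  # max distance to join an existing cluster
        self.centers = []     # one center per learned emotion
        self.counts = []

    def observe(self, sample):
        """Return (cluster_index, is_new) for a feature vector."""
        if self.centers:
            dists = [math.dist(sample, c) for c in self.centers]
            i = min(range(len(dists)), key=dists.__getitem__)
            if dists[i] <= self.radius:
                # update the running mean of the matched cluster
                n = self.counts[i] + 1
                self.centers[i] = [m + (x - m) / n
                                   for m, x in zip(self.centers[i], sample)]
                self.counts[i] = n
                return i, False
        self.centers.append(list(sample))
        self.counts.append(1)
        return len(self.centers) - 1, True

clusters = IncrementalEmotionClusters(radius=0.8)
for features in [[0.1, 0.2], [0.15, 0.25], [2.0, 1.8]]:
    idx, is_new = clusters.observe(features)
    if is_new:
        print(f"new emotion cluster {idx}: generate a new multimodal action")
    else:
        print(f"matched cluster {idx}: reuse its stored action")
```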

Personality Expression of Conversational Agents through Visual and Verbal Feedback, and Personality Preferences according to Performed Tasks

    Get PDF
Master's thesis, Department of Communication, College of Social Sciences, Seoul National University, August 2019. Advisor: Joonhwan Lee. Conversational agents with psychological abilities could facilitate natural communication between humans and computers, while unnatural expressions and reactions from conversational agents could frustrate users. This research applies the concept of personality to conversational agents to implement natural feedback and reactions, exploring how to express a conversational agent's personality. The selected cues were visual feedback and verbal cues. Using a between-participants design, Study 1 measured the perception of five personalities for different kinds of visual feedback, and Study 2 measured the perception of five personalities depending on different verbal cues with voices of different genders. Given that certain personalities of conversational agents are considered more suitable for certain tasks, Study 3 investigated user preference and perceived intelligence toward conversational agents with different personalities and tasks. The results show that the motion of visual feedback strongly influenced the perception of personality, regardless of color. In addition, except for agreeableness, different verbal cues were perceived as different personalities. For conversational agents performing service, physical, and office tasks, openness was the most preferred personality and perceived as the most intelligent; only for social tasks were extraverted conversational agents the most preferred and perceived as the most intelligent. Fast and active visual feedback is suitable for designing conversational agents with distinct and positive personalities, and perceptions of a conversational agent's personality differed according to the gender of its voice. Diverse and expressive cues were suitable for expressing positive personalities. Overall, people perceived conversational agents with patterns similar to those they apply when perceiving other people.

    A Meta-Analysis of Human Personality and Robot Acceptance in Human-Robot Interaction

    Full text link
Human personality has been identified as a predictor of robot acceptance in the human-robot interaction (HRI) literature. Despite this, the HRI literature has provided mixed support for this assertion. To better understand the relationship between human personality and robot acceptance, this paper conducts a meta-analysis of 26 studies. Results found a positive relationship between human personality and robot acceptance; however, this relationship varied greatly by the specific personality trait, along with the study sample's age, gender diversity, task, and global region. This meta-analysis also identified gaps in the literature: additional studies are needed that investigate both the Big Five and other personality traits, examine a more diverse age range, and use samples from previously unexamined regions of the globe.

    Personality Perception of Robot Avatar Teleoperators in Solo and Dyadic Tasks

    Get PDF
Humanoid robot avatars are a potential new telecommunication tool, whereby a user is remotely represented by a robot that replicates their arm, head, and possibly face movements. They have been shown to have a number of benefits over more traditional media such as phone or video calls. However, using a teleoperated humanoid as a communication medium inherently changes the appearance of the operator, and appearance-based stereotypes are used in interpersonal judgments (whether consciously or unconsciously). One such judgment that plays a key role in how people interact is personality. We were therefore motivated to investigate whether and how using a robot avatar alters the perceived personality of teleoperators. To do so, we carried out two studies in which participants performed three communication tasks, solo in Study 1 and dyadic in Study 2, and were recorded on video both with and without robot mediation. Judges recruited through online crowdsourcing services then made personality judgments of the participants in the video clips. We observed that judges were able to make internally consistent trait judgments in both communication conditions. However, judge agreement was affected by robot mediation, and which traits were affected was highly task-dependent. Our most important finding was that in dyadic tasks personality-trait perception shifted to incorporate cues relating to the robot's appearance when it was used to communicate. Our findings have important implications for telepresence robot design and for personality expression in autonomous robots. This work was funded by the EPSRC under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1).