    A Model for Synthesizing a Combined Verbal and Nonverbal Behavior Based on Personality Traits in Human-Robot Interaction

    In Human-Robot Interaction (HRI) scenarios, an intelligent robot should be able to synthesize an appropriate behavior adapted to the human's profile (i.e., personality). Recent research has examined the effect of personality traits on human verbal and nonverbal behavior. The dynamic characteristics of the gestures and postures generated during nonverbal communication can differ according to personality traits, which can likewise influence the verbal content of human speech. This research maps human verbal behavior to a corresponding combined verbal and nonverbal robot behavior based on the extraversion-introversion personality dimension. We explore human-robot personality matching and the similarity attraction principle, as well as how the adapted combined robot behavior (expressed through speech and gestures) and the adapted speech-only robot behavior affect the interaction differently. Experiments with the humanoid NAO robot are reported.
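    To make the idea of mapping a personality dimension to combined robot behavior more concrete, the following is a minimal illustrative sketch in Python. It is not the paper's actual model: the parameter names, value ranges, and linear interpolation are assumptions introduced here purely for illustration.

    # Hypothetical illustration only: a minimal sketch of how an extraversion
    # score might drive combined verbal and nonverbal robot parameters.
    # Parameter names and ranges are assumptions, not taken from the paper.
    from dataclasses import dataclass

    @dataclass
    class RobotBehavior:
        speech_rate: float        # words per minute
        speech_volume: float      # normalized 0..1
        gesture_amplitude: float  # normalized 0..1
        gesture_frequency: float  # gestures per utterance

    def map_extraversion(extraversion: float) -> RobotBehavior:
        """Interpolate behavior parameters between introverted and
        extraverted extremes for an extraversion score in [0, 1]."""
        e = max(0.0, min(1.0, extraversion))
        return RobotBehavior(
            speech_rate=140 + 60 * e,         # slower speech for introverted profiles
            speech_volume=0.4 + 0.5 * e,      # louder speech for extraverted profiles
            gesture_amplitude=0.3 + 0.6 * e,  # wider gestures for extraverted profiles
            gesture_frequency=1 + 3 * e,      # more frequent gesturing when extraverted
        )

    if __name__ == "__main__":
        print(map_extraversion(0.8))  # parameters for a fairly extraverted profile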

    Exploring cultural factors in human-robot interaction: A matter of personality?

    This paper proposes an experimental study to investigate the task dependence and cultural-background dependence of personality trait attribution to humanoid robots. In Human-Robot Interaction, as well as in Human-Agent Interaction research, the attribution of personality traits to intelligent agents has already been researched intensively in terms of the similarity and complementarity rules. These two rules imply that humans tend to like others more when they have either similar or complementary personality traits. Even though state-of-the-art literature suggests that similarity attraction occurs for virtual agents and complementary attraction for robots, there are many contradictions in the findings. We argue that seeking the explanation for personality trait attribution in the similarity and complementarity rules alone neglects important contextual factors. Just as people associate certain personality types with certain professions, we expect that people may have certain personality expectations depending on the context of the task the robot carries out. Because professions carry different social meanings in different national cultures, we also expect that these task-dependent personality preferences differ across cultures. We therefore propose an experiment that considers the task context and the cultural background of users.

    Computers that smile: Humor in the interface

    It is certainly not the case that, when we consider research on the role of human characteristics in the user interface of computers, no attention has been paid to the role of humor. However, when we compare efforts in this area with efforts and experiments that attempt to demonstrate the positive role of general emotion modelling in the user interface, we must conclude that this attention is still low. As we all know, the computer is sometimes a source of frustration rather than a source of enjoyment. Indeed, we see research projects that aim at recognizing a user’s frustration rather than his enjoyment. However, rather than detecting frustration, and perhaps reacting to it in a humorous way, we would like to prevent frustration by making interaction with a computer more natural and more enjoyable. For that reason we are working on multimodal interaction and embodied conversational agents. In interaction with embodied conversational agents, verbal and nonverbal communication are equally important. Multimodal emotion display and detection are among our advanced research issues, and investigating the role of humor in human-computer interaction is one of them.

    Accepting the Familiar: The Effect of Perceived Similarity with AI Agents on Intention to Use and the Mediating Effect of IT Identity

    With the rise and integration of AI technologies within organizations, our understanding of the impact of this technology on individuals remains limited. Although the IS use literature provides important guidance for organizations seeking to increase employees’ willingness to work with new technology, the utilitarian view of prior IS use research limits its applicability to the new, evolving social interaction between humans and AI agents. We contribute to the IS use literature by adopting a social view to understand the impact of AI agents on an individual’s perceptions and behavior. Focusing on the main design dimensions of AI agents, we propose a framework that draws on social psychology theories to explain the impact of those design dimensions on individuals. Specifically, we build on Similarity Attraction Theory to propose an AI similarity-continuance model that explains how similarity with AI agents influences individuals’ IT identity and intention to continue working with them. Through an online brainstorming experiment, we found that similarity with AI agents indeed has a positive impact on IT identity and on the intention to continue working with the AI agent.