2 research outputs found

    Designing Trustworthy Product Recommendation Virtual Agents Operating Positive Emotion and Having Copious Amount of Knowledge

    Anthropomorphic agents used in online shopping need to be trusted by users so that users feel comfortable buying products. In this paper, we propose a model for designing trustworthy agents based on two assumed factors of trust: the user's emotion and the agent's perceived knowledgeableness. Our hypothesis is that when a user feels happy and perceives an agent as highly knowledgeable, a high level of trust results between the user and the agent. We conducted four experiments with participants to verify this hypothesis, preparing transition operators that utilize emotional contagion and knowledgeable utterances. As a result, we verified that users' internal states transitioned as expected and that the two factors significantly influenced their trust states.

    Investigating human perceptions of trust in robots for safe HRI in home environments

    No full text
    In an era in which robots take part in our daily living activities, humans have to be able to trust robots in home environments. We aim to create guidelines that allow humans to trust robots to look after their well-being by adopting human-like behaviours. We want to study Human-Robot Interaction (HRI) to assess whether a certain degree of transparency in the robots' actions, the use of social behaviours, and natural communication can affect humans' sense of trust and companionship towards the robots. However, trust can change over time owing to different factors, e.g. perceiving erroneous robot behaviours. We believe that the magnitude and timing of an error during an interaction may have different impacts, resulting in different degrees of trust loss and of restoration of lost trust.