    Responses to human-like artificial agents: effects of user and agent characteristics


    Ambient Lights Influence Perception and Decision-Making

    Today's computers are becoming ever more versatile: they are used in education, entertainment, and information services, and they are increasingly expected not only to present information to users but also to communicate with them socially. Previous studies explored the design of ambient light displays and suggested that such systems can convey information in the periphery of people's attention without distracting them from their primary work. However, those studies focused mainly on using ambient lights to convey specific information; whether and how the lights can influence people's perception and decision-making remains unclear. To explore this, we performed three experiments using a ping-pong game, an Ultimatum game, and a Give-Some game, in which we attached an LED strip to the front-bottom of a computer monitor and had it display a set of light expressions. The results suggest that expressive lights do affect human perception and decision-making. Participants liked and anthropomorphized the computer more when it displayed light animations. In particular, they perceived the computer as positive and friendlier when it displayed green, low-intensity light animations, whereas red, high-intensity light animations were perceived as negative and more hostile. Consequently, participants behaved with more tolerance and cooperation toward the computer when it appeared positive than when it appeared negative. These findings open up possibilities for the design of ambient light systems in applications where human-machine interaction is needed.
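    Since tolerance in the Ultimatum game is typically operationalised as the lowest offer a responder will accept, a minimal sketch of one round may clarify how the behavioural measure works. The threshold logic and numbers below are illustrative assumptions, not the paper's protocol.

```python
# A minimal sketch of one Ultimatum-game round, illustrating how
# "tolerance" toward the computer can be operationalised as the lowest
# offer a participant is willing to accept. All values are illustrative.

def ultimatum_round(offer: int, acceptance_threshold: int, pot: int = 10) -> tuple[int, int]:
    """Return (proposer_payoff, responder_payoff) for one round.

    The proposer (the computer) offers `offer` units out of `pot`;
    the responder (the participant) accepts iff the offer meets their
    threshold. Rejection leaves both sides with nothing.
    """
    if offer >= acceptance_threshold:
        return pot - offer, offer   # accepted: the pot is split
    return 0, 0                     # rejected: both get nothing

# A more "tolerant" participant (lower threshold) accepts unfair offers
# that a less tolerant one would reject.
print(ultimatum_round(offer=3, acceptance_threshold=2))  # (7, 3): accepted
print(ultimatum_round(offer=3, acceptance_threshold=4))  # (0, 0): rejected
```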

    A psychology and game theory approach to human–robot cooperation

    Social robots have great practical potential in, for example, education, autism therapy, and commercial settings. Currently, however, few commercially available social robots meet our expectations of ‘social agents’, owing to their limited social skills and their inability to maintain smooth and sophisticated real-life social interactions. Psychological and human-centred perspectives are therefore crucial for better understanding and developing social robots that can be deployed as assistants and companions to enhance human quality of life. In this thesis, I present a research approach that draws together psychological literature, Open Science initiatives, and game theory paradigms, aiming to systematically and structurally investigate the cooperative and social aspects of human–robot interactions. Chapter 1 illustrates the three components of this research approach, focusing on their relevance and value for more rigorous research on human–robot interactions. Chapters 2 to 4 describe three empirical studies in which I adopted this approach to examine the roles of contextual factors, personal factors, and robotic factors in human–robot interactions. Specifically, findings in Chapter 2 revealed that people’s cooperative decisions in prisoner’s dilemma games played with the embodied Cozmo robot were not influenced by the incentive structures of the games, contrary to evidence from interpersonal prisoner’s dilemma games, but their decisions did demonstrate a reciprocal (tit-for-tat) pattern in response to the robot opponent. In Chapter 3, we verified that the Cozmo platform can display emotional expressions that people recognise with high accuracy, and found that people’s affective empathy might, counterintuitively, be associated with the emotion-contagion effects of Cozmo’s emotional displays. Chapter 4 presents a study that examined the effects of Cozmo’s negative emotional displays on people’s cooperative tendencies in prisoner’s dilemma games. We found no evidence for an interaction between the robots’ emotions and people’s cooperative predispositions, contrary to our predictions informed by psychological emotion theories. Exploratory analyses, however, suggested that people who correctly recognised the robots’ sad and angry expressions were less cooperative toward the robots. Across the two prisoner’s dilemma studies with the embodied Cozmo robots, we observed a consistent pattern: people’s cooperative willingness was highest at the start of the games and gradually decreased over rounds. In Chapter 5, I summarise the findings, identify limitations of these studies, and outline future directions, including investigating the generalisability of different robotic platforms and incorporating neurocognitive and qualitative methods for an in-depth understanding of the mechanisms supporting people’s cooperative willingness towards social robots. Social interactions with robots are highly dynamic and complex, posing unique challenges to robot designers and researchers in the relevant fields. This thesis provides a point of departure for understanding cooperative willingness towards small social robots at a behavioural level. The research approach and empirical findings presented here could help enhance reproducibility in human–robot interaction research and, more importantly, have practical implications for real-life human–robot cooperation.
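    Two game-theoretic constructs anchor these findings: the prisoner's dilemma payoff structure and the tit-for-tat pattern observed in participants' decisions. Below is a minimal sketch of both; the payoff values are conventional illustrations (satisfying T > R > P > S), not the incentive structures actually manipulated in the thesis.

```python
# A minimal sketch of the repeated prisoner's dilemma dynamics studied
# in the thesis: a payoff matrix with T > R > P > S, and a tit-for-tat
# responder that mirrors the opponent's previous move. Values are
# illustrative assumptions.

# (my_move, their_move) -> my payoff; 'C' = cooperate, 'D' = defect
PAYOFF = {
    ('C', 'C'): 3,  # R: reward for mutual cooperation
    ('C', 'D'): 0,  # S: sucker's payoff
    ('D', 'C'): 5,  # T: temptation to defect
    ('D', 'D'): 1,  # P: punishment for mutual defection
}

def tit_for_tat(opponent_history: list[str]) -> str:
    """Cooperate first, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

# Example: a participant who defects on round 3 sees the tit-for-tat
# agent reciprocate with defection on round 4.
participant_moves = ['C', 'C', 'D', 'C']
seen = []  # participant moves the agent has observed so far
for move in participant_moves:
    agent_move = tit_for_tat(seen)
    print(move, agent_move, PAYOFF[(move, agent_move)])
    seen.append(move)
```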

    Intentional Mindset Toward Robots—Open Questions and Methodological Challenges

    Natural and effective interaction with humanoid robots should involve the social cognitive mechanisms of the human brain that normally facilitate social interaction between humans. Recent research has indicated that the presence and efficiency of these mechanisms in human-robot interaction (HRI) might be contingent on the adoption of a set of attitudes, mindsets, and beliefs concerning the robot's inner machinery. Current research is investigating the factors that influence these mindsets and how they affect HRI. This review focuses on a specific mindset, the "intentional mindset", in which intentionality is attributed to another agent. More specifically, we focus on the concept of adopting the intentional stance toward robots, i.e., the tendency to predict and explain a robot's behavior with reference to mental states. We discuss the relationship between adoption of the intentional stance and lower-level mechanisms of social cognition, and we provide a critical evaluation of the research methods currently employed in this field, highlighting common pitfalls in the measurement of attitudes and mindsets.

    Excuse Me, Something Is Unfair! - Implications of Perceived Fairness of Service Robots

    Fairness is an important aspect for individuals and teams, and this also applies to human-robot interaction (HRI). Especially when intelligent robots provide services to multiple people, those people may feel treated unfairly by the robots. Most work in this area deals with fair algorithms, task allocation, and decision support. This work focuses on a different, little-explored perspective: fairness in human-robot teams, viewed from a human-centered standpoint. We present an experiment in which a service robot was responsible for distributing resources among competing team members, and we investigated how different distribution strategies influence perceived fairness and the perception of the robot. Our study shows that humans may perceive technically efficient algorithms as unfair, especially when they personally experience negative consequences. This also had a negative impact on their perception of the robot, which should be considered in the design of future robots.
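    The core finding, that a technically efficient distribution can feel unfair, can be illustrated with a standard fairness metric. The sketch below scores two hypothetical allocations with Jain's fairness index; both the metric and the numbers are illustrative assumptions, not the paper's measures.

```python
# A minimal sketch contrasting an output-efficient allocation with an
# equal one, scored by Jain's fairness index (a standard metric, not
# necessarily the one used in the paper). It shows how an allocation
# that maximises total output can still score low on fairness.

def jains_index(allocations: list[float]) -> float:
    """1.0 means a perfectly equal split; the index falls toward 1/n
    as one team member receives everything."""
    n = len(allocations)
    total = sum(allocations)
    return total ** 2 / (n * sum(x ** 2 for x in allocations))

efficient = [8, 1, 1]      # e.g. the robot serves the fastest member first
equal = [10 / 3] * 3       # the same 10 resources split evenly

print(jains_index(efficient))  # ~0.51: high total output, skewed shares
print(jains_index(equal))      # 1.0: perfectly equal split
```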

    AI Governance Through a Transparency Lens


    The Impact of Human–Robot Synchronization on Anthropomorphization

    To elucidate the working mechanism behind anthropomorphism, this study investigated whether human participants would anthropomorphize a robot more if they move synchronously versus non-synchronously with it, and whether this is affected by which of the two initiates the movements. We tested two competing hypotheses. The feature-overlap hypothesis predicts that moving in synchrony increases perceived self-other feature overlap, which in turn might spread activation to codes of features related to humans and thereby increase anthropomorphization. In contrast, the autonomy hypothesis predicts that unpredictability increases anthropomorphization: whenever the robot initiates movements, or when the human initiates movements to which the robot responds non-synchronously, the robot is perceived as a more human-like, intentionally acting creature, which in turn should increase anthropomorphization. We performed a study with synchrony as a within-subjects factor and initiator (robot or human) as a between-subjects factor. To study the impact of synchrony on self-other overlap and perceived human likeness, participants completed two tasks that served as implicit measures of state anthropomorphization, a joint Simon task and a one-shot Dictator Game, and two questionnaires that served as explicit measures of state anthropomorphization toward the robot. Additionally, participants filled in a trait anthropomorphization questionnaire to enable correction for baseline tendencies to anthropomorphize. The synchrony manipulation did not affect the joint Simon effect, although it did affect average reaction time (RT): in the group in which the robot initiated the movement, RTs were slower when human and robot moved non-synchronously. The Dictator Game offer and the state anthropomorphization questionnaires were not affected by the synchrony manipulation. There was, however, a positive correlation between current anthropomorphization of the robot and the amount of money offered to it. Given that most measures were not systematically affected by our manipulation, either our design was suboptimal or synchronization does not affect the anthropomorphization of a robot.
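    For readers unfamiliar with the implicit measure, the joint Simon effect is conventionally computed as the mean reaction-time difference between spatially incompatible and compatible trials. A minimal sketch with invented trial data (the study's actual analysis pipeline is not described in the abstract):

```python
# A minimal sketch of how a joint Simon effect is typically computed:
# mean RT on spatially incompatible trials minus mean RT on compatible
# trials. The trial data below are invented for illustration.

from statistics import mean

# Each trial: (stimulus_side, responder_side, rt_ms)
trials = [
    ('left', 'left', 412), ('right', 'right', 398),   # compatible
    ('left', 'right', 455), ('right', 'left', 447),   # incompatible
]

compatible = [rt for s, r, rt in trials if s == r]
incompatible = [rt for s, r, rt in trials if s != r]

# A positive difference suggests the partner's action is co-represented,
# which is why the task can serve as an implicit anthropomorphism measure.
joint_simon_effect = mean(incompatible) - mean(compatible)
print(f"Joint Simon effect: {joint_simon_effect:.1f} ms")  # 46.0 ms here
```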

    Dynamic Voice Clones Elicit Consumer Trust

    Platforms today are experimenting with many novel personalization technologies. We explore one such technology here, voice-based conversational agents, with a focus on consumer trust. We consider the joint role of two key design and implementation choices: (i) disclosing an agent's autonomous nature to the user, and (ii) aesthetic personalization in the form of user voice cloning. We report on a set of controlled experiments based on the investment game, evaluating how these design choices affect subjects' willingness to participate in the game against an autonomous, AI-enabled partner. We find no evidence that disclosure affects trust. However, we find that the greatest level of trust is elicited when a voice-based agent employs a clone of the subject's voice. Mechanism explorations based on post-experiment survey responses indicate that voice cloning induces trust by eliciting a perception of homophily; the voice clone leads subjects to personify the agent and picture it as demographically similar.
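    In the standard investment (trust) game, the amount a subject sends to the partner is the behavioural measure of trust. A minimal sketch of one round follows, assuming the conventional multiplier of 3; the paper's exact parameters are not stated in the abstract.

```python
# A minimal sketch of one round of the investment (trust) game: the
# subject sends part of an endowment, the transfer is multiplied, and
# the agent returns a share. The multiplier of 3 is the conventional
# choice, assumed here rather than taken from the paper.

def investment_round(endowment: float, sent: float,
                     return_fraction: float, multiplier: float = 3.0):
    """Return (subject_payoff, agent_payoff) for one round."""
    assert 0 <= sent <= endowment
    received = sent * multiplier            # the transfer is multiplied
    returned = received * return_fraction   # the agent sends some back
    return endowment - sent + returned, received - returned

# A higher `sent` amount signals higher trust in the AI partner.
print(investment_round(endowment=10, sent=8, return_fraction=0.5))
# -> (14.0, 12.0): a trusting subject profits if the agent reciprocates
```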

    Bringing Human Robot Interaction towards Trust and Social Engineering

    Robots started their journey in books and movies; nowadays, they are becoming an important part of our daily lives: from industrial robots, through entertainment robots, to social robots in fields like healthcare and education. An important aspect of social robotics is the human counterpart, and therefore the interaction between humans and robots. Interactions among humans are often taken for granted, since from childhood we learn how to interact with each other. In robotics, this interaction is still very immature, yet it is critical for the successful incorporation of robots into society. Human-robot interaction (HRI) is the domain that works on improving these interactions. HRI encompasses many aspects, and a significant one is trust. Trust is the assumption that somebody or something is good and reliable, and it is critical for a developed society. Therefore, in a society in which robots take part, the trust they generate will be essential for cohabitation. A downside of trust is overtrusting an entity; in other words, a misalignment between the trust projected onto an entity and the expectation of morally correct behaviour. This effect can negatively influence and damage the interactions between agents. Among humans, it is usually exploited by scammers, conmen, or social engineers, who take advantage of people's overtrust in order to manipulate them into performing actions that may not be beneficial to them. This thesis tries to shed light on how trust towards robots develops, how this trust can become overtrust, and how it can be exploited by social engineering techniques. More precisely, the following experiments were carried out: (i) Treasure Hunt, in which the robot followed a social engineering framework: it gathered personal information from the participants, built trust and rapport with them, and finally exploited that trust by manipulating participants into performing a risky action. (ii) Wicked Professor, in which a very human-like robot tried to enforce its authority to make participants obey socially inappropriate requests. Most participants realized that the requests were morally wrong, but eventually they succumbed to the robot's authority while holding the robot morally responsible. (iii) Detective iCub, which evaluated whether the robot could be endowed with the ability to detect when a human partner was lying. Deception detection is an essential skill for social engineers and for professionals in education, healthcare, and security. The robot achieved 75% accuracy in lie detection, and slight differences were found in participants' behaviour when interacting with a human versus a robot interrogator. Lastly, this thesis approaches the topic of privacy, a fundamental human value. With the integration of robotics and technology into our society, privacy will be affected in ways we are not used to. Robots have sensors able to record and gather all kinds of data, and this information may be transmitted via the internet without the user's knowledge. This is an important consideration, since a violation of privacy can heavily impact trust. In summary, this thesis shows that robots are able to establish and improve trust during an interaction, to take advantage of overtrust, and to misuse it by applying different social engineering techniques, such as manipulation and authority. Moreover, robots can be enabled to pick up human cues that reveal deception, which can help both social engineers and professionals in people-facing sectors. Nevertheless, it is of the utmost importance to make roboticists, programmers, entrepreneurs, lawyers, psychologists, and the other sectors involved aware that social robots can be highly beneficial for humans, but they could also be exploited for malicious purposes.
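    The abstract does not specify how Detective iCub detected lies, but cue-based deception detection can be sketched as a simple supervised classifier over behavioural features. Everything below (the cue names, data, and model choice) is a hypothetical illustration, not the thesis's method.

```python
# A minimal, hypothetical sketch of cue-based lie detection in the
# spirit of the "Detective iCub" study: a logistic regression over
# behavioural cues. Cue names, data, and model are all assumptions.

from sklearn.linear_model import LogisticRegression

# Features per answer: [response_delay_s, gaze_aversion_ratio, fidget_count]
X = [
    [0.4, 0.1, 0], [0.5, 0.2, 1], [0.6, 0.1, 0],   # truthful answers
    [1.2, 0.6, 3], [0.9, 0.5, 2], [1.4, 0.7, 4],   # deceptive answers
]
y = [0, 0, 0, 1, 1, 1]  # 0 = truth, 1 = lie

model = LogisticRegression().fit(X, y)
print(model.predict([[1.1, 0.55, 2]]))  # likely classified as a lie
```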