70 research outputs found

    Human-Robot Interactions: Insights from Experimental and Evolutionary Social Sciences

    Get PDF
    Experimental research in the realm of human-robot interactions has focused on the behavioral and psychological influences affecting human interaction and cooperation with robots. A robot is loosely defined as a device designed to perform agentic tasks autonomously or under remote control, often replicating or assisting human actions. Robots can vary widely in form, ranging from simple assembly line machines performing repetitive actions to advanced systems with no moving parts but with artificial intelligence (AI) capable of learning, problem-solving, communicating, and adapting to diverse environments and human interactions. Applications of experimental human-robot interaction research include the design, development, and implementation of robotic technologies that better align with human preferences, behaviors, and societal needs. As such, a central goal of experimental research on human-robot interactions is to better understand how trust is developed and maintained. A number of studies suggest that humans trust and act toward robots as they do toward humans, applying social norms and inferring agentic intent (Rai and Diermeier, 2015). While many robots are harmless and even helpful, some robots may reduce their human partner's wages, security, or welfare and should not be trusted (Taddeo, McCutcheon and Floridi, 2019; Acemoglu and Restrepo, 2020; Alekseev, 2020). For example, more than half of all internet traffic is generated by bots, the majority of which are 'bad bots' (Imperva, 2016). Despite the hazards, robotic technologies are already transforming our everyday lives and finding their way into important domains such as healthcare, transportation, manufacturing, customer service, education, and disaster relief (Meyerson et al., 2023).

    Exploring Human Compliance Toward a Package Delivery Robot

    Get PDF
    Human-Robot Interaction (HRI) research on combat robots and autonomous cars demonstrates that faulty robots significantly decrease trust. However, HRI studies consistently show that people overtrust domestic robots in households, emergency evacuation scenarios, and building security. This thesis presents how two theories, cognitive dissonance and selective attention, confound domestic HRI scenarios and uses these theories to design a novel HRI scenario with a package delivery robot in a public setting. Over 40 undergraduates were recruited within a university library to follow a package delivery robot to three stops, under the guise of “testing its navigation around people.” The second stop was an open office which appeared private. When the packages were unlabeled, only 2 individuals across 15 trials entered the room at the second stop, whereas pairs of participants were much more likely to enter. Labeling the packages significantly increased the likelihood that individuals would enter the office. The third stop was at the end of a long, isolated hallway blocked by a door marked “Emergency Exit Only. Alarm will Sound.” No one seriously considered opening the door. Nonverbal robot prods, such as waiting one minute or nudging the door, were perceived as malfunctioning behavior. To demonstrate selective attention, a second route led to an emergency exit door in a public computer lab, with the intended destination an office several feet away. When the robot communicated with beeps, only 45% of individuals noticed the emergency exit door. No one noticed the emergency exit door when the robot used speech commands, although its qualitative rating significantly improved. In conclusion, this thesis shows that robots must make explicit requests to generate overtrust. Explicit interactions increase participant engagement with the robot, which increases selective attention toward their environment.
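    The abstract compares compliance counts across conditions (e.g., 2 of 15 individuals entering the room when packages were unlabeled). As a minimal sketch of how such counts might be compared statistically, the snippet below runs Fisher's exact test; the labeled-package counts are hypothetical placeholders since the abstract does not report them.

```python
# Minimal sketch: comparing compliance counts between two conditions with
# Fisher's exact test. The unlabeled-package counts (2 of 15) come from the
# abstract; the labeled-package counts are hypothetical placeholders.
from scipy.stats import fisher_exact

entered_unlabeled, total_unlabeled = 2, 15
entered_labeled, total_labeled = 10, 15  # hypothetical

table = [
    [entered_unlabeled, total_unlabeled - entered_unlabeled],
    [entered_labeled, total_labeled - entered_labeled],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```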

    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Full text link
    Human-swarm interaction (HSI) involves a number of human factors impacting human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy within the interaction to reduce the workload on humans. Yet, the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims at trading off these effects by changing the level of autonomy within the interaction when required, with mixed initiatives combining human preferences and the automation's recommendations to select an appropriate level of autonomy at a certain point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions on how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the cognitive states of a human are. We explore open challenges that hamper the process of developing effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they provide HSI designers with an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics. Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia
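    The paper leaves open how human preferences and automation recommendations should be combined. Purely as a minimal illustrative sketch (not the authors' method), one simple rule is to blend the two as a weighted average, shifting weight toward the automation as estimated operator workload rises; all names and the weighting rule below are assumptions.

```python
# Minimal sketch (illustrative, not from the paper): blending a human-preferred
# autonomy level with the automation's recommendation in a mixed-initiative
# controller. The weighting rule and parameter names are assumptions.

AUTONOMY_LEVELS = [1, 2, 3, 4, 5]  # 1 = fully manual, 5 = fully autonomous


def select_autonomy_level(human_preference: int,
                          automation_recommendation: int,
                          workload_estimate: float) -> int:
    """Pick an autonomy level between the human's preference and the
    automation's recommendation.

    workload_estimate in [0, 1]: higher workload shifts weight toward the
    automation's recommendation to offload the operator.
    """
    w = min(max(workload_estimate, 0.0), 1.0)
    blended = (1.0 - w) * human_preference + w * automation_recommendation
    # Snap to the nearest discrete level supported by the swarm.
    return min(AUTONOMY_LEVELS, key=lambda level: abs(level - blended))


if __name__ == "__main__":
    # Operator prefers manual control (2), automation recommends high autonomy (5),
    # and the operator is currently heavily loaded.
    print(select_autonomy_level(2, 5, workload_estimate=0.8))  # -> 4
```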

    The impact of people's personal dispositions and personalities on their trust of robots in an emergency scenario

    Get PDF
    Humans should be able to trust that they can safely interact with their home companion robot. However, robots can exhibit occasional mechanical, programming or functional errors. We hypothesise that the severity of the consequences and the timing of a robot's different types of erroneous behaviours during an interaction may have different impacts on users' attitudes towards a domestic robot. First, we investigated human users' perceptions of the severity of various categories of potential errors that are likely to be exhibited by a domestic robot. Second, we used an interactive storyboard to evaluate participants' degree of trust in the robot after it performed tasks either correctly or with 'small' or 'big' errors. Finally, we analysed the correlation between participants' responses regarding their personality, predisposition to trust other humans, their perceptions of robots, and their interaction with the robot. We conclude that there is a correlation between the magnitude of an error performed by a robot and the corresponding loss of trust by the human towards the robot. Moreover, we observed that some traits of participants' personalities (conscientiousness and agreeableness) and their disposition to trust other humans (benevolence) significantly increased their tendency to trust a robot more during an emergency scenario.
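    The abstract reports correlations between personality measures and trust. A minimal sketch of that kind of analysis is shown below, using Pearson correlation from SciPy; the per-participant scores and variable names are hypothetical, not the study's data.

```python
# Minimal sketch (hypothetical data): correlating a personality measure with
# a trust rating, in the spirit of the analysis described in the abstract.
from scipy.stats import pearsonr

# Illustrative per-participant scores (not the study's data).
conscientiousness = [3.2, 4.1, 2.8, 4.6, 3.9, 3.0, 4.4, 2.5]
trust_after_error = [2.9, 4.0, 2.5, 4.3, 3.6, 3.1, 4.5, 2.2]

r, p_value = pearsonr(conscientiousness, trust_after_error)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```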

    Modeling Evacuee Behavior for Robot-Guided Emergency Evacuation

    Full text link
    This paper considers the problem of developing suitable behavior models of human evacuees during a robot-guided emergency evacuation. We describe our recent research developing behavior models of evacuees and potential future uses of these models. This paper considers how behavior models can contribute to the development and design of emergency evacuation simulations in order to improve social navigation during an evacuation. Comment: Presented at Social Robot Navigation: Advances and Evaluation. In conjunction with: IEEE International Conference on Robotics and Automation, ICRA 202
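    The paper's behavior models are not detailed in the abstract. As a minimal hypothetical sketch of the kind of evacuee agent an evacuation simulation might use, the snippet below implements a rule-based agent that follows the robot guide when its trust is high enough and otherwise heads for the nearest known exit; all thresholds and parameters are assumptions for illustration.

```python
# Minimal sketch (not the paper's model): a rule-based evacuee agent that
# follows a robot guide when its trust is high enough, otherwise moves toward
# the nearest exit it knows about. All thresholds are illustrative.
import math
from dataclasses import dataclass


@dataclass
class Evacuee:
    x: float
    y: float
    trust_in_robot: float  # 0.0 .. 1.0
    speed: float = 1.2     # meters per time step

    def step(self, robot_pos, known_exits):
        """Move one time step toward the robot or the nearest known exit."""
        if self.trust_in_robot >= 0.5:
            target = robot_pos
        else:
            target = min(known_exits,
                         key=lambda e: math.dist((self.x, self.y), e))
        dx, dy = target[0] - self.x, target[1] - self.y
        dist = math.hypot(dx, dy)
        if dist > 1e-9:
            scale = min(self.speed, dist) / dist
            self.x += dx * scale
            self.y += dy * scale


if __name__ == "__main__":
    agent = Evacuee(x=0.0, y=0.0, trust_in_robot=0.8)
    agent.step(robot_pos=(3.0, 4.0), known_exits=[(10.0, 0.0)])
    print(round(agent.x, 2), round(agent.y, 2))  # moves toward the robot
```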

    Overtrusting robots: Setting a research agenda to mitigate overtrust in automation

    Get PDF
    There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others' trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights from in-depth conversations at a multidisciplinary workshop on the subject of trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings that are situated in an eco-system perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides a ground for a common understanding of overtrust in the context of HRI.

    Bringing Human Robot Interaction towards Trust and Social Engineering

    Get PDF
    Robots started their journey in books and movies; nowadays, they are becoming an important part of our daily lives: from industrial robots, passing through entertainment robots, and reaching social robotics in fields like healthcare or education. An important aspect of social robotics is the human counterpart; therefore, there is an interaction between humans and robots. Interactions among humans are often taken for granted since, as children, we learn how to interact with each other. In robotics, this interaction is still very immature, yet it is critical for a successful incorporation of robots into society. Human robot interaction (HRI) is the domain that works on improving these interactions. HRI encloses many aspects, and a significant one is trust. Trust is the assumption that somebody or something is good and reliable, and it is critical for a developed society. Therefore, in a society in which robots can take part, the trust they generate will be essential for cohabitation. A downside of trust is overtrusting an entity; in other words, an insufficient alignment between the projected trust and the expectation of morally correct behaviour. This effect can negatively influence and damage the interactions between agents. In the case of humans, it is usually exploited by scammers, conmen or social engineers, who take advantage of people's overtrust in order to manipulate them into performing actions that may not be beneficial for the victims. This thesis tries to shed light on the development of trust towards robots, how this trust could become overtrust, and how it can be exploited by social engineering techniques. More precisely, the following experiments have been carried out: (i) Treasure Hunt, in which the robot followed a social engineering framework: it gathered personal information from the participants, improved trust and rapport with them, and at the end exploited that trust to manipulate participants into performing a risky action. (ii) Wicked Professor, in which a very human-like robot tried to enforce its authority to make participants obey socially inappropriate requests. Most of the participants realized that the requests were morally wrong, but eventually they succumbed to the robot's authority while holding the robot morally responsible. (iii) Detective iCub, in which it was evaluated whether the robot could be endowed with the ability to detect when its human partner was lying. Deception detection is an essential skill for social engineers and for professionals in the domains of education, healthcare and security. The robot achieved 75% accuracy in lie detection. Slight differences were also found in the behaviour exhibited by participants when interacting with a human or a robot interrogator. Lastly, this thesis approaches the topic of privacy, a fundamental human value. With the integration of robotics and technology into our society, privacy will be affected in ways we are not used to. Robots have sensors able to record and gather all kinds of data, and it is possible that this information is transmitted via the internet without the knowledge of the user. This is an important aspect to consider, since a violation of privacy can heavily impact trust. Summarizing, this thesis shows that robots are able to establish and improve trust during an interaction, to take advantage of overtrust, and to misuse it by applying different types of social engineering techniques, such as manipulation and authority. Moreover, robots can be enabled to pick up different human cues to detect deception, which can help both social engineers and professionals in the human sector. Nevertheless, it is of the utmost importance to make roboticists, programmers, entrepreneurs, lawyers, psychologists, and other sectors involved aware that social robots can be highly beneficial for humans, but they could also be exploited for malicious purposes.
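    The abstract reports 75% lie-detection accuracy from human cues but does not describe the classifier. A minimal hypothetical sketch of training a classifier on behavioral cue features with scikit-learn might look like the following; the feature names and values are illustrative assumptions.

```python
# Minimal sketch (hypothetical features and data): classifying truthful vs.
# deceptive answers from behavioral cues, in the spirit of the lie-detection
# experiment described above. Feature names and values are illustrative.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
import numpy as np

# Each row: [response_time_s, gaze_aversion_ratio, blink_rate_hz, pupil_dilation]
X = np.array([
    [1.2, 0.10, 0.30, 0.02],
    [2.8, 0.45, 0.55, 0.11],
    [1.0, 0.05, 0.28, 0.01],
    [3.1, 0.50, 0.60, 0.13],
    [1.5, 0.20, 0.35, 0.03],
    [2.6, 0.40, 0.50, 0.10],
    [1.3, 0.15, 0.32, 0.02],
    [2.9, 0.48, 0.58, 0.12],
])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 0 = truthful, 1 = deceptive

clf = RandomForestClassifier(n_estimators=50, random_state=0)
scores = cross_val_score(clf, X, y, cv=4)
print("mean accuracy:", scores.mean())
```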

    Do People Change their Behavior when the Handler is next to the Robot?

    Get PDF
    It is increasingly common for people to work alongside robots in a variety of situations. When a robot is completing a task, the handler of the robot may be present. It is important to know how people interact with the robot when the handler is next to it. Our study focuses on whether the handler's presence can affect people's behavior toward the robot. Our experiment targets two different scenarios (handler present and handler absent) in order to identify changes in people's behavior toward the robot. Results show that in the handler-present scenario, people are less willing to interact with the robot. However, when people do interact with the robot, they tend to interact with both the handler and the robot. This suggests that researchers should consider the presence of a handler when designing for human-robot interactions.

    Taxonomy of Trust-Relevant Failures and Mitigation Strategies

    Get PDF
    We develop a taxonomy that categorizes HRI failure types and their impact on trust in order to structure the broad range of knowledge contributions. We further identify research gaps in order to support fellow researchers in the development of trustworthy robots. Studying trust repair in HRI has only recently received more attention, and we propose a taxonomy of potential trust violations and suitable repair strategies to support researchers during the development of interaction scenarios. The taxonomy distinguishes four failure types: Design, System, Expectation, and User failures, and outlines potential mitigation strategies. Based on these failures, strategies for autonomous failure detection and repair are presented, employing explanation, verification and validation techniques. Finally, a research agenda for HRI is outlined, discussing identified gaps related to the relationship between failures and human-robot trust.
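    As a minimal sketch, the four failure types named in the abstract could be represented as an enumeration mapped to example mitigation strategies; the specific strategies listed below are illustrative assumptions, not the paper's taxonomy.

```python
# Minimal sketch: representing the four failure types named in the abstract
# and mapping each to example mitigation strategies. The specific strategies
# listed here are illustrative assumptions, not the paper's taxonomy.
from enum import Enum, auto


class FailureType(Enum):
    DESIGN = auto()       # flaws introduced during robot/interaction design
    SYSTEM = auto()       # hardware or software malfunctions at runtime
    EXPECTATION = auto()  # mismatch between user expectations and capability
    USER = auto()         # errors caused by the human interaction partner


MITIGATION = {
    FailureType.DESIGN: ["verification and validation before deployment"],
    FailureType.SYSTEM: ["runtime monitoring", "graceful degradation"],
    FailureType.EXPECTATION: ["explanations of robot capabilities and limits"],
    FailureType.USER: ["clearer feedback", "user training"],
}

for failure, strategies in MITIGATION.items():
    print(failure.name, "->", ", ".join(strategies))
```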