Bringing Human Robot Interaction towards Trust and Social Engineering
Robots started their journey in books and movies; nowadays, they are becoming an
important part of our daily lives: from industrial robots, passing through entertainment
robots, and reaching social robotics in fields like healthcare or education.
An important aspect of social robotics is the human counterpart and, therefore, the interaction between humans and robots. Interactions among humans are often taken for granted since, as children, we learn how to interact with each other. In robotics, this interaction is still very immature; however, it is critical for the successful incorporation of robots into society. Human-robot interaction (HRI) is the domain that works on improving these interactions.
HRI encompasses many aspects, and a significant one is trust. Trust is the assumption that somebody or something is good and reliable, and it is critical for a developed society. Therefore, in a society in which robots take part, the trust they generate will be essential for cohabitation.
A downside of trust is overtrusting an entity: in other words, a misalignment between the projected trust and the expectation of morally correct behaviour. This effect can negatively influence and damage the interactions between agents. In the case of humans, it is usually exploited by scammers, con men and social engineers, who take advantage of people's overtrust in order to manipulate them into performing actions that may not be beneficial for the victims.
This thesis tries to shed light on how trust towards robots develops, how this trust can become overtrust, and how it can be exploited by social engineering techniques. More precisely, the following experiments were carried out: (i) Treasure Hunt, in which the robot followed a social engineering framework: it gathered personal information from the participants, built trust and rapport with them, and, at the end, exploited that trust by manipulating participants into performing a risky action.
(ii) Wicked Professor, in which a very human-like robot tried to enforce its authority to make participants obey socially inappropriate requests. Most of the participants realized that the requests were morally wrong but eventually succumbed to the robot's authority, while holding the robot morally responsible. (iii) Detective iCub, which evaluated whether the robot could be endowed with the ability to detect when its human partner was lying. Deception detection is an essential skill for social engineers and for professionals in the domains of education, healthcare and security. The robot achieved 75% accuracy in lie detection. Slight differences were also found in the behaviour participants exhibited when interacting with a human or a robot interrogator.
Lastly, this thesis approaches the topic of privacy, a fundamental human value. With the integration of robotics and technology into our society, privacy will be affected in ways we are not used to. Robots have sensors able to record and gather all kinds of data, and it is possible that this information is transmitted via the internet without the user's knowledge. This is an important aspect to consider, since a violation of privacy can heavily impact trust.
In summary, this thesis shows that robots are able to establish and improve trust during an interaction, to take advantage of overtrust, and to misuse it by applying different types of social engineering techniques, such as manipulation and authority. Moreover, robots can be enabled to pick up different human cues to detect deception, which can help both social engineers and professionals in the human sector. Nevertheless, it is of the utmost importance to make roboticists, programmers, entrepreneurs, lawyers, psychologists, and the other sectors involved aware that social robots can be highly beneficial for humans, but could also be exploited for malicious purposes.
Overtrusting robots: Setting a research agenda to mitigate overtrust in automation
There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights solicited from in-depth conversations from a multidisciplinary workshop on the subject of trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings that are situated in an eco-system perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides a ground for common understanding for overtrust in the context of HRI.
Exploring Human Compliance Toward a Package Delivery Robot
Human-Robot Interaction (HRI) research on combat robots and autonomous cars demonstrates that faulty robots significantly decrease trust. However, HRI studies consistently show people overtrust domestic robots in households, emergency evacuation scenarios, and building security. This thesis presents how two theories, cognitive dissonance and selective attention, confound domestic HRI scenarios, and uses these theories to design a novel HRI scenario with a package delivery robot in a public setting.
Over 40 undergraduates were recruited within a university library to follow a package delivery robot to three stops, under the guise of “testing its navigation around people.” The second stop was an open office which appeared private. Without labeling the packages, in 15 trials only 2 individuals entered the room at the second stop, whereas a pair of participants was much more likely to enter the room. Labeling the packages significantly increased the likelihood that individuals would enter the office.
The third stop was at the end of a long, isolated hallway blocked by a door marked
“Emergency Exit Only. Alarm will Sound.” No one seriously thought about opening
the door. Nonverbal robot prods such as waiting one minute or nudging the door were
perceived as malfunctioning behavior. To demonstrate selective attention, a second
route led to an emergency exit door in a public computer lab, with the intended
destination an office several feet away. When the robot communicated with beeps, only 45% of individuals noticed the emergency exit door. No one noticed the emergency exit door when the robot used speech commands, although its qualitative rating significantly improved.
In conclusion, this thesis shows that robots must make explicit requests to generate overtrust. Explicit interactions increase participant engagement with the robot, which increases selective attention towards their environment.
The impact of people's personal dispositions and personalities on their trust of robots in an emergency scenario
Humans should be able to trust that they can safely interact with their home companion robot. However, robots can exhibit occasional mechanical, programming or functional errors. We hypothesise that the severity of the consequences and the timing of a robot's different types of erroneous behaviours during an interaction may have different impacts on users' attitudes towards a domestic robot. First, we investigated human users' perceptions of the severity of various categories of potential errors that are likely to be exhibited by a domestic robot. Second, we used an interactive storyboard to evaluate participants' degree of trust in the robot after it performed tasks either correctly, or with 'small' or 'big' errors. Finally, we analysed the correlation between participants' responses regarding their personality, predisposition to trust other humans, their perceptions of robots, and their interaction with the robot. We conclude that there is a correlation between the magnitude of an error performed by a robot and the corresponding loss of trust by the human towards the robot. Moreover, we observed that some traits of participants' personalities (conscientiousness and agreeableness) and their disposition of trusting other humans (benevolence) significantly increased their tendency to trust a robot more during an emergency scenario.
Evaluating people's perceptions of trust in a robot in a repeated interactions study
Funding Information: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 642667 (Safety Enables Cooperation in Uncertain Robotic Environments - SECURE). KD acknowledges funding from the Canada 150 Research Chairs Program. Publisher Copyright: © 2020, Springer Nature Switzerland AG. This is a post-peer-review, pre-copyedit version of an article published as 'Rossi A., Dautenhahn K., Koay K.L., Walters M.L., Holthaus P. (2020) Evaluating People’s Perceptions of Trust in a Robot in a Repeated Interactions Study. In: Wagner A.R. et al. (eds) Social Robotics. ICSR 2020. Lecture Notes in Computer Science, vol 12483. Springer, Cham. https://doi.org/10.1007/978-3-030-62056-1_38'.
Trust has been established to be a key factor in fostering human-robot interactions. However, trust can change over time according to different factors, including a breach of trust due to a robot’s error. In this exploratory study, we observed people’s interactions with a companion robot in a real house, adapted for human-robot interaction experimentation, over three weeks. The interactions happened in six scenarios in which a robot performed different tasks under two different conditions. Each condition included fourteen tasks performed by the robot, either correctly, or with errors with severe consequences on the first or last day of interaction. At the end of each experimental condition, participants were presented with an emergency scenario to evaluate their trust in the robot. We evaluated participants’ trust in the robot by observing their decision to trust the robot during the emergency scenario, and by collecting their views through questionnaires. We concluded that there is a correlation between the timing of an error with severe consequences performed by the robot and the corresponding loss of trust of the human in the robot. In particular, people’s trust is subject to the initial mental formation.
Context-Adaptive Management of Drivers’ Trust in Automated Vehicles
Automated vehicles (AVs) that intelligently interact with drivers must build a trustworthy relationship with them. A calibrated level of trust is fundamental for the AV and the driver to collaborate as a team. Techniques that allow AVs to perceive drivers’ trust from drivers’ behaviors and react accordingly are, therefore, needed for context-aware systems designed to avoid trust miscalibrations. This letter proposes a framework for the management of drivers’ trust in AVs. The framework is based on the identification of trust miscalibrations (when drivers undertrust or overtrust the AV) and on the activation of different communication styles to encourage or warn the driver when deemed necessary. Our results show that the management framework is effective, increasing (decreasing) trust of undertrusting (overtrusting) drivers, and reducing the average trust miscalibration time periods by approximately 40%. The framework is applicable for the design of SAE Level 3 automated driving systems and has the potential to improve the performance and safety of driver–AV teams.
Cyborg Justice and the Risk of Technological-Legal Lock-In
Although Artificial Intelligence (AI) is already of use to litigants and legal practitioners, we must be cautious and deliberate in incorporating AI into the common law judicial process. Human beings and machine systems process information and reach conclusions in fundamentally different ways, with AI being particularly ill-suited for the rule application and value balancing required of human judges. Nor will “cyborg justice”—hybrid human/AI judicial systems that attempt to marry the best of human and machine decisionmaking and minimize the drawbacks of both—be a panacea. While such systems would ideally maximize the strengths of human and machine intelligence, they might also magnify the drawbacks of both. They also raise distinct teaming risks associated with overtrust, undertrust, and interface design errors, as well as second-order structural side effects. One such side effect is “technological–legal lock-in.” Translating rules and decisionmaking procedures into algorithms grants them a new kind of permanency, which creates an additional barrier to legal evolution. In augmenting the common law’s extant conservative bent, hybrid human/AI judicial systems risk fostering legal stagnation and an attendant loss of judicial legitimacy.
The Importance of Distrust in AI
In recent years, the use of Artificial Intelligence (AI) has become increasingly prevalent in a growing number of fields. As AI systems are being adopted in more high-stakes areas such as medicine and finance, ensuring that they are trustworthy is of increasing importance. This concern is prominently addressed by the development and application of explainability methods, which are purported to increase trust from users and wider society. While an increase in trust may be desirable, an analysis of literature from different research fields shows that an exclusive focus on increasing trust may not be warranted. This is well exemplified by recent developments in AI chatbots, which, while highly coherent, tend to make up facts. In this contribution, we investigate the concepts of trust, trustworthiness, and user reliance.
In order to foster appropriate reliance on AI, we need to prevent both disuse of these systems and overtrust. From our analysis of research on interpersonal trust, trust in automation, and trust in (X)AI, we identify the potential merit of the distinction between trust and distrust (in AI). We propose that, alongside trust, a healthy amount of distrust is of additional value for mitigating disuse and overtrust. We argue that by considering and evaluating both trust and distrust, we can ensure that users can rely appropriately on trustworthy AI, which can be both useful and fallible.
Comment: This preprint has not undergone peer review or any post-submission improvements or corrections. The version of record of this contribution is published in Explainable Artificial Intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26-28, 2023, Proceedings, Part III (CCIS, volume 1903) and is available at https://doi.org/10.1007/978-3-031-44070-
Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control
Designing a safe, trusted, and ethical AI may be practically impossible; however, designing AI with safe, trusted, and ethical use in mind is possible and necessary in safety- and mission-critical domains like aerospace. Safe, trusted, and ethical use of AI are often treated interchangeably; however, a system can be safely used but not trusted or ethical, have a trusted use that is not safe or ethical, and have an ethical use that is not safe or trusted. This manuscript serves as a primer to illuminate the nuanced differences between these concepts, with a specific focus on applications of Human-AI teaming in aerospace system control, where humans may be in, on, or out of the loop of decision-making.