Bringing Human Robot Interaction towards Trust and Social Engineering
Robots started their journey in books and movies; nowadays, they are becoming an
important part of our daily lives: from industrial robots, through entertainment
robots, to social robots in fields like healthcare and education.
An important aspect of social robotics is the human counterpart, and therefore the
interaction between humans and robots. Interactions among humans are often
taken for granted because, from childhood, we learn how to interact with each other. In robotics,
this interaction is still very immature, yet it is critical for a successful incorporation of
robots into society. Human-robot interaction (HRI) is the domain that works on improving
these interactions.
HRI encompasses many aspects, and a significant one is trust. Trust is the assumption that
somebody or something is good and reliable, and it is critical for a developed society.
Therefore, in a society where robots take part, the trust they can generate will be essential
for cohabitation.
A downside of trust is overtrusting an entity; in other words, a misalignment between
the trust projected onto it and the expectation of morally correct behaviour. This effect
can negatively influence and damage the interactions between agents. In the case of
humans, it is usually exploited by scammers, conmen, or social engineers, who take
advantage of people's overtrust in order to manipulate them into performing actions
that may not be beneficial for the victims.
This thesis tries to shed light on how trust towards robots develops, how this
trust can become overtrust, and how it can be exploited by social engineering techniques. More
precisely, the following experiments were carried out: (i) Treasure Hunt, in which
the robot followed a social engineering framework: it gathered personal
information from the participants, improved trust and rapport with them, and, at the
end, exploited that trust by manipulating participants into performing a risky action.
(ii) Wicked Professor, in which a very human-like robot tried to assert its authority to
make participants obey socially inappropriate requests. Most of the participants realized
that the requests were morally wrong, but eventually they succumbed to the robot's authority while holding the robot morally responsible. (iii) Detective iCub, in which it
was evaluated whether the robot could be endowed with the ability to detect when the
human partner was lying. Deception detection is an essential skill for social engineers and
for professionals in domains such as education, healthcare, and security. The robot achieved
75% accuracy in lie detection. Slight differences were also found in the
behaviour exhibited by the participants when interacting with a human versus a robot
interrogator.
Lastly, this thesis approaches the topic of privacy, a fundamental human value. With
the integration of robotics and technology into our society, privacy will be affected in ways
we are not used to. Robots have sensors able to record and gather all kinds of data, and it is
possible for this information to be transmitted via the internet without the user's
knowledge. This is an important aspect to consider, since a violation of privacy can heavily
impact trust.
In summary, this thesis shows that robots are able to establish and improve trust
during an interaction, to take advantage of overtrust, and to misuse it by applying different
types of social engineering techniques, such as manipulation and authority. Moreover,
robots can be enabled to pick up different human cues to detect deception, which can
help both social engineers and professionals in the human sector. Nevertheless, it is of
the utmost importance to make roboticists, programmers, entrepreneurs, lawyers,
psychologists, and other sectors involved aware that social robots can be highly beneficial
for humans, but that they could also be exploited for malicious purposes.
Evaluating Trust and Safety in HRI: Practical Issues and Ethical Challenges
Date of Acceptance: 11/02/2015. In an effort to increase the acceptance and persuasiveness of socially assistive robots in home and healthcare environments, HRI researchers attempt to identify factors that promote human trust and perceived safety with regard to robots. Especially in collaborative contexts in which humans are requested to accept information provided by the robot and follow its suggestions, trust plays a crucial role, as it is strongly linked to persuasiveness. As a result, human-robot trust can directly affect people's willingness to cooperate with the robot, while under- or overreliance could have severe or even dangerous consequences. Problematically, investigating trust and human perceptions of safety in HRI experiments is not a straightforward task and, in light of a number of ethical concerns and risks, proves quite challenging. This position statement highlights a few of these points based on experiences from HRI practice and raises a few important questions that HRI researchers should consider.
Robot Betrayal: a guide to the ethics of robotic deception
If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type of deception – superficial state deception – is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal because doing so captures the unique ethical harm to which it gives rise, and justifies special ethical protections against its use.
Towards the Safety of Human-in-the-Loop Robotics: Challenges and Opportunities for Safety Assurance of Robotic Co-Workers
The success of the human-robot co-worker team in a flexible manufacturing
environment where robots learn from demonstration heavily relies on the correct
and safe operation of the robot. How this can be achieved is a challenge that
requires addressing both technical as well as human-centric research questions.
In this paper we discuss the state of the art in safety assurance, existing as
well as emerging standards in this area, and the need for new approaches to
safety assurance in the context of learning machines. We then focus on robotic
learning from demonstration, the challenges these techniques pose to safety
assurance and indicate opportunities to integrate safety considerations into
algorithms "by design". Finally, from a human-centric perspective, we stipulate
that, to achieve high levels of safety and ultimately trust, the robotic
co-worker must meet the innate expectations of the humans it works with. It is
our aim to stimulate a discussion focused on the safety aspects of
human-in-the-loop robotics, and to foster multidisciplinary collaboration to
address the research challenges identified.
Getting to know Pepper: Effects of people’s awareness of a robot’s capabilities on their trust in the robot
© 2018 Association for Computing Machinery. This work investigates how human awareness of a social robot’s capabilities relates to trusting this robot to handle different tasks. We present a user study that relates knowledge at different quality levels to participants’ ratings of trust. Secondary school pupils were asked to rate their trust in the robot after three types of exposure: a video demonstration, a live interaction, and a programming task. The study revealed that the pupils’ trust is positively affected across different domains after each session, indicating that the more awareness human users have of a robot, the more they trust it.
Chief Justice Robots
Say an AI program someday passes a Turing test, because it can converse in a way indistinguishable from a human. And say that its developers can then teach it to converse—and even present an extended persuasive argument—in a way indistinguishable from the sort of human we call a “lawyer.” The program could thus become an AI brief-writer, capable of regularly winning brief-writing competitions against human lawyers.
Once that happens (if it ever happens), this Essay argues, the same technology can be used to create AI judges, judges that we should accept as no less reliable (and more cost-effective) than human judges. If the software can create persuasive opinions, capable of regularly winning opinion-writing competitions against human judges—and if it can be adequately protected against hacking and similar attacks—we should in principle accept it as a judge, even if the opinions do not stem from human judgment.