
    Can a robot catch you lying? A machine learning system to detect lies during interactions.

    Deception is a complex social skill present in human interactions. Professionals in many social fields, such as teachers, therapists and law enforcement officers, leverage deception detection techniques to support their working activities. Robots with the ability to autonomously detect deception could provide an important aid to human-human and human-robot interactions. The objective of this work is to demonstrate that it is possible to develop a lie detection system that could be implemented on robots. To this goal, we focus on human-human and human-robot interaction to understand whether there is a difference in the behavior of participants when lying to a robot or to a human. Participants were shown short movies of robberies and were then interrogated by a human and by a humanoid robot acting as "detectives". According to the instructions, subjects provided veridical responses to half of the questions and false replies to the other half. Behavioral variables such as eye movements, time to respond and eloquence were measured during the task, while personality traits were assessed before the experiment. Participants' behavior showed strong similarities during the interaction with the human and with the humanoid. Moreover, the behavioral features were used to train and test a lie detection algorithm. The results show that the selected behavioral variables are valid markers of deception in both human-human and human-robot interactions and could be exploited to effectively enable robots to detect lies.
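    The abstract does not specify the model, so the following is only a minimal sketch of the kind of pipeline it describes: the measured behavioral variables (time to respond, eye movements, eloquence) encoded as numeric features and fed to an off-the-shelf classifier. The feature encoding, the random-forest choice, and the synthetic data are all illustrative assumptions, not the authors' actual dataset or method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per answer, with the three kinds of
# behavioral variables named in the abstract.
# Columns: [time_to_respond_s, n_eye_movements, words_per_answer]
X = rng.normal(loc=[2.0, 3.0, 12.0], scale=[0.5, 1.0, 4.0], size=(200, 3))
y = rng.integers(0, 2, size=200)  # 1 = deceptive answer, 0 = truthful

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```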

    The social brain: allowing humans to boldly go where no other species has been

    The biological basis of complex human social interaction and communication has been illuminated through a coming together of various methods and disciplines. Among these are comparative studies of other species, studies of disorders of social cognition and developmental psychology. The use of neuroimaging and computational models has given weight to speculations about the evolution of social behaviour and culture in human societies. We highlight some networks of the social brain relevant to two-person interactions and consider the social signals between interacting partners that activate these networks. We make a case for distinguishing between signals that automatically trigger interaction and cooperation and ostensive signals that are used deliberately. We suggest that this ostensive signalling is needed for ‘closing the loop’ in two-person interactions, where the partners each know that they have the intention to communicate. The use of deliberate social signals can serve to increase reputation and trust and facilitates teaching. This is likely to be a critical factor in the steep cultural ascent of mankind.

    Bringing Human Robot Interaction towards Trust and Social Engineering

    Robots started their journey in books and movies; nowadays, they are becoming an important part of our daily lives: from industrial robots, through entertainment robots, to social robotics in fields like healthcare and education. An important aspect of social robotics is the human counterpart; therefore, there is an interaction between humans and robots. Interactions among humans are often taken for granted since, from childhood, we learn how to interact with each other. In robotics, this interaction is still very immature, yet it is critical for a successful incorporation of robots into society. Human-robot interaction (HRI) is the domain that works on improving these interactions. HRI encompasses many aspects, and a significant one is trust. Trust is the assumption that somebody or something is good and reliable, and it is critical for a developed society. Therefore, in a society in which robots take part, the trust they generate will be essential for cohabitation. A downside of trust is overtrusting an entity; in other words, an insufficient alignment between the projected trust and the expectation of morally correct behaviour. This effect could negatively influence and damage the interactions between agents. In the case of humans, it is usually exploited by scammers, conmen or social engineers, who take advantage of people's overtrust in order to manipulate them into performing actions that may not be beneficial for the victims. This thesis tries to shed light on the development of trust towards robots, how this trust could become overtrust, and how it could be exploited by social engineering techniques. More precisely, the following experiments were carried out: (i) Treasure Hunt, in which the robot followed a social engineering framework: it gathered personal information from the participants, improved trust and rapport with them, and finally exploited that trust to manipulate participants into performing a risky action. (ii) Wicked Professor, in which a very human-like robot tried to enforce its authority to make participants obey socially inappropriate requests. Most of the participants realized that the requests were morally wrong, but eventually they succumbed to the robot's authority while holding the robot morally responsible. (iii) Detective iCub, in which it was evaluated whether the robot could be endowed with the ability to detect when the human partner was lying. Deception detection is an essential skill for social engineers and for professionals in the domains of education, healthcare and security. The robot achieved 75% accuracy in lie detection, and slight differences were also found in the behaviour exhibited by the participants when interacting with a human or a robot interrogator. Lastly, this thesis approaches the topic of privacy, a fundamental human value. With the integration of robotics and technology into our society, privacy will be affected in ways we are not used to. Robots have sensors able to record and gather all kinds of data, and it is possible that this information is transmitted via the internet without the knowledge of the user. This is an important aspect to consider, since a violation of privacy can heavily impact trust. In summary, this thesis shows that robots are able to establish and improve trust during an interaction, to take advantage of overtrust, and to misuse it by applying different types of social engineering techniques, such as manipulation and authority. Moreover, robots can be enabled to pick up on different human cues to detect deception, which can help both social engineers and professionals in the human sector. Nevertheless, it is of the utmost importance to make roboticists, programmers, entrepreneurs, lawyers, psychologists and other involved sectors aware that social robots can be highly beneficial for humans but could also be exploited for malicious purposes.

    A Comparison of Avatar-, Video-, and Robot-Mediated Interaction on Users’ Trust in Expertise

    Communication technologies are becoming increasingly diverse in form and functionality. A central concern is the ability to detect whether others are trustworthy. Judgments of trustworthiness rely, in part, on assessments of non-verbal cues, which are affected by media representations. In this research, we compared trust formation across three media representations. We presented 24 participants with advisors represented by two of the three alternative formats: video, avatar, or robot. Unknown to the participants, one was an expert and the other was a non-expert. We observed participants’ advice-seeking behavior under risk as an indicator of their trust in the advisor. We found that most participants preferred seeking advice from the expert, but we also found a tendency to seek advice from the robot and video representations. Advice from the avatar, in contrast, was sought more rarely. Users’ self-reports support these findings. These results suggest that when users make trust assessments, the physical presence of the robot representation might compensate for the lack of identity cues.

    AI-Powered Robots for Libraries: Exploratory Questions

    With recent developments in machine learning, a subfield of artificial intelligence (AI), it no longer seems extraordinary to think that we will soon be living in a world with many robots. While the term ‘robot’ conjures up the image of a humanoid machine, a robot can take many forms, ranging from a drone or an autonomous vehicle to a therapeutic baby seal-bot. But what counts as a robot, and what kinds of robots should we expect to see at libraries? AI has made it possible to make a robot intelligent and autonomous in performing not only mechanical but also cognitive tasks, such as driving, natural language processing, translation, and face recognition. The capability of AI-powered robots far exceeds that of other, simpler and less sophisticated machines. How we will interact with these robots once they come to be in the world with us is an interesting question. Humans have a strong tendency to anthropomorphize creatures and objects they interact with, many of which are less complex than a robot. This suggests that we will be quite susceptible to projecting motives, emotions, and other human traits onto robots. For this reason, the adoption of robots raises unique concerns regarding their safety, their morality, their impact on social relationships and norms, and their potential to be used as a means for manipulation and deception. This paper explores these concerns related to the adoption of robots. It also discusses what kinds of robots we may come to see at libraries in the near future, what kinds of human-robot interactions may take place at libraries, and what type of human-robot relationship may facilitate or impede a library robot’s involvement in our information-seeking activities.

    Deception


    Averting Robot Eyes

    Home robots will cause privacy harms. At the same time, they can provide beneficial services—as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.
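    As an illustration of the first design principle above (data minimization), the sketch below shows a hypothetical perception step in which a home robot retains only the derived fact it needs and never stores the raw camera frame. The function and type names are invented for illustration; the Essay does not prescribe any particular implementation or library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MinimalObservation:
    """The only data the robot retains from a camera frame."""
    person_present: bool
    timestamp: float

def detect_person(frame: bytes) -> bool:
    # Stand-in for an on-device vision model; this stub only
    # illustrates the data flow, not real detection.
    return len(frame) > 0

def process_frame(frame: bytes, now: float) -> MinimalObservation:
    # Only the derived fact leaves this function; the raw frame is
    # never stored on the returned observation or written anywhere.
    return MinimalObservation(detect_person(frame), now)

print(process_frame(b"\x00\x01\x02", now=0.0))
```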