59 research outputs found

    Finding AI Faces in the Moon and Armies in the Clouds: Anthropomorphizing Artificial Intelligence in Military Human-Machine Interactions


    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Humanization of robots: is it really such a good idea?

    The aim of this review was to examine the pros and cons of humanizing social robots from a psychological perspective. As such, we had six goals. First, we defined what social robots are. Second, we clarified the meaning of humanizing social robots. Third, we presented the theoretical backgrounds for promoting humanization. Fourth, we reviewed empirical results on the positive and negative effects of humanization on human–robot interaction (HRI). Fifth, we presented some of the political and ethical problems raised by the humanization of social robots. Lastly, we discussed the overall effects of the humanization of robots in HRI and suggested new avenues of research and development.

    Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition

    This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice both to the robustness of the social responses robots solicit in humans and to the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots and their unsophisticated social capabilities prevent any attribution of rights to robots, which are devoid of intrinsic moral dignity and personal status. On the other hand, we argue that another form of moral consideration, not based on rights attribution, can and must be granted to robots. The reason is that relationships with robots offer human agents important opportunities to cultivate both vices and virtues, much as social interaction with other human beings does. Our argument appeals to social recognition to explain why social robots, unlike other technological artifacts, are capable of establishing quasi-social relationships with their human users as pseudo-persons. This recognition dynamic justifies seeing robots as worthy of moral consideration from a virtue-ethical standpoint, as it predicts the pre-reflective formation of persistent affective dispositions and behavioral habits capable of corrupting the human user's character. We conclude by drawing attention to a potential paradox raised by our analysis and by examining the main conceptual conundrums that our approach has to face.

    Occupational health and safety issues in human-robot collaboration: State of the art and open challenges

    Human-Robot Collaboration (HRC) refers to the interaction of workers and robots in a shared workspace. By combining the strengths of industrial automation with the unique cognitive capabilities of humans, HRC is key to moving towards advanced and sustainable production systems. Although the overall safety of collaborative robotics has increased over time, further research is needed to allow humans to operate alongside robots with awareness and trust. Numerous safety concerns remain open, and new or enhanced technical, procedural and organizational measures have to be investigated to design and implement inherently safe and ergonomic automation solutions that align system performance with human safety. Therefore, a bibliometric analysis and a literature review are carried out in the present paper to provide a comprehensive overview of Occupational Health and Safety (OHS) issues in HRC. As a result, the most researched topics and application areas, as well as possible future lines of research, are identified. The reviewed articles stress the central role played by humans during collaboration, underlining the need to integrate the human factor into hazard analysis and risk assessment. Human-centered design and cognitive engineering principles also require further investigation to increase worker acceptance and trust during collaboration. Deeper studies are needed in the healthcare sector to investigate the social and ethical implications of HRC. Whatever the application context, the implementation of increasingly advanced technologies is fundamental to overcoming current HRC safety concerns, designing low-risk HRC systems while ensuring system productivity.

    See No Evil, Hear No Evil: How Users Blindly Overrely on Robots with Automation Bias

    Recent developments in generative artificial intelligence show how quickly users carelessly defer to intelligent systems, ignoring the systems' vulnerabilities and focusing on their superior capabilities. This is detrimental when system failures are ignored. This paper investigates this mindless overreliance on systems, defined as automation bias (AB), in human-robot interaction. We conducted two experimental studies (N1 = 210, N2 = 438) with social robots in a corporate setting to investigate the psychological mechanisms and influencing factors of AB. In particular, users experience perceptual and behavioral AB with the robot, which is enhanced by robot competence depending on task complexity and is even stronger for emotional than for analytical tasks. Surprisingly, robot reliability negatively affected AB. We also found a negative indirect-only mediation of AB on robot satisfaction. Finally, we provide implications for the appropriate use of robots to prevent employees from using them as a self-sufficient system instead of a supporting system.

    Investigating Human Perceptions of Trust and Social Cues in Robots for Safe Human-Robot Interaction in Human-oriented Environments

    As robots increasingly take part in daily living activities, humans will have to interact with them in domestic and other human-oriented environments. This thesis envisages a future where autonomous robots could be used as home companions to assist and collaborate with their human partners in unstructured environments without the support of any roboticist or expert. To realise such a vision, it is important to identify the factors (e.g. trust, participants' personalities and backgrounds) that influence people to accept robots as companions and trust the robots to look after their well-being. I am particularly interested in the possibility of robots using social behaviours and natural communication as a repair mechanism to positively influence humans' sense of trust and companionship towards the robots. The main reason is that trust can change over time due to different factors (e.g. perceived erroneous robot behaviours). In this thesis, I provide guidelines for a robot to regain human trust by adopting certain human-like behaviours. It can be expected that domestic robots will exhibit occasional mechanical, programming or functional errors, as occurs with any other electrical consumer device: for example, software errors, dropping objects due to gripper malfunctions, picking up the wrong object, or faulty navigation due to unclear camera images or noisy laser scanner data. It is therefore important for a domestic robot to behave acceptably when exhibiting and recovering from an error situation. In this context, several open questions need to be addressed regarding both individuals' perceptions of the errors and robots, and the effects of these on people's trust in robots. As a first step, I investigated how the severity of the consequences and the timing of a robot's different types of erroneous behaviours during an interaction may have different impacts on users' attitudes towards a domestic robot. I concluded that there is a correlation between the magnitude of an error performed by the robot and the corresponding loss of the human's trust in the robot. In particular, people's trust was strongly affected by robot errors that had severe consequences. This led me to investigate whether people's awareness of robots' functionalities may affect their trust in a robot. I found that people's acceptance of and trust in the robot may be affected by their knowledge of the robot's capabilities and limitations, differently according to the participants' age and the robot's embodiment. In order to deploy robots in the wild, strategies for mitigating the loss of, and regaining, people's trust in robots in case of errors need to be implemented. In the following three studies, I assessed whether a robot with awareness of human social conventions would increase people's trust in the robot. My findings showed that people almost blindly trusted both a social and a non-social robot in scenarios with non-severe error consequences. In contrast, people who interacted with a social robot did not trust its suggestions in a scenario with a higher-risk outcome. Finally, I investigated the effects of robots' errors on people's trust in a robot over time. The findings showed that participants' judgement of a robot is formed during the first stage of their interaction; therefore, people are more inclined to lose trust in a robot if it makes big errors at the beginning of the interaction.
The findings from the Human-Robot Interaction experiments presented in this thesis will contribute to an advanced understanding of the trust dynamics between humans and robots for a long-lasting and successful collaboration.

    A roboethics framework for the development and introduction of social assistive robots in elderly care

    There is an emerging “aging phenomenon” worldwide. It is likely that we will require the introduction of assistive technologies that can support caregivers in providing elderly care. Such technologies should be designed in ways that promote high levels of human dignity and quality of life through the aging process. Social Assistive Robots (SARs) demonstrate high potential for complementing elderly care when it comes to cognitive assistance, entertainment, communication and supervision. However, such close human-robot interactions (HRIs) encompass a rich set of ethical scenarios that need to be addressed before SARs are introduced into mass markets. To date, the HRI benchmarks of “Imitation”, “Safety”, “Autonomy”, “Privacy”, “Scalability”, “Social success” and “Understanding of the domain” are the only guidelines available to inform SAR developers when building robotic prototypes for human assistance. However, these HRI benchmarks are broad and lack the theoretical background needed to understand potential ethical issues in elderly care. Further, there is little guidance for either developers or those involved in the provision of care regarding the appropriate introduction of SARs. In this research, the current HRI benchmarks are reviewed alongside the core ethical principles of beneficence, non-maleficence, autonomy and justice, together with a social care ethos. Based on this interpretation, practical robotics workshops were conducted in five care and extra care institutions with the direct participation of elderly groups, caregivers and relatives. “In-situ” robotics demonstrations, informal interviews and observations were conducted, investigating human behaviours, attitudes, expectations, concerns, and levels of acceptance towards the introduction of SARs in elderly care settings. Following a thematic analysis of the findings, a roboethics framework is proposed to support the research and development of SARs. The developed framework highlights the importance of the selection, categorization and completion of relevant HRI benchmarks, HRI templates, HRI supervision schemes and ethical specifications for SAR applications.