
    Can You Activate Me? From Robots to Human Brain

    The effectiveness of social robots has been widely recognized in different contexts of humans' daily life, but still little is known about the brain areas activated by observing or interacting with a robot. Research combining neuroscience, cognitive science and robotics can provide new insights into both the functioning of our brain and the implementation of robots. Behavioural studies on social robots have shown that the social perception of robots is influenced by at least two factors: physical appearance and behavior (Marchetti et al., 2018). How can neuroscience explain such findings? To date, studies have been conducted through the use of both EEG and fMRI techniques to investigate the brain areas involved in human-robot interaction. These studies have mainly addressed brain activations in response to paradigms involving either the performance of an action or the charge of an emotional component.

    Robot Mindreading and the Problem of Trust

    This paper raises three questions regarding the attribution of beliefs, desires, and intentions to robots. The first one is whether humans in fact engage in robot mindreading. If they do, this raises a second question: does robot mindreading foster trust towards robots? Both of these questions are empirical, and I show that the available evidence is insufficient to answer them. Now, if we assume that the answer to both questions is affirmative, a third and more important question arises: should developers and engineers promote robot mindreading in view of their stated goal of enhancing transparency? My worry here is that by attempting to make robots more mind-readable, they are abandoning the project of understanding automatic decision processes. Features that enhance mind-readability are prone to make the factors that determine automatic decisions even more opaque than they already are. And current strategies to eliminate opacity do not enhance mind-readability. The last part of the paper discusses different ways to analyze this apparent trade-off and suggests that a possible solution must adopt tolerable degrees of opacity that depend on pragmatic factors connected to the level of trust required for the intended uses of the robot.

    Robot Vulnerability and the Elicitation of User Empathy

    This paper describes a between-subjects Amazon Mechanical Turk study (n = 220) that investigated how a robot's affective narrative influences its ability to elicit empathy in human observers. We first conducted a pilot study to develop and validate the robot's affective narratives. Then, in the full study, the robot used one of three different affective narrative strategies (funny, sad, neutral) while becoming less functional at its shopping task over the course of the interaction. As the functionality of the robot degraded, participants were repeatedly asked if they were willing to help the robot. The results showed that conveying a sad narrative significantly influenced the participants' willingness to help the robot throughout the interaction and determined whether participants felt empathetic toward the robot throughout the interaction. Furthermore, a higher amount of past experience with robots also increased the participants' willingness to help the robot. This work suggests that affective narratives can be useful in short-term interactions that benefit from emotional connections between humans and robots. (Published by IEEE; 8 pages, 4 figures; 31st IEEE International Conference on Robot & Human Interactive Communication, RO-MAN 2022.)

    Trusting Intentions Towards Robots in Healthcare: A Theoretical Framework

    Within the next decade, robots (intelligent agents that are able to perform tasks normally requiring human intelligence) may become more popular when delivering healthcare services to patients. The use of robots in this way may be daunting for some members of the public, who may not understand this technology and deem it untrustworthy. Others may be excited to use and trust robots to support their healthcare needs. It is argued that (1) context plays an integral role in Information Systems (IS) research and (2) technology demonstrating anthropomorphic or system-like features impacts the extent to which an individual trusts the technology. Yet, there is little research which integrates these two concepts within one study in healthcare. To address this gap, we develop a theoretical framework that considers trusting intentions towards robots based on the interaction of humans and robots within the contextual landscape of delivering healthcare services. This article presents a theory-based approach to developing effective trustworthy intelligent agents at the intersection of IS and Healthcare.

    Young children’s empathy towards a robot dog relative to a stuffed toy dog

    This study examined young children’s empathy towards an interactive entity versus non-interactive agents, and whether the interactivity or the appearance of an agent matters more for children’s empathy. Preschoolers (5-6 years of age, N=69) watched videos of three agents: clips introducing each agent, clips of the agent being struck by human hands, and clips of the agent placed in a box that was struck by human hands. All three agents were non-living entities: an interactive robot dog with a metal surface, a non-interactive stuffed toy dog (resembling a real dog), and a stone. The preschoolers were then asked a list of questions to obtain data indicating their empathy towards each agent. The results revealed that the young children ascribed more anthropomorphism to the robot dog than to the stuffed toy dog, while empathy towards the two showed no significant difference.

    Human-Robot Interaction: Mapping Literature Review and Network Analysis

    Organizations increasingly adopt social robots as additions to real-life workforces, which requires knowledge of how humans react to and work with robots. The longstanding research on Human-Robot Interaction (HRI) offers relevant insights, but the existing literature reviews are limited in their ability to guide theory development and practitioners in sustainably employing social robots because the reviews lack a systematic synthesis of HRI concepts, relationships, and ensuing effects. This study offers a mapping review of the past ten years of HRI research. With the analysis of 68 peer-reviewed journal articles, we identify shifting foci, for example, towards more application-specific empirical investigations, and the most prominent concepts and relationships investigated in connection with social robots, for example, robot appearance. The results offer Information Systems scholars and practitioners an initial knowledge base and nuanced insights into key predictors and outcome variables that can hinder and foster social robot adoption in the workplace.

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Effects of Victim Gendering and Humanness on People’s Responses to the Physical Abuse of Humanlike Agents

    With the deployment of robots in public realms, researchers are seeing more cases of abusive disinhibition towards robots. Because robots embody gendered identities, poor navigation of antisocial dynamics may reinforce or exacerbate gender-based marginalization. Consequently, it is essential for robots to recognize and effectively head off abuse. Given extensions of gendered biases to robotic agents, as well as associations between an agent's human likeness and the experiential capacity attributed to it, we quasi-manipulated the victim's humanness (human vs. robot) and gendering (via the inclusion of stereotypically masculine vs. feminine cues in their presentation) across four video-recorded reproductions of the interaction. Analysis from 422 participants, each of whom watched one of the four videos, indicates that the intensity of emotional distress felt by an observer is associated with their gender identification and support for social stratification, along with the victim's gendering—further underscoring the criticality of robots' social intelligence.

    Social cognition in the age of human–robot interaction

    Artificial intelligence advances have led to robots endowed with increasingly sophisticated social abilities. These machines speak to our innate desire to perceive social cues in the environment, as well as the promise of robots enhancing our daily lives. However, a strong mismatch still exists between our expectations and the reality of social robots. We argue that careful delineation of the neurocognitive mechanisms supporting human–robot interaction will enable us to gather insights critical for optimising social encounters between humans and robots. To achieve this, the field must incorporate human neuroscience tools including mobile neuroimaging to explore long-term, embodied human–robot interaction in situ. New analytical neuroimaging approaches will enable characterisation of social cognition representations on a finer scale using sensitive and appropriate categorical comparisons (human, animal, tool, or object). The future of social robotics is undeniably exciting, and insights from human neuroscience research will bring us closer to interacting and collaborating with socially sophisticated robots.