21 research outputs found

    Suggestions for a revision of the European smart robot liability regime

    In recent years, the need to regulate robots and Artificial Intelligence, together with the urgency of reshaping the civil liability framework, has become apparent in Europe. Although civil liability has been the subject of many studies and resolutions, multiple attempts to harmonize EU tort law have so far been unsuccessful; only the liability of producers for defective products has been harmonized. In 2021, by publishing the AI Act proposal, the European Commission achieved the goal of regulating AI at the European level, classifying smart robots as "high-risk systems". This new piece of legislation, although it tackles important issues, does not focus on liability rules. However, regulating the responsibility of developers and manufacturers of robots and AI systems, in order to avoid a fragmented legal framework across the EU and an uneven application of liability rules in each Member State, remains an important issue that raises many concerns in the industry sector. In particular, deep learning techniques need to be carefully regulated, as they challenge the traditional liability paradigm: it is often impossible to know the reasoning underlying a model's output, and neither the programmer nor the manufacturer can predict the AI's behaviour. For this reason, some authors have argued that liability should be taken away from producers and programmers when robots are capable of acting autonomously from their original design, while others have proposed a strict liability regime. This article explores liability issues concerning AI and robots with regard to users, producers, and programmers, especially when machine learning techniques are involved, and suggests some regulatory solutions for European lawmakers.

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    (I’m) Happy to Help (You): The Impact of Personal Pronoun Use In Customer-Firm Interactions

    In responding to customer questions or complaints, should marketing agents linguistically "put the customer first" by using certain personal pronouns? Customer orientation theory, managerial literature, and surveys of managers, customer service representatives, and consumers suggest that firm agents should emphasize how "we" (the firm) serve "you" (the customer), while deemphasizing "I" (the agent) in these customer-firm interactions. We find evidence of this language pattern in use at over 40 firms. However, we theorize and demonstrate that these personal pronoun emphases are often suboptimal. Five studies using lab experiments and field data reveal that firm agents who refer to themselves using "I" rather than "we" pronouns increase customer perceptions that the agent feels and acts on their behalf. In turn, these positive perceptions of empathy and agency lead to increased customer satisfaction, purchase intentions, and purchase behavior. Further, we find that customer-referencing "you" pronouns have little impact on these outcomes, and can sometimes have negative consequences. These findings enhance our understanding of how, when, and why language use impacts social perception and behavior, and provide valuable insights for marketers.
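The pronoun emphasis the study manipulates ("I" vs. "we" vs. "you") can be sketched as a simple frequency tally over an agent's reply. This is a hypothetical illustration of the linguistic contrast, not the authors' coding scheme; the pronoun families and example sentence are assumptions for the sketch.

```python
import re
from collections import Counter

# Pronoun families of interest: agent-referencing ("I"), firm-referencing
# ("we"), and customer-referencing ("you") forms. Assumed word lists for
# illustration only.
PRONOUNS = {
    "agent_singular": {"i", "me", "my", "mine", "myself"},
    "firm_plural": {"we", "us", "our", "ours", "ourselves"},
    "customer": {"you", "your", "yours", "yourself"},
}

def pronoun_profile(text: str) -> Counter:
    """Count occurrences of each pronoun family in `text`."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for token in tokens:
        for family, words in PRONOUNS.items():
            if token in words:
                counts[family] += 1
    return counts

reply = "I'm happy to help you with your order; we value your business."
print(pronoun_profile(reply))
```

A tally like this makes the contrast between "I"-heavy and "we"-heavy replies measurable, which is the kind of pattern the study documents across firms.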

    Sensorimotor Oscillations During a Reciprocal Touch Paradigm With a Human or Robot Partner

    Robots provide an opportunity to extend research on the cognitive, perceptual, and neural processes involved in social interaction. This study examined how sensorimotor oscillatory electroencephalogram (EEG) activity can be influenced by the perceived nature of a task partner – human or robot – during a novel “reciprocal touch” paradigm. Twenty adult participants viewed a demonstration of a robot that could “feel” tactile stimulation through a haptic sensor on its hand and “see” changes in light through a photoreceptor at the level of the eyes; the robot responded to touch or changes in light by moving a contralateral digit. During EEG collection, participants engaged in a joint task that involved sending tactile stimulation to a partner (robot or human) and receiving tactile stimulation back. Tactile stimulation sent by the participant was initiated by a button press and was delivered 1500 ms later via an inflatable membrane on the hand of the human or on the haptic sensor of the robot partner. Stimulation to the participant’s finger (from the partner) was sent on a fixed schedule, regardless of partner type. We analyzed activity of the sensorimotor mu rhythm during anticipation of tactile stimulation to the right hand, comparing mu activity at central electrode sites when participants believed that tactile stimulation was initiated by a robot or a human, and to trials in which “nobody” received stimulation. There was a significant difference in contralateral mu rhythm activity between anticipating stimulation from a human partner and the “nobody” condition. This effect was less pronounced for anticipation of stimulation from the robot partner. Analyses also examined beta rhythm responses to the execution of the button press, comparing oscillatory activity when participants sent tactile stimulation to the robot or the human partner. 
The extent of beta rebound at frontocentral electrode sites following the button press differed between conditions, with a significantly larger increase in beta power when participants sent tactile stimulation to a robot partner compared to the human partner. This increase in beta power may reflect greater predictability in event outcomes. This new paradigm and the novel findings advance the neuroscientific study of human–robot interaction.
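The core quantity compared across partner conditions — oscillatory power in a frequency band such as mu (roughly 8–13 Hz) at a central electrode — can be sketched as follows. This is a minimal illustration using a Welch spectral estimate on a synthetic epoch, not the authors' analysis pipeline; the sampling rate and band edges are assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 500              # Hz, assumed sampling rate
MU_BAND = (8.0, 13.0)  # sensorimotor mu band, approximate edges

def band_power(epoch: np.ndarray, fs: float, band: tuple) -> float:
    """Average power spectral density within `band` for a 1-D EEG epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

# Synthetic 2-second "epoch": a 10 Hz oscillation plus noise, standing in
# for activity at a central electrode during anticipation of touch.
t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

print(f"mu-band power: {band_power(epoch, FS, MU_BAND):.4f}")
```

In a study like this one, such band-power estimates would be computed per trial and condition (human partner, robot partner, "nobody") and then compared statistically.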

    Sharing Stress With a Robot: What Would a Robot Say?

    With the prevalence of mental health problems today, designing human-robot interaction for mental health intervention is not only possible, but critical. The current experiment examined how three types of robot disclosure (emotional, technical, and by-proxy) affect robot perception and human disclosure behavior during a stress-sharing activity. Emotional robot disclosure resulted in the lowest robot perceived safety. Post-hoc analysis revealed that increased perceived stress predicted reduced human disclosure, user satisfaction, robot likability, and future robot use. Negative attitudes toward robots also predicted reduced intention for future robot use. This work informs the design of robot disclosure, as well as how individual attributes, such as perceived stress, can impact human-robot interaction in a mental health context.

    From automata to animate beings: the scope and limits of attributing socialness to artificial agents

    Understanding the mechanisms and consequences of attributing socialness to artificial agents has important implications for how we can use technology to lead more productive and fulfilling lives. Here, we integrate recent findings on the factors that shape behavioral and brain mechanisms that support social interactions between humans and artificial agents. We review how visual features of an agent, as well as knowledge factors within the human observer, shape attributions across dimensions of socialness. We explore how anthropomorphism and dehumanization further influence how we perceive and interact with artificial agents. Based on these findings, we argue that the cognitive reconstruction within the human observer is likely to be far more crucial in shaping our interactions with artificial agents than previously thought, while the artificial agent's visual features are possibly of lesser importance. We combine these findings to provide an integrative theoretical account based on the "like me" hypothesis, and discuss the key role played by the Theory-of-Mind network, especially the temporal parietal junction, in the shift from mechanistic to social attributions. We conclude by highlighting outstanding questions on the impact of long-term interactions with artificial agents on the behavioral and brain mechanisms of attributing socialness to these agents.

    Language and cognition: an architecture of attributes in human-machine communication for generating empathy

    More and more experts speak of a "Robot Society". Machines of this kind are increasingly present in our personal and professional lives, and they are also arriving, inevitably, in the world of journalism. Agencies such as Associated Press and outlets such as Forbes, the Los Angeles Times, ProPublica, and the Finnish public broadcaster YLE are already using robots for automated content generation. But merely using artificial intelligence to generate content does not necessarily meet the expectations of professionals or audiences, a question that, for the moment, has not been addressed with the dedication that such an important aspect of robot design requires. In this research, we consider the possible future creation of a virtual news presenter, capable of gathering current events and "serving" them in the form of news. In short, if we had to choose a robot to deliver the news and media content to us, what characteristics should it have? Could we design and implement a cybernetic "Susana Grisso" or "Iñaki Gabilondo", capable of generating engagement among their audiences? Our analysis seeks to define the main attributes of the model of robot-human communication, using examples from film. The main objective is to identify which of these attributes are relevant to audiences, so that their engagement with the machine and the perceived empathy are greater.