
    Crossmodal content binding in information-processing architectures

    Operating in a physical context, an intelligent robot faces two fundamental problems. First, it needs to combine information from its different sensors to form a representation of the environment that is more complete than any single sensor could provide on its own. Second, it needs to combine high-level representations (such as those for planning and dialogue) with its sensory information, to ensure that the interpretations of these symbolic representations are grounded in the situated context. Previous approaches to this problem have used techniques such as (low-level) information fusion, ontological reasoning, and (high-level) concept learning. This paper presents a framework in which these and other approaches can be combined to form a shared representation of the current state of the robot in relation to its environment and other agents. Preliminary results from an implemented system are presented to illustrate how the framework supports behaviours commonly required of an intelligent robot.
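    The abstract does not spell out the binding mechanism, so the following is only a minimal, hypothetical sketch of the general idea of crossmodal binding rather than the paper's architecture: modality-specific proxies are merged into shared unions whenever their attribute values are compatible, giving later processes a single grounded view of each entity (the names Proxy, Union, bind, and the attribute keys are illustrative assumptions).

```python
# Hypothetical sketch of crossmodal content binding, not the paper's implementation.
# Proxies produced by different modalities (vision, dialogue, ...) are grouped into
# unions when none of their shared attributes disagree.
from dataclasses import dataclass, field

@dataclass
class Proxy:
    modality: str                          # e.g. "vision" or "dialogue"
    attributes: dict = field(default_factory=dict)

@dataclass
class Union:
    proxies: list = field(default_factory=list)

    @property
    def attributes(self):
        # Merged view of all bound proxies' attributes.
        merged = {}
        for p in self.proxies:
            merged.update(p.attributes)
        return merged

def compatible(proxy, union):
    """A proxy can bind to a union if no attribute they both define disagrees."""
    u = union.attributes
    return all(u.get(k, v) == v for k, v in proxy.attributes.items())

def bind(proxies):
    """Greedily assign each proxy to the first compatible union, or start a new one."""
    unions = []
    for p in proxies:
        target = next((u for u in unions if compatible(p, u)), None)
        if target is None:
            target = Union()
            unions.append(target)
        target.proxies.append(p)
    return unions

if __name__ == "__main__":
    observed  = Proxy("vision",   {"colour": "red",  "shape": "mug"})
    mentioned = Proxy("dialogue", {"colour": "red",  "referent": "the red mug"})
    other     = Proxy("vision",   {"colour": "blue", "shape": "box"})
    for u in bind([observed, mentioned, other]):
        print(sorted(p.modality for p in u.proxies), u.attributes)
```

    Running the example merges the visual and dialogue descriptions of the red mug into one union, while the blue box remains in a union of its own; that shared union is the kind of grounded representation the abstract says planning and dialogue need.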

    Human-Machine Communication: Complete Volume. Volume 1

    This is the complete volume of Human-Machine Communication, Volume 1.

    Opening Space for Theoretical, Methodological, and Empirical Issues in Human-Machine Communication

    This journal offers a space dedicated to theorizing about, empirically researching, and discussing human-machine communication (HMC), a new form of communication with digital interlocutors that has recently emerged and urgently needs to be analyzed and understood. There is a need to properly address the model of this specific communication as well as the roles, objectives, functions, experiences, practices, and identities of the interlocutors involved, both human and digital. Reading these seven articles is a rewarding intellectual exercise for entering this new field of research on Human-Machine Communication. The present volume contributes substantially at both the theoretical and empirical levels by outlining this new field of research, offering new perspectives and models, and inspiring new paths of research.

    Developmental changes in perceived moral standing of robots

    We live in an age where robots are increasingly present in the social and moral world. Here, we explore how children and adults think about the mental lives and moral standing of robots. In Experiment 1 (N = 116), we found that children credited humans and robots with more mental life and vulnerability to harm than an anthropomorphized control (i.e., a toy bear). In Experiment 2 (N = 157), we found that, relative to children, adults ascribed less mental life and vulnerability to harm to robots. In Experiment 3 (N = 152), we modified our experiment to be within-subjects and measured beliefs concerning moral standing. Though younger children again appeared willing to assign mental capacities to robots, particularly those related to experience (e.g., the capacity to feel hunger), older children and adults did so to a lesser degree. This diminished attribution of mental life tracked with diminished ratings of robot moral standing. These findings inform ongoing debates concerning emerging attitudes about artificial life.

    Eerie Prostheses and Kinky Strap-Ons: On the Ableist Ideology of Mori’s Uncanny Valley

    In his paper 'The Uncanny Valley' (1970), Masahiro Mori advises designers to avoid high degrees of human likeness in prosthetic body parts in order not to evoke uncanniness. Building on a discussion of the difference in the commonly experienced uncanniness of 'realistic'-looking prosthetic hands and strap-on dildos, this paper argues that Mori's hypothesis and his approach to design are based on an essentialist concept of the human body, which is complicit in the persistence of ableist body ideologies. Reading recent empirical research on the uncanny valley in the context of Jentsch's and Freud's writing, the paper suggests that the design of body-related artefacts should promote, rather than avoid, repeated uncanny experiences. Such a project aims to diminish uncanniness through 'force of habit', thus facilitating the acceptance of a broader variety of bodies as equal.

    Alienation and Recognition - The Δ Phenomenology of the Human–Social Robot Interaction (HSRI)

    A crucial philosophical problem of social robots is to what extent they perform a kind of sociality when interacting with humans. Scholarship diverges between those who maintain that humans and social robots cannot by default have social interactions and those who argue for the possibility of an asymmetric sociality. Against this dichotomy, we argue in this paper for a holistic approach called the “Δ phenomenology” of HSRI (Human–Social Robot Interaction). In the first part of the paper, we analyse the semantics of an HSRI: what leads a human being (x) to assign or receive a meaning of sociality (z) through interacting with a social robot (y). We then question the ontological structure underlying HSRIs, suggesting that HSRIs may lead to a peculiar kind of user alienation. By combining all these variables, we formulate some final recommendations for an ethics of social robots.

    Sharing Stress With a Robot: What Would a Robot Say?

    With the prevalence of mental health problems today, designing human-robot interaction for mental health intervention is not only possible, but critical. The current experiment examined how three types of robot disclosure (emotional, technical, and by-proxy) affect robot perception and human disclosure behavior during a stress-sharing activity. Emotional robot disclosure resulted in the lowest perceived safety of the robot. Post-hoc analysis revealed that increased perceived stress predicted reduced human disclosure, user satisfaction, robot likability, and future robot use. Negative attitudes toward robots also predicted reduced intention for future robot use. This work informs the design of robot disclosure, as well as how individual attributes, such as perceived stress, can impact human-robot interaction in a mental health context.

    Moral Psychology and Artificial Agents (Part Two): The Transhuman Connection

    Part 1 concluded by introducing the concept of the new ontological category, explaining how our cognitive machinery has no natural and intuitive understanding of robots and AIs, unlike the understanding it has of animals, tools, and plants. Here the authors review findings in the moral psychology of robotics and transhumanism. They show that many peculiarities arise from the interaction of human cognition with robots, AIs, and human enhancement technologies. Robots are treated similarly to humans, but not identically. Some of these peculiarities are explained by mind perception mechanisms. On the other hand, it seems that transhumanist technologies like brain implants and mind uploading are condemned, and that this condemnation is motivated by our innate sexual disgust sensitivity mechanisms.

    Designing companions, designing tools: social robots, developers, and the elderly in Japan

    Ce mĂ©moire de maĂźtrise trace la gĂ©nĂ©alogie d’un robot social, de sa conception Ă  ses diffĂ©rentes utilisations et la maniĂšre dont les utilisateurs interagissent avec. A partir d’un terrain de six mois dans une start-up et deux maisons de retraite au Japon, j’interroge la crĂ©ation de Pepper, un robot social crĂ©e par la compagnie japonais SoftBank. Pepper a Ă©tĂ© crĂ©Ă© de façon Ă  ĂȘtre humanoĂŻde mais pas trop, ainsi que perçu comme adorable et charmant. Par la suite, je dĂ©cris comment Pepper et d’autres robots sociaux sont utilisĂ©s, Ă  la fois par des dĂ©veloppeurs, mais aussi par des personnes ĂągĂ©es, et je souligne une tension existante entre leur utilisation comme des compagnons et des outils. En me basant sur l’anthropologie ontologique et la phĂ©nomĂ©nologie, j’examine la construction du robot comme une entitĂ© avec laquelle il est possible d’interagir, notamment Ă  cause de sa conception en tant qu’acteur social, ontologiquement ambigu, et qui peut exprimer de l’affect. En m’intĂ©ressant aux interactions multimodales, et en particulier le toucher, je classifie trois fonctions remplies par l’interaction : dĂ©couverte, contrĂŽle, et l’expression de l’affect. Par la suite, je questionne ces actes d’agir vers et s’ils peuvent ĂȘtre compris comme une interaction, puisqu’ils n’impliquent pas que le robot soit engagĂ©. J’argumente qu’une interaction est un Ă©change de sens entre des agents engagĂ©s et incarnĂ©s. Il y a effectivement parfois un Ă©change de sens entre le robot et son utilisateur, et le robot est un artefact incarnĂ©. Cependant, seule l’impression d’intersubjectivitĂ© est nĂ©cessaire Ă  l’interaction, plutĂŽt que sa rĂ©elle prĂ©sence.This master’s thesis traces a genealogy of a social robot through its conception to its various uses and the ways users interact with it. Drawing on six months of fieldwork in a start-up and two nursing homes in Japan, I first investigate the genesis of a social robot created by SoftBank, a Japanese multinational telecommunications company. This social robot is quite humanlike, made to be cute and have an adorable personality. While developers constitute one of the user populations, this robot, along with several others, is also used by elderly residents in nursing homes. By analyzing the uses of these populations, I underline the tension between the social robot as a companion and a tool. Drawing on ontological anthropology and phenomenology I look at how the robot is constructed as an entity that can be interacted with, through its conception as an ontologically ambiguous, social actor, that can express affect. Looking at multimodal interaction, and especially touch, I then classify three functions they fulfill: discovery, control, and the expression of affect, before questioning whether this acting towards the robot that does not imply acting from the robot, can be considered a form of interaction. I argue that interaction is the exchange of meaning between embodied, engaged participants. Meaning can be exchanged between robots and humans and the robot can be seen as embodied, but only the appearance of intersubjectivity is enough, rather than its actual presence