Effects of Victim Gendering and Humanness on People’s Responses to the Physical Abuse of Humanlike Agents
With the deployment of robots in public realms, researchers are seeing more cases of abusive disinhibition towards robots. Because robots embody gendered identities, poor navigation of antisocial dynamics may reinforce or exacerbate gender-based marginalization. Consequently, it is essential for robots to recognize and effectively head off abuse.
Given extensions of gendered biases to robotic agents, as well as associations between an agent's human likeness and the experiential capacity attributed to it, we quasi-manipulated the victim's humanness (human vs. robot) and gendering (via the inclusion of stereotypically masculine vs. feminine cues in their presentation) across four video-recorded reproductions of the interaction.
Analysis of data from 422 participants, each of whom watched one of the four videos, indicates that the intensity of emotional distress felt by an observer is associated with the observer's gender identification and support for social stratification, along with the victim's gendering—further underscoring the criticality of robots' social intelligence.
Using social robots to encourage honest behaviours
This thesis presents a series of studies examining whether robots can promote more honest behaviour from people who are tempted to behave dishonestly. In Study 1 we see that a robot merely presenting gaze behaviour inhibits cheating, whereas a robot engaging in small talk does not. In Study 2 we see that participants cheated to an equal extent whether doing the task alone in their homes or with a video of a robot looking at them. In Study 3 we find that endowing a robot with situation awareness (showing awareness of the participant's behaviour) decreased cheating across the game. In Study 4 we see that priming participants' relational self-concept does not enhance the situation-awareness effect on cheating. In Studies 5 and 6 we explore participants' perceptions and find that people consider it wrong to be dishonest towards a robot; however, they would feel low levels of guilt and justify dishonesty by the robot's lack of capabilities, its lack of presence, and a human tendency towards dishonesty. When prompted to evaluate what others' or their own attitudes towards dishonesty would be, manipulating the caring behaviour of the robot shows no effect: people generally think others would be dishonest while holding themselves to a more neutral stance. Interestingly, people who show more negative attitudes towards robots tend to report that both others and they themselves would act more dishonestly. These are important considerations for the future development of robots that work alongside humans.