129 research outputs found

    Detect, Bite, Slam

    This paper explores the influences, ideas, and motivations behind my MFA thesis exhibition. It focuses primarily on how I developed the work for the show in connection with my previous work, as well as with work created by other artists who have explored the impacts of new media in the last decade. With the advancement of social media, digital technologies have shed their infamous coldness. Our perceptions and the metaphors in our language are projected onto the machines we create, while in return those machines shape and redefine our lives. It becomes increasingly difficult to speak of dichotomies such as machine-human, virtual-real, and nature-culture. With the aid of some humor, I attempt to reflect on the marriage of these old oppositions, and this paper discusses the foundations of these ideas as well as my practice in general.

    Using social robots to encourage honest behaviours

    This thesis presents a series of studies to understand whether robots can promote more honest behaviour from people when they are tempted to behave dishonestly. In Study 1 we see that a robot merely presenting gaze behaviour inhibits cheating, whereas a robot making small talk does not. In Study 2 we see that participants cheated to an equal extent when doing the task alone in their homes or with a video of a robot looking at them. In Study 3 we find that including situation awareness in a robot (showing awareness of the participant's behaviour) decreased cheating across the game. In Study 4 we see that priming participants for their relational self-concept does not enhance the situation-awareness effect on cheating. In Studies 5 and 6 we explore participants' perceptions, and we see that people consider it wrong to be dishonest towards a robot; however, they would feel low levels of guilt and justify the dishonesty by the robot's lack of capabilities, its lack of presence, and a human tendency towards dishonesty. When participants were prompted to evaluate what others' or their own attitudes towards dishonesty would be, manipulating the caring behaviour of the robot showed no effect; in general, people think others would be dishonest while holding themselves to a more neutral stance. Interestingly, people who show more negative attitudes towards robots tend to report that others, as well as they themselves, would act more dishonestly. These are important considerations for the development of future robots that will work alongside humans.

    Child–Robot Interaction in Education

    Advances in the field of robotics in recent years have enabled the deployment of robots in a multitude of settings, and this is predicted to continue, with a profound impact on society in the future. This thesis takes educational robots as its starting point; specifically, the kind of robots that are designed to interact socially with children. Such robots are often modeled on humans and made to express and/or perceive emotions, for the purpose of creating social or emotional attachment in children. This thesis presents a research effort in which an empathic robotic tutor was developed and studied in a school setting, focusing on children's interactions with the robot over time and across different educational scenarios. With support from the Responsible Research and Innovation Framework, this thesis furthermore sheds light on ethical dilemmas and the social desirability of implementing robots in future classrooms, seen through the eyes of teachers and students. The thesis concludes that children willingly follow instructions from a robotic tutor, and they may also develop a sense of connection with robots, treating them as social actors. However, children's interactions with robots often break down in unconstrained classroom settings when expectations go unmet, making the potential gain of robots in education questionable. From an ethical perspective, there are many open questions regarding stakeholders' concerns about privacy, roles and responsibility, and unintended consequences. These issues need to be dealt with before autonomous robots can be implemented in education on a larger scale.

    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become part of our daily lives. It is therefore increasingly important to understand how these social interactions emerge and why users appear to be influenced by them. For this reason, I explore the antecedents and consequences of this phenomenon, known as anthropomorphism, as described in the extant literature from fields ranging from information systems to social neuroscience. I critically analyze both empirical studies that directly measure anthropomorphism and those that refer to it without a corresponding measurement. Through a grounded-theory approach, I identify common themes and use them to develop models of the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions vary with individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring, or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users' self-perceptions, their perceptions of the technology, how they interact with the technology, and their performance. Examples include changes in a user's trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users' perceived agency and in their self- and social identity, similar to interactions between humans. I then critically examine current theories on anthropomorphism and present propositions about its nature based on the results of the empirical literature. Subsequently, I introduce a two-factor model of anthropomorphism which proposes that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic) and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared-agency effects or changing the user's social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.