72 research outputs found

    Using social robots to encourage honest behaviours

    This thesis presents a series of studies examining whether robots can promote more honest behaviour from people who are tempted to behave dishonestly. In Study 1 we see that a robot merely presenting gaze behaviour inhibits cheating, whereas a robot engaging in small talk does not. In Study 2 we see that participants cheated to an equal extent when doing the task alone in their homes or with a video of a robot looking at them. In Study 3 we find that endowing a robot with situation awareness (showing awareness of the participant's behaviour) decreased cheating across the game. In Study 4 we see that priming participants' relational self-concept does not enhance the effect of situation awareness on cheating. In Studies 5 and 6 we explore participants' perceptions and find that people consider it wrong to be dishonest towards a robot; however, they would feel low levels of guilt and justify dishonesty by the robot's lack of capabilities, its lack of presence, and a human tendency towards dishonesty. When prompted to evaluate what others' or their own attitudes towards dishonesty would be, manipulating the robot's caring behaviour shows no effect: people generally think others would be dishonest while holding a more neutral stance about themselves. Interestingly, people who show more negative attitudes towards robots tend to report that both others and they themselves would act more dishonestly. These are important considerations for the development of robots that will work alongside humans in the future.

    Cheating with robots: How at ease do they make us feel?

    People are not perfect and, given the chance, some will be dishonest without regret. Some people will cheat just a little to gain an advantage, while others will not cheat at all. With the prospect of more human-robot interaction in the future, it will become important to understand what kinds of roles a robot can play in regulating cheating behavior. We investigated whether people cheat in the presence of a robot and to what extent this depends on the role the robot plays. We ran a study testing cheating behavior with a die task and allocated participants to one of three conditions: 1) alone in the room while doing the task; 2) with a robot in a vigilant role; or 3) with a robot in a supporting role, accompanying the task and giving instructions. Our results showed that participants cheated significantly more than chance when they were alone or with the robot giving instructions. In contrast, cheating could not be demonstrated when the robot took a vigilant role. This study has implications for human-robot interaction and for the deployment of autonomous robots in sensitive roles in which people may be prone to dishonest behavior.
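    Die tasks of this kind are typically analysed by comparing reported outcomes against the distribution expected under honest reporting. As a minimal illustrative sketch (not the study's actual analysis; the counts below are hypothetical), a one-sided binomial test can flag over-reporting of the high-payoff face:

    ```python
    # Minimal sketch of a chance-level cheating check for a die task
    # (illustrative only; not the study's data or analysis code).
    from scipy.stats import binomtest

    # Hypothetical counts: total trials and number of rolls reported
    # as the high-payoff face (e.g., a "6").
    n_trials = 90
    n_reported_six = 25  # illustrative value

    # Under honest reporting, each face appears with probability 1/6.
    # A one-sided test asks whether "6" is reported more often than chance.
    result = binomtest(n_reported_six, n_trials, p=1/6, alternative="greater")
    print(f"p-value = {result.pvalue:.4f}")  # small p suggests over-reporting
    ```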

    Headspace Solid-Phase Microextraction of Volatile and Furanic Compounds in Coated Fish Sticks: Effect of the Extraction Temperature

    This work evaluated the effect of temperature on headspace solid-phase microextraction (HS-SPME) of volatile and furanic compounds in coated fish sticks. The main goal was to analyse the samples as consumed, so as to reproduce the volatile compounds people perceive when eating these products. Extraction at 37 °C (human body temperature) throughout the HS-SPME analysis of volatile and furanic compounds in coated fish was compared with the higher extraction temperatures frequently used for this kind of determination. The profile of volatile compounds found in deep-fried (F) and non-fried (NF) coated fish at 37 and 50 °C differed from that obtained at 80 °C. Concerning furan and its derivatives, additional formation of these compounds was observed at the higher extraction temperatures. The analysis of volatile and furanic compounds in coated fish sticks simulating cooking and eating conditions can therefore be reliably carried out by setting the headspace absorption temperature at 37 °C.

    Renal glomerulosclerosis in the dog (La glomérulo-sclérose rénale chez le chien)


    Towards more humane machines: creating emotional social robots

    Robots are now widely used in industrial settings, and the world has woken up to the impact they will have on our society. Until now, however, robots have largely been limited to repetitive industrial tasks. Recent platforms are becoming safer to operate among humans, and research in Human-Robot Interaction (HRI) is preparing robots for use in schools and public services, and eventually in everyone's home. If we aim for a robot flexible enough to work around humans and to decide autonomously how to act in complex situations, a notion of morality is needed for its decision making. In this chapter we argue that some level of moral decision making can be achieved in social robots if they are endowed with empathy capabilities. We then discuss how to build artificial empathy in robots, giving concrete examples of how such implementations can guide the path towards creating moral social robots.