
    Social cognition in the age of human–robot interaction

    Advances in artificial intelligence have led to robots endowed with increasingly sophisticated social abilities. These machines speak to our innate tendency to perceive social cues in the environment, as well as to the promise of robots enhancing our daily lives. However, a strong mismatch still exists between our expectations and the reality of social robots. We argue that careful delineation of the neurocognitive mechanisms supporting human–robot interaction will enable us to gather insights critical for optimising social encounters between humans and robots. To achieve this, the field must incorporate human neuroscience tools, including mobile neuroimaging, to explore long-term, embodied human–robot interaction in situ. New analytical neuroimaging approaches will enable characterisation of social cognition representations on a finer scale using sensitive and appropriate categorical comparisons (human, animal, tool, or object). The future of social robotics is undeniably exciting, and insights from human neuroscience research will bring us closer to interacting and collaborating with socially sophisticated robots.

    What are you or who are you? The emergence of social interaction between dog and Unidentified Moving Object (UMO)

    Robots offer new possibilities for investigating animal social behaviour. This method enhances the controllability and reproducibility of experimental techniques, and it also allows the experimental separation of the effects of bodily appearance (embodiment) and behaviour. In the present study we examined dogs’ interactive behaviour in a problem-solving task (in which the dog has no access to the food) with three different social partners, two of which were robots and the third a human behaving in a robot-like manner. The Mechanical UMO (Unidentified Moving Object) and the Mechanical Human differed only in their embodiment but showed similar behaviour toward the dog. In contrast, the Social UMO was interactive, showed contingent responsiveness and goal-directed behaviour, and moved along varied routes. The dogs showed shorter looking and touching durations but more gaze alternation toward the Mechanical Human than toward the Mechanical UMO. This suggests that dogs’ interactive behaviour may have been affected by previous experience with typical humans. We found that dogs also looked longer and showed more gaze alternations between the food and the Social UMO compared to the Mechanical UMO. These results suggest that dogs form expectations about an unfamiliar moving object within a short period of time and that they recognise some social aspects of UMOs’ behaviour. This is the first evidence that the interactive behaviour of a robot is important for evoking dogs’ social responsiveness.

    Expectations towards the Morality of Robots: An Overview of Empirical Studies

    The main objective of this paper is to discuss people’s expectations towards social robots’ moral attitudes. Conclusions are based on the results of three selected empirical studies which used stories of robots (and humans) acting in hypothetical scenarios to assess the moral acceptance of their attitudes. The analysis indicates both differences and similarities in expectations towards robot and human attitudes. Decisions to remove someone’s autonomy are less acceptable from robots than from humans. In certain circumstances, the protection of a human’s life is considered more morally right than the protection of the robot’s existence. Robots are also more strongly expected to make utilitarian choices than human agents. However, there are situations in which people make consequentialist moral judgements when evaluating both human and robot decisions. Both robots and humans receive a similar overall amount of blame. Furthermore, it can be concluded that robots should protect their existence and obey people, but in some situations they should be able to hurt a human being. Differences in results can be partially explained by the character of the experimental tasks. The present findings might be of considerable use in implementing morality in robots and also in the legal evaluation of their behaviours and attitudes.

    Exploring cultural factors in human-robot interaction: A matter of personality?

    This paper proposes an experimental study to investigate the task dependence and cultural-background dependence of personality trait attribution to humanoid robots. In Human-Robot Interaction, as well as in Human-Agent Interaction research, the attribution of personality traits to intelligent agents has already been researched intensively in terms of the similarity and complementarity rules. These two rules imply that humans tend to prefer others with either similar or complementary personality traits. Even though state-of-the-art literature suggests that similarity attraction occurs for virtual agents and complementary attraction for robots, there are many contradictions in the findings. We assume that seeking the explanation for personality trait attribution in the similarity and complementarity rules does not take into account important contextual factors. Just as people equate certain personality types with certain professions, we expect that people may have certain personality expectations depending on the context of the task the robot carries out. Because professions carry different social meanings in different national cultures, we also expect that these task-dependent personality preferences differ across cultures. We therefore propose an experiment that considers the task context and the cultural background of users.

    A sociophonetic analysis of female-sounding virtual assistants

    As conversational machines (e.g., Apple's Siri and Amazon's Alexa) are increasingly anthropomorphized by humans and viewed as active interlocutors, questions arise about the social information indexed by machine voices. This thesis provides a preliminary exploration of the relationship between human sociophonetics, social expectations, and conversational machine voices. An in-depth literature review (a) explores human relationships with and expectations for real and movie robots, (b) discusses the rise of conversational machines, (c) assesses the history of how female human voices have been perceived, and (d) details social-indexical properties associated with F0, vowel formants (F1 and F2), -ING pronunciation, and /s/ center of gravity in human speech. With background context in place, Siri's and Alexa's voices were recorded reciting various sentences and passages and analyzed for each of the aforementioned vocal features. Results suggest that sociolinguistic data from studies on human voices could inform hypotheses about how users might characterize conversational machine voices and encourage further consideration of how human and machine sociophonetics might influence each other.
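    Two of the acoustic measures listed above have simple signal-processing definitions: F0 is the fundamental frequency of the voiced signal, and the /s/ center of gravity is the power-weighted mean frequency of the fricative's spectrum. The sketch below is a minimal illustration in plain NumPy, not the measurement pipeline used in the thesis (which would rely on phonetic software); a synthetic 220 Hz tone stands in for a recorded voice sample.

    ```python
    import numpy as np

    def estimate_f0(signal, sr, fmin=75.0, fmax=500.0):
        """Estimate fundamental frequency (Hz) via the autocorrelation peak."""
        sig = signal - signal.mean()
        ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
        lo, hi = int(sr / fmax), int(sr / fmin)   # search plausible pitch lags
        lag = lo + np.argmax(ac[lo:hi])
        return sr / lag

    def spectral_cog(signal, sr):
        """Spectral center of gravity: power-weighted mean frequency (Hz)."""
        power = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
        return float(np.sum(freqs * power) / np.sum(power))

    sr = 16000
    t = np.arange(sr) / sr                 # one second of audio
    tone = np.sin(2 * np.pi * 220 * t)     # a 220 Hz stand-in for a voice
    print(estimate_f0(tone, sr))           # close to 220 Hz
    print(spectral_cog(tone, sr))          # also close to 220 Hz
    ```

    In practice, /s/ center of gravity is measured over a segmented, often high-pass-filtered fricative, and formants (F1/F2) require LPC analysis, which dedicated tools such as Praat provide.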

    Robots Are Not All the Same: Young Adults' Expectations, Attitudes, and Mental Attribution to Two Humanoid Social Robots

    The human physical resemblance of humanoid social robots (HSRs) has proven to be particularly effective in interactions with humans in different contexts. In particular, two main factors affect the quality of human-robot interaction: the physical appearance and the behaviors performed by the robot. In this study, we examined the psychological effect of two HSRs, NAO and Pepper. Although some studies have shown that these two robots are very similar in terms of human likeness, other evidence has shown differences in their design affecting different psychological elements of the human partner. The present study aims to analyze the variability of the attribution of mental states (AMS), expectations of robotic development, and negative attitudes as a function of the physical appearance of the two HSRs after observing a real interaction with a human (an experimenter). For this purpose, two groups of young adults were recruited, one for the NAO condition and one for the Pepper condition.

    General Attitudes Towards Robots Scale (GAToRS): A New Instrument for Social Surveys

    Psychometric scales are useful tools for understanding people's attitudes towards different aspects of life. As societies develop and new technologies arise, new validated scales are needed. Robots and artificial intelligences of various kinds are about to occupy just about every niche in human society. Several tools to measure fears and anxieties about robots do exist, but there is a definite lack of tools to measure hopes and expectations for these new technologies. Here, we create and validate a novel multi-dimensional scale which measures people's attitudes towards robots, giving equal weight to positive and negative attitudes. Our scale differentiates (a) comfort and enjoyment around robots, (b) unease and anxiety around robots, (c) rational hopes about robots in general (at the societal level), and (d) rational worries about robots in general (at the societal level). The scale was developed by extracting items from previous scales, crowdsourcing new items, testing three scale iterations by exploratory factor analysis (Ns = 135, 801, and 609), and validating the final form of the scale by confirmatory factor analysis (N = 477). We hope our scale will be a useful instrument for social scientists who wish to study human-technology relations with a validated scale in efficient and generalizable ways.
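    Scoring a multi-dimensional scale of this kind amounts to averaging each respondent's ratings over the items of each subscale, and internal consistency is conventionally checked with Cronbach's alpha. The sketch below is a generic illustration of both steps, not the authors' analysis code: the four subscale names mirror the dimensions described above, but the item-to-subscale mapping and the simulated Likert data are hypothetical (the published GAToRS defines its own items).

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Internal-consistency estimate for one subscale.
        `items` is an (n_respondents, n_items) array of Likert ratings."""
        items = np.asarray(items, dtype=float)
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        k = items.shape[1]
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    def score_subscales(responses, subscales):
        """Mean rating per named subscale; `subscales` maps name -> item columns."""
        responses = np.asarray(responses, dtype=float)
        return {name: float(responses[:, cols].mean())
                for name, cols in subscales.items()}

    # Hypothetical item assignment: two items per dimension.
    subscales = {"personal_positive": [0, 1], "personal_negative": [2, 3],
                 "societal_positive": [4, 5], "societal_negative": [6, 7]}
    rng = np.random.default_rng(0)
    responses = rng.integers(1, 8, size=(100, 8))   # 100 respondents, 1-7 Likert
    print(score_subscales(responses, subscales))
    ```

    Exploratory and confirmatory factor analysis, as used in the validation itself, would be run with dedicated packages (e.g., `factor_analyzer` in Python or `lavaan` in R) rather than by hand.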

    Social Roles and Baseline Proxemic Preferences for a Domestic Service Robot

    © The Author(s) 2014. This article is published with open access at Springerlink.com and is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. The work described in this paper was conducted within the EU Integrated Project LIREC (LIving with Robots and intEractive Companions), funded by the European Commission under contract number FP7 215554, and partly funded by the ACCOMPANY project, part of the European Union’s Seventh Framework Programme (FP7/2007–2013), under grant agreement no. 287624.

    The goal of our research is to develop socially acceptable behavior for domestic robots in a setting where a user and the robot share the same physical space and interact with each other in close proximity. Specifically, our research focuses on approach distances and directions in the context of a robot handing over an object to a user.

    Robot Betrayal: a guide to the ethics of robotic deception

    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception, superficial state deception, and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type, superficial state deception, is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type is best understood as a form of betrayal, because doing so captures the unique ethical harm to which it gives rise and justifies special ethical protections against its use.