27 research outputs found

    The Public’s Perception of Humanlike Robots: Online Social Commentary Reflects an Appearance-Based Uncanny Valley, a General Fear of a “Technology Takeover”, and the Unabashed Sexualization of Female-Gendered Robots

    Towards understanding the public’s perception of humanlike robots, we examined commentary on 24 YouTube videos depicting social robots ranging in human similarity – from Honda’s Asimo to Hiroshi Ishiguro’s Geminoids. In particular, we investigated how people have responded to the emergence of highly humanlike robots (e.g., Bina48) in contrast to those with more prototypically “robotic” appearances (e.g., Asimo), coding the frequency at which the uncanny valley versus fears of replacement and/or a “technology takeover” arise in online discourse based on the robot’s appearance. Here we found that, consistent with Masahiro Mori’s theory of the uncanny valley, people’s commentary reflected an aversion to highly humanlike robots. Correspondingly, the frequency of uncanny valley-related commentary was significantly higher in response to highly humanlike robots relative to those of more prototypical appearances. Independent of the robots’ human similarity, we further observed a moderate correlation between people’s explicit fears of a “technology takeover” and their emotional responses towards robots. Finally, through the course of our investigation, we encountered a third and rather disturbing trend – namely, the unabashed sexualization of female-gendered robots. In exploring the frequency at which this sexualization manifests in the online commentary, we found it to exceed that of the uncanny valley and fears of robot sentience/replacement combined. In sum, these findings help to shed light on the relevance of the uncanny valley “in the wild” and, further, they help situate it with respect to other design challenges for HRI.

    A Differential Approach for Gaze Estimation

    Non-invasive gaze estimation methods usually regress gaze directions directly from a single face or eye image. However, due to important variabilities in eye shapes and inner eye structures amongst individuals, universal models obtain limited accuracies and their outputs usually exhibit high variance as well as subject-dependent biases. Therefore, accuracy is usually increased through calibration, allowing gaze predictions for a subject to be mapped to his/her actual gaze. In this paper, we introduce a novel image differential method for gaze estimation. We propose to directly train a differential convolutional neural network to predict the gaze difference between two eye input images of the same subject. Then, given a set of subject-specific calibration images, we can use the inferred differences to predict the gaze direction of a novel eye sample. The assumption is that by allowing the comparison between two eye images, nuisance factors (alignment, eyelid closing, illumination perturbations) which usually plague single-image prediction methods can be much reduced, allowing better prediction altogether. Experiments on 3 public datasets validate our approach, which consistently outperforms state-of-the-art methods even when using only one calibration sample or when the latter methods are followed by subject-specific gaze adaptation. Comment: Extension of our paper “A differential approach for gaze estimation with calibration” (BMVC 2018). Submitted to PAMI on Aug. 7th, 2018; accepted by PAMI (short) in Dec. 2019, in IEEE Transactions on Pattern Analysis and Machine Intelligence.
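    The inference step described above (combine per-calibration-sample estimates g_i + d(query, I_i)) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict_gaze_difference` is a hypothetical stand-in for the trained differential CNN, and features here are simply the gaze angles themselves so the "network" is exact.

    ```python
    import numpy as np

    # Hypothetical stand-in for the trained differential CNN: given two eye
    # inputs (reduced here to feature vectors), return the predicted gaze
    # difference g(query) - g(calibration). A real model would be a trained
    # network operating on image patches and only approximate this.
    def predict_gaze_difference(query_feat, calib_feat):
        return query_feat - calib_feat

    def estimate_gaze(query_feat, calib_feats, calib_gazes):
        """Differential inference: for each calibration sample i, form the
        estimate g_i + d(query, I_i), then average over the calibration set."""
        diffs = np.array([predict_gaze_difference(query_feat, c)
                          for c in calib_feats])
        estimates = calib_gazes + diffs
        return estimates.mean(axis=0)

    # Toy calibration set: known gaze angles in degrees (yaw, pitch).
    calib_gazes = np.array([[5.0, -2.0], [-3.0, 1.0], [0.0, 4.0]])
    calib_feats = calib_gazes.copy()   # identity features, for this sketch only
    query = np.array([2.0, 2.0])       # novel eye sample's true gaze

    print(estimate_gaze(query, calib_feats, calib_gazes))  # → [2. 2.]
    ```

    Because the differential predictor is exact in this toy setup, every calibration sample yields the same estimate; with a real CNN the estimates would scatter around the true gaze, and averaging over calibration samples reduces that variance.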

    Keep on Moving! Exploring Anthropomorphic Effects of Motion during Idle Moments

    In this paper, we explored the effect of a robot’s subconscious gestures made during idle moments (also called adaptor gestures) on the anthropomorphic perceptions of five-year-old children. We developed a set of adaptor motions and sorted them by intensity. We designed an experiment involving 20 children, in which they played a memory game with two robots. During moments of idleness, the first robot showed adaptor movements, while the second robot moved its head following basic face tracking. Results showed that the children perceived the robot displaying adaptor movements to be more human and friendly. Moreover, these traits were found to be proportional to the intensity of the adaptor movements. For the range of intensities tested, it was also found that adaptor movements were not disruptive to the task. These findings corroborate the fact that adaptor movements improve the affective aspect of child-robot interaction (CRI) and do not interfere with the child’s performance in the task, making them suitable for CRI in educational contexts.

    Gaze aversion in conversational settings: An investigation based on mock job interview

    We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in an interview setting. To address various methodological challenges in assessing gaze-to-face contact, we followed an approach where the experiment was conducted twice, each time with a different set of interviewees. In one of them the interviewer’s gaze was tracked with an eye tracker, and in the other the interviewee’s gaze was tracked. The gaze sequences obtained in both experiments were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts compared to the interviewee. Also, the interviewer made mostly diagonal gaze aversions, whereas the interviewee made sideways aversions (left or right). We discuss the relevance of this research for Human-Robot Interaction, along with some future research problems.
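    Modeling a gaze sequence as a Discrete-Time Markov Chain amounts to counting state-to-state transitions and row-normalizing them into probabilities. The sketch below illustrates this with a hypothetical state set (face contact plus left, right, and diagonal aversions); the paper's actual state definitions may differ.

    ```python
    import numpy as np

    # Hypothetical gaze-state labels; the study's exact state set may differ.
    STATES = ["face", "left", "right", "diagonal"]
    IDX = {s: i for i, s in enumerate(STATES)}

    def transition_matrix(sequence):
        """Estimate a DTMC transition matrix from an observed state sequence
        by counting consecutive-pair transitions and normalizing each row."""
        counts = np.zeros((len(STATES), len(STATES)))
        for a, b in zip(sequence, sequence[1:]):
            counts[IDX[a], IDX[b]] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        row_sums[row_sums == 0] = 1.0  # leave all-zero rows as zeros
        return counts / row_sums

    # Toy observed sequence of gaze states, one label per time step.
    seq = ["face", "face", "left", "face", "diagonal", "face", "face", "right"]
    P = transition_matrix(seq)
    print(P[IDX["face"]])  # → [0.4 0.2 0.2 0.2]
    ```

    From such a matrix one can read off, e.g., how likely each participant is to leave face contact for a sideways versus a diagonal aversion, which is the kind of comparison the study draws between interviewer and interviewee.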

    Gaze Contingency in Turn-Taking for Human Robot Interaction: Advantages and Drawbacks

    Palinko O, Sciutti A, Schillingmann L, Rea F, Nagai Y, Sandini G. Gaze Contingency in Turn-Taking for Human Robot Interaction: Advantages and Drawbacks. Presented at the 24th IEEE International Symposium on Robot and Human Interactive Communication.