81 research outputs found

    Attractive, Informative, and Communicative Robot System on Guide Plate as an Attendant with Awareness of User’s Gaze

    In this paper, we introduce an interactive guide plate system that combines a gaze-communicative stuffed-toy robot with a gaze-interactive display board. The stuffed-toy robot attached to the system naturally provides anthropomorphic guidance corresponding to the user's gaze orientation. The guidance is presented through (a) gaze-communicative behaviors of the stuffed-toy robot, using joint attention and eye-contact reactions to virtually express its own mind, in conjunction with (b) vocal guidance and (c) projection on the guide plate. We adopted our image-based remote gaze-tracking method to detect the user's gaze orientation. The results of both empirical studies, with subjective and objective evaluations, and observations from our demonstration experiments in a semi-public space show (i) the total operation of the system, (ii) the elicitation of the user's interest by the robot's gaze behaviors, and (iii) the effectiveness of the gaze-communicative guidance provided by the anthropomorphic robot.

    Humanoid-based protocols to study social cognition

    Social cognition is broadly defined as the way humans understand and process their interactions with other humans. In recent years, humans have become increasingly accustomed to interacting with non-human agents, such as technological artifacts. Although these interactions have so far been restricted to human-controlled artifacts, they will soon include interactions with embodied and autonomous mechanical agents, i.e., robots. This challenge has motivated an area of research into human reactions towards robots, widely referred to as Human-Robot Interaction (HRI). Classical HRI protocols often rely on explicit measures, e.g., subjective reports, and therefore cannot quantify the crucial implicit social cognitive processes that are evoked during an interaction. This thesis aims to develop a link between cognitive neuroscience and HRI to study social cognition. This approach overcomes methodological constraints of both fields, allowing researchers to trigger and capture the mechanisms of real-life social interactions while ensuring high experimental control. The present PhD work demonstrates this through the systematic study of the effect of online eye contact on gaze-mediated orienting of attention. The study presented in Publication I aims to adapt the gaze-cueing paradigm from cognitive science into an objective neuroscientific HRI protocol. Furthermore, it investigates whether gaze-mediated orienting of attention is sensitive to the establishment of eye contact. The study replicates classic screen-based findings of gaze-mediated attentional orienting at both behavioral and neural levels, highlighting the feasibility and scientific value of adding neuroscientific methods to HRI protocols. The aim of the study presented in Publication II is to examine whether and how real-time eye contact affects the dual-component model of joint attention orienting.
    To this end, cue validity and stimulus onset asynchrony are also manipulated. The results show an interactive effect of the strategic (cue validity) and social (eye contact) top-down components on the bottom-up reflexive component of gaze-mediated orienting of attention. The study presented in Publication III examines subjective engagement and the attribution of human likeness to the robot depending on whether eye contact is established during a joint attention task. Subjective reports show that eye contact increases human likeness attribution and feelings of engagement with the robot compared to a no-eye-contact condition. The aim of the study presented in Publication IV is to investigate whether eye contact established by a humanoid robot affects objective measures of engagement (i.e., joint attention and fixation durations) and subjective feelings of engagement with the robot during a joint attention task. Results show that eye contact modulates attentional engagement, with longer fixations at the robot's face and a larger cueing effect when the robot establishes eye contact. In contrast, subjective reports show that the feeling of being engaged with the robot in an HRI protocol is not modulated by real-time eye contact. This study further supports the necessity of adding objective methods to HRI. Overall, this PhD work shows that embodied artificial agents can advance theoretical knowledge of social cognitive mechanisms by serving as sophisticated interactive stimuli with high ecological validity and excellent experimental control. Moreover, humanoid-based protocols grounded in cognitive science can advance the HRI community by informing it about the exact cognitive mechanisms present during HRI.
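    The gaze-cueing manipulation described in this abstract (cue validity, stimulus onset asynchrony, and the resulting cueing effect on reaction times) can be sketched as follows. All names, probabilities, and timings here are illustrative assumptions, not the thesis's actual implementation:

```python
import random

# Illustrative gaze-cueing trial generator: the agent's gaze cues a left/right
# location; on valid trials the target appears at the cued side, and the
# stimulus onset asynchrony (SOA) between cue and target is varied.
def make_trials(n_trials=100, cue_validity=0.5, soas_ms=(250, 500)):
    trials = []
    for _ in range(n_trials):
        cued = random.choice(["left", "right"])
        valid = random.random() < cue_validity
        target = cued if valid else ("right" if cued == "left" else "left")
        trials.append({
            "cue_side": cued,
            "target_side": target,
            "valid": valid,
            "soa_ms": random.choice(soas_ms),
        })
    return trials

# Cueing effect: mean RT(invalid) - mean RT(valid); a positive value means
# attention was oriented toward the gazed-at location.
def cueing_effect(rts_valid_ms, rts_invalid_ms):
    return (sum(rts_invalid_ms) / len(rts_invalid_ms)
            - sum(rts_valid_ms) / len(rts_valid_ms))
```

    Manipulating `cue_validity` changes the strategic top-down component, while eye contact before cueing would be a property of the interactive agent rather than of the trial list itself.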

    Development of the huggable social robot Probo: on the conceptual design and software architecture

    This dissertation presents the development of a huggable social robot named Probo. Probo embodies a stuffed imaginary animal, providing a soft touch and a huggable appearance. Probo's purpose is to serve as a multidisciplinary research platform for human-robot interaction focused on children. In terms of social robots, Probo is classified as a social interface supporting non-verbal communication; its social skills are thereby limited to a reactive level. To close the gap with higher levels of interaction, an innovative system for shared control with a human operator is introduced. The software architecture defines a modular structure to incorporate all systems into a single control center. This control center is accompanied by a 3D virtual model of Probo that simulates all motions of the robot and provides visual feedback to the operator. Additionally, the model allows us to advance user-testing and evaluation of newly designed systems. The robot reacts to basic input stimuli that it perceives during interaction. These input stimuli, which can be referred to as low-level perceptions, are derived from vision analysis, audio analysis, touch analysis, and object identification. The stimuli influence the attention and homeostatic systems, which define the robot's point of attention, current emotional state, and corresponding facial expression. The recognition of these facial expressions has been evaluated in various user studies. To evaluate the collaboration of the software components, a social interactive game for children, Probogotchi, has been developed. To facilitate interaction with children, Probo has an identity and corresponding history. Safety is ensured through Probo's soft embodiment and intrinsically safe actuation systems. To convey the illusion of life in a robotic creature, tools for the creation and management of motion sequences are put into the hands of the operator.
    All motions generated by operator-triggered systems are combined with the motions originating from the autonomous reactive systems. The resulting motion is subsequently smoothed and transmitted to the actuation systems. With future applications to come, Probo is an ideal platform for creating a friendly companion for hospitalised children.

    Development of duplex eye contact framework for human-robot inter communication

    Funding Information: This work was supported in part by the National Research Foundation of Korea Grant funded by the Korean Government (Ministry of Science and ICT) under Grant NRF 2020R1A2B5B02002478, in part by Sejong University through its Faculty Research Program, and in part by the Directorate of Research and Extension (DRE), Chittagong University of Engineering and Technology. Peer reviewed. Publisher PDF.

    A Proactive Approach of Robotic Framework for Making Eye Contact with Humans

    Making eye contact is one of the most important prerequisites for humans to initiate a conversation with others. However, it is not an easy task for a robot to make eye contact with a human if they are not initially facing each other or the human is intensely engaged in his/her task. If the robot wishes to start communication with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to establish eye contact in all cases; in some situations the robot should perform stronger actions so that it can attract the target person before meeting his/her gaze. In this paper, we propose a conceptual model of eye contact for social robots consisting of two phases: capturing attention and ensuring that attention has been captured. Evaluation experiments with human participants reveal the effectiveness of the proposed model in four viewing situations, namely the central field of view, near peripheral field of view, far peripheral field of view, and out of the field of view.
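    A controller following this two-phase model might look like the sketch below. The phase names follow the abstract, but the concrete actions and the escalation per viewing situation are hypothetical illustrations, not the paper's actual behavior set:

```python
from enum import Enum, auto

class Phase(Enum):
    CAPTURE_ATTENTION = auto()   # attract the target person
    ENSURE_CAPTURE = auto()      # confirm the person is attending
    EYE_CONTACT = auto()         # meet and maintain gaze

# Hypothetical escalation: the further the robot is from the person's field
# of view, the stronger the attention-capturing action.
ESCALATION = {
    "central": "turn_head",
    "near_peripheral": "turn_head_and_wave",
    "far_peripheral": "wave_and_utter",
    "out_of_view": "approach_and_utter",
}

def next_action(phase, person_attending, field_of_view):
    """Return the next (phase, action) pair for the two-phase model."""
    if phase is Phase.CAPTURE_ATTENTION:
        if person_attending:
            return Phase.ENSURE_CAPTURE, "hold_gaze"
        return Phase.CAPTURE_ATTENTION, ESCALATION[field_of_view]
    if phase is Phase.ENSURE_CAPTURE:
        if person_attending:
            return Phase.EYE_CONTACT, "meet_gaze"
        return Phase.CAPTURE_ATTENTION, "retry"
    return Phase.EYE_CONTACT, "maintain"
```

    Splitting capture from confirmation mirrors the abstract's point that a turning action alone does not guarantee eye contact: the robot only meets the person's gaze after verifying that its attention-capturing action succeeded.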

    Gaze-based, context-aware robotic system for assisted reaching and grasping

    Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate this into simpler, higher-level commands that are easy and intuitive for a human user to interact with. We have created a multi-modal system, consisting of different sensing, decision-making, and actuating modalities, leading to intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze to decode their intentions and implement low-level motion actions to achieve high-level tasks. As a result, the user simply has to look at the objects of interest for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and a grammars-based implementation of sequences of action with the robotic system. The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of 4.68 ± 0.14 cm. The full system is tested with 5 subjects, showing successful implementation of 100% of reach-to-gaze-point actions, and full implementation of pick-and-place tasks in 96% and pick-and-pour tasks in 76% of cases. Finally, we present a discussion of our results and the future work needed to improve the system.
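    The grammars-based sequencing of actions mentioned in this abstract might look like the following sketch, where each high-level task expands into low-level motion primitives parameterised by the user's 3D gaze point. The task names and primitives are hypothetical, chosen to mirror the reach, pick-and-place, and pick-and-pour tasks described above:

```python
# Hypothetical action grammar: each high-level task is a production rule
# that rewrites into a fixed sequence of low-level motion primitives.
GRAMMAR = {
    "reach": ["move_arm_to(gaze_point)"],
    "pick": ["move_arm_to(gaze_point)", "grasp"],
    "place": ["move_arm_to(gaze_point)", "release"],
    "pour": ["move_arm_to(gaze_point)", "grasp", "tilt", "untilt"],
}

def expand(task_sequence):
    """Flatten a sequence of high-level tasks into motion primitives."""
    actions = []
    for task in task_sequence:
        actions.extend(GRAMMAR[task])
    return actions
```

    Under this scheme the gaze estimator only has to supply a target point; the grammar handles the combinatorics of composing primitives into complete pick-and-place or pick-and-pour behaviors.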

    Gaze-based interaction for effective tutoring with social robots

