
    Believing in BERT: Using expressive communication to enhance trust and counteract operational error in physical human-robot interaction

    Strategies are needed to mitigate the impact of unexpected behavior in collaborative robotics, yet research into such solutions is lacking. Our aim here was to explore the benefits of an affective interaction, as opposed to a more efficient, less error-prone, but non-communicative one. The experiment took the form of an omelet-making task, with a wide range of participants interacting directly with BERT2, a humanoid robot assistant. With significant implications for design, the results suggest that efficiency is not the most important aspect of performance for users: a personable, expressive robot was preferred over a more efficient one, despite a considerable trade-off in the time taken to perform the task. Our findings also suggest that a robot exhibiting human-like characteristics may make users reluctant to 'hurt its feelings'; they may even lie to avoid doing so.

    Do You Feel Me?: Learning Language from Humans with Robot Emotional Displays

    In working towards accomplishing human-level acquisition and understanding of language, a robot must meet two requirements: the ability to learn words from interactions with its physical environment, and the ability to learn language from people in settings for language use, such as spoken dialogue. The second requirement poses a problem: if a robot is capable of asking a human teacher well-formed questions, it will lead the teacher to provide responses that are too advanced for a robot, which requires simple inputs and feedback to build word-level comprehension. In a live interactive study, we tested the hypothesis that emotional displays are a viable solution to this problem of how to communicate without relying on language the robot doesn't, and indeed cannot, actually know. Emotional displays can relate the robot's state of understanding to its human teacher, and they are developmentally appropriate for the most common language acquisition setting: an adult interacting with a child. For our study, we programmed a robot to independently explore the world and elicit relevant word references and feedback from participants, who encountered two robot settings: one in which the robot displays emotions, and a second in which the robot focuses on the task without displaying emotions; the latter also tests whether emotional displays lead participants to make incorrect assumptions about the robot's understanding. Analyzing the results from the surveys and the Grounded Semantics classifiers, we found that the use of emotional displays increases the number of inputs provided to the robot, an effect modulated by the ratio of positive to negative emotions displayed.

    Can a Humanoid Face be Expressive? A Psychophysiological Investigation

    Non-verbal signals expressed through body language play a crucial role in multi-modal human communication during social relations. Indeed, in all cultures, facial expressions are the most universal and direct signs of innate emotional cues. A human face conveys important information in social interactions and helps us to better understand our social partners and establish empathic links. Recent research shows that humanoid and social robots are becoming increasingly similar to humans, both aesthetically and expressively. However, their visual expressiveness remains a crucial issue that must be improved if these robots are to be perceived by humans as realistic, intuitive, and not fundamentally different from themselves. This study concerns the capability of a humanoid robot to exhibit emotions through facial expressions. More specifically, emotional signs performed by a humanoid robot were compared with the corresponding human facial expressions in terms of recognition rate and response time. The set of stimuli included standardized human expressions taken from an Ekman-based database and the same facial expressions performed by the robot. Furthermore, participants' psychophysiological responses were explored to investigate whether there could be differences induced by interpreting robot rather than human emotional stimuli. Preliminary results show a trend toward better recognition of expressions performed by the robot than of 2D photos or 3D models. Moreover, no significant differences in the subjects' psychophysiological state were found during the discrimination of facial expressions performed by the robot compared with the same task performed with 2D photos and 3D models.

    “Robot, tell me a tale!”: A Social Robot as tool for Teachers in Kindergarten

    Robots are versatile devices and promising tools for supporting teaching and learning in the classroom or at home. In fact, robots can be engaging and motivating, especially for young children. This paper presents an experimental study with 81 kindergarten children on the memorization of two tales narrated by a humanoid robot. The study variables were the content of the tales (knowledge-based or emotional) and the social behaviour of the narrator: static human, static robot, expressive human, or expressive robot. Results suggest a positive effect of expressive behaviour in robot storytelling, whose effectiveness is comparable to that of a human with the same behaviour and better than that of a static, inexpressive human. The robot achieved higher efficacy in the tale with knowledge content, while its limited capability to express emotions made it less effective in the tale with emotional content.

    Assistive Technology to Improve Collaboration in Children with ASD: State-of-the-Art and Future Challenges in the Smart Products Sector

    Within the field of products for autism spectrum disorder, one of the main research areas is the development of assistive technology. Mid- and high-tech products integrate interactive and smart functions with multisensory reinforcement, making the user experience more intuitive, adaptable, and dynamic. These products have a very significant impact on improving the skills of children with autism, including collaboration and social skills, which are essential for the integration of these children into society and, therefore, for their well-being. This work carried out an exhaustive analysis of the scientific literature, together with market and trend research and patent analysis, to explore the state of the art of assistive technology and smart products for children with ASD, specifically those aimed at improving social and communication skills. The results show a limited availability of products that act as facilitators for the special needs of children with ASD, a gap that is even more evident for products aimed at improving collaboration skills. Products that allow several users to participate simultaneously through multi-user interfaces are needed. Moreover, the trend toward virtual environments is leading to a loss of the material aspects of design that are essential for the development of these children.

    Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems

    A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made in tackling this challenging task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and head from a physiological perspective and how these movements can be modelled, rendered, and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher-level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation, and how they are synthesised in computer graphics and robotics.
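    As a toy illustration of the kind of eye-movement modelling this survey covers, the following hypothetical Python sketch converts a 3D gaze target into yaw/pitch eye angles and samples blink times. It is not taken from the surveyed work; the function names and all parameter values are illustrative assumptions.

        import math
        import random

        # Hypothetical toy gaze model; parameter values are illustrative only.
        BLINK_INTERVAL_MEAN_S = 4.0   # assumed mean time between blinks
        BLINK_DURATION_S = 0.15       # assumed blink duration

        def gaze_angles(target_xyz, eye_xyz=(0.0, 0.0, 0.0)):
            """Return (yaw, pitch) in radians for the eye to look at target_xyz."""
            dx = target_xyz[0] - eye_xyz[0]
            dy = target_xyz[1] - eye_xyz[1]
            dz = target_xyz[2] - eye_xyz[2]
            yaw = math.atan2(dx, dz)                    # rotation about the vertical axis
            pitch = math.atan2(dy, math.hypot(dx, dz))  # rotation about the horizontal axis
            return yaw, pitch

        def next_blink_time(now_s):
            """Sample the next blink time (exponential inter-blink intervals)."""
            return now_s + random.expovariate(1.0 / BLINK_INTERVAL_MEAN_S)

        if __name__ == "__main__":
            yaw, pitch = gaze_angles((0.3, 0.1, 1.0))
            print(f"look-at: yaw={math.degrees(yaw):.1f} deg, pitch={math.degrees(pitch):.1f} deg")
            print(f"next blink at t={next_blink_time(0.0):.2f} s for {BLINK_DURATION_S} s")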

    How Certain Robot Attributes Influence Human-to-Robot Social and Emotional Bonds

    A growing number of people feel lonely and isolated and may therefore benefit from social and emotional companionship. However, other humans cannot always be available to fulfill these needs, and such in-need individuals often cannot care for pets. We therefore explore how robot companions may be designed to facilitate bonds with humans. Our preliminary examination of 115 participants in a quasi-experimental study suggests that humans are more likely to develop social and emotional bonds with robots when those robots are good at communicating and conveying emotions. However, robots' anthropomorphic attributes and responsiveness to external cues were found to have no impact on bond formation.

    Art and Medicine: A Collaborative Project Between Virginia Commonwealth University in Qatar and Weill Cornell Medicine in Qatar

    Four faculty researchers, two from Virginia Commonwealth University in Qatar and two from Weill Cornell Medicine in Qatar, developed a one-semester, workshop-based course in Qatar exploring the connections between art and medicine in a contemporary context. Twelve students (six from art, six from medicine) were enrolled in the course. The course included presentations by clinicians, medical engineers, artists, computing engineers, an art historian, a graphic designer, a painter, and other experts from the fields of art, design, and medicine. To measure the student experience of interdisciplinarity, the faculty researchers employed a mixed-methods approach involving psychometric tests and observational ethnography. Data instruments included pre- and post-course semi-structured audio interviews, pre-test/post-test psychometric instruments (the Budner Scale and the Torrance Tests of Creativity), observational field notes, self-reflective blogging, and videography. This book describes the course and the experience of the students. It also contains images of the interdisciplinary work they created for a culminating class exhibition. Finally, the book provides insight into how different fields in a Middle Eastern context can share critical/analytical thinking tools to refine their own professional practices.

    Development of the huggable social robot Probo: on the conceptual design and software architecture

    This dissertation presents the development of a huggable social robot named Probo. Probo embodies a stuffed imaginary animal, providing a soft touch and a huggable appearance. Probo's purpose is to serve as a multidisciplinary research platform for human-robot interaction focused on children. In terms of a social robot, Probo is classified as a social interface supporting non-verbal communication; its social skills are thereby limited to a reactive level. To close the gap with higher levels of interaction, an innovative system for shared control with a human operator is introduced. The software architecture defines a modular structure that incorporates all systems into a single control center. This control center is accompanied by a 3D virtual model of Probo, which simulates all motions of the robot and provides visual feedback to the operator. Additionally, the model allows us to advance user testing and evaluation of newly designed systems. The robot reacts to basic input stimuli that it perceives during interaction. These input stimuli, which can be referred to as low-level perceptions, are derived from vision analysis, audio analysis, touch analysis, and object identification. The stimuli influence the attention and homeostatic systems, which are used to define the robot's point of attention, current emotional state, and corresponding facial expression. The recognition of these facial expressions has been evaluated in various user studies. To evaluate the collaboration of the software components, a social interactive game for children, Probogotchi, has been developed. To facilitate interaction with children, Probo has an identity and a corresponding history. Safety is ensured through Probo's soft embodiment and intrinsically safe actuation systems. To convey the illusion of life in a robotic creature, tools for the creation and management of motion sequences are put into the hands of the operator. All motions generated by operator-triggered systems are combined with the motions originating from the autonomous reactive systems; the resulting motion is then smoothed and transmitted to the actuation systems. With future applications to come, Probo is an ideal platform for creating a friendly companion for hospitalised children.
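    As a rough illustration of the motion-blending step described above (operator-triggered sequences combined with autonomous reactive motions, then smoothed before being sent to the actuators), the following hypothetical Python sketch uses an assumed per-joint weighted blend and exponential smoothing; it is not Probo's actual implementation, and all names and parameters are illustrative.

        # Hypothetical sketch of blending operator-triggered and autonomous reactive
        # motions and smoothing the result; not Probo's actual implementation.

        def blend(operator_pose, reactive_pose, operator_weight=0.7):
            """Weighted per-joint blend of two poses (dicts of joint name -> angle)."""
            joints = set(operator_pose) | set(reactive_pose)
            return {
                j: operator_weight * operator_pose.get(j, 0.0)
                   + (1.0 - operator_weight) * reactive_pose.get(j, 0.0)
                for j in joints
            }

        def smooth(previous_pose, target_pose, alpha=0.2):
            """Exponential smoothing toward the blended target to avoid abrupt jumps."""
            return {
                j: previous_pose.get(j, 0.0) + alpha * (target_pose[j] - previous_pose.get(j, 0.0))
                for j in target_pose
            }

        if __name__ == "__main__":
            current = {"eyebrow_left": 0.0, "mouth_corner": 0.0}
            operator = {"eyebrow_left": 0.8, "mouth_corner": 0.4}   # triggered motion sequence
            reactive = {"eyebrow_left": 0.2, "mouth_corner": 0.6}   # autonomous reaction
            target = blend(operator, reactive)
            current = smooth(current, target)
            print(current)  # smoothed joint angles to transmit to the actuation systems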