96 research outputs found

    Form, Function and Etiquette – Potential Users’ Perspectives on Social Domestic Robots

    Get PDF
    Social Domestic Robots (SDRs) will soon be launched en masse into commercial markets. Previously, social robots inhabited only scientific labs; now there is an opportunity to conduct experiments investigating human-robot relationships (including user expectations of social interaction) within more naturalistic, domestic spaces, as well as to test models of technology acceptance. To this end we exposed 20 participants to advertisements prepared by three robotics companies, explaining and “pitching” their SDRs’ functionality (namely, Pepper by SoftBank; Jibo by Jibo, Inc.; and Buddy by Blue Frog Robotics). Participants were interviewed and the data were thematically analyzed to critically examine their initial reactions, concerns and impressions of the three SDRs. Using this approach, we aim to complement existing survey results pertaining to SDRs, and to understand the reasoning people use when evaluating SDRs based on what is publicly available to them, namely, advertising. Herein, we unpack issues raised concerning form/function, security/privacy, and the perceived emotional impact of owning an SDR. We discuss implications for the adequate design of socially engaged robotics for domestic applications, and provide four practical steps that could improve the relationships between people and SDRs. An additional contribution is made by expanding existing models of technology acceptance in domestic settings with a new factor: privacy.

    Developing an Affect-Aware Rear-Projected Robotic Agent

    Social (or sociable) robots are designed to interact with people in a natural and interpersonal manner. They are becoming an integrated part of our daily lives and have achieved positive outcomes in several applications such as education, health care, quality of life, and entertainment. Despite significant progress towards the development of realistic social robotic agents, a number of problems remain to be solved. First, current social robots either lack sufficient ability for deep social interaction with humans, or they are very expensive to build and maintain. Second, current social robots have yet to reach the full emotional and social capabilities necessary for rich and robust interaction with human beings. To address these problems, this dissertation presents the development of a low-cost, flexible, affect-aware rear-projected robotic agent (called ExpressionBot) that is designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The developed robotic platform uses state-of-the-art character animation technologies to create an animated human face (aka avatar) that is capable of showing facial expressions, realistic eye movement, and accurate visual speech, and then projects this avatar onto a face-shaped translucent mask. The mask and the projector are then rigged onto a neck mechanism that can move like a human head. Since an animation is projected onto a mask, the robotic face is a highly flexible research tool, mechanically simple, and low-cost to design, build and maintain compared with mechatronic and android faces. The results of our comprehensive Human-Robot Interaction (HRI) studies illustrate the benefits and value of the proposed rear-projected robotic platform over a virtual agent with the same animation displayed on a 2D computer screen.
The results indicate that ExpressionBot is well accepted by users, with some advantages in expressing facial expressions more accurately and perceiving mutual eye-gaze contact. To improve the social capabilities of the robot and create an expressive and empathic (affect-aware) social agent capable of interpreting users' emotional facial expressions, we developed a new Deep Neural Network (DNN) architecture for Facial Expression Recognition (FER). The proposed DNN was initially trained on seven well-known publicly available databases, and obtained results significantly better than, or comparable to, those of traditional convolutional neural networks and other state-of-the-art methods in both accuracy and learning time. Since the performance of an automated FER system depends highly on its training data, and the eventual goal of the proposed robotic platform is to interact with users in an uncontrolled environment, a database of facial expressions in the wild (called AffectNet) was created by querying emotion-related keywords from different search engines. AffectNet contains more than 1M images with faces and 440,000 manually annotated images with facial expressions, valence, and arousal. Two DNNs were trained on AffectNet to classify the facial expression images and to predict the values of valence and arousal. Various evaluation metrics show that our deep neural network approaches trained on AffectNet can perform better than conventional machine learning methods and available off-the-shelf FER systems. We then integrated this automated FER system into the spoken dialog of our robotic platform to extend and enrich the capabilities of ExpressionBot beyond spoken dialog and create an affect-aware robotic agent that can measure and infer users' affect and cognition. Three social/interaction aspects (task engagement, being empathic, and likability of the robot) were measured in an experiment with the affect-aware robotic agent.
The results indicate that users rated our affect-aware agent as being as empathic and likable as a robot in which the user's affect is recognized by a human (Wizard-of-Oz). In summary, this dissertation presents the development and HRI studies of a perceptive, expressive, conversational, rear-projected, life-like robotic agent (aka ExpressionBot or Ryan) that models natural face-to-face communication between a human and an empathic agent. The results of our in-depth human-robot interaction studies show that this robotic agent can serve as a model for creating the next generation of empathic social robots.

    Telethrone: a situated display using retro-reflection based multi-view toward remote collaboration in small dynamic groups

    This research identifies a gap in tele-communication technology. Several novel technology demonstrators are tested experimentally throughout the research. The presented final system allows a remote participant in a conversation to unambiguously address individual members of a group of 5 people using non-verbal cues. The capability to link less formal groups through technology is the primary contribution. Technology-mediated communication is first reviewed, with attention to the different supported styles of meetings. A gap is identified for small informal groups. Small dynamic groups which are convened on demand for the solution of specific problems may be called “ad-hoc”. In these meetings it is possible to ‘pull up a chair’. This is poorly supported by current tele-communication tools; that is, it is difficult for one or more members to join such a meeting from a remote location. It is also difficult for physically co-located parties to reorient themselves in the meeting as goals evolve. As the major contribution toward addressing this, the ’Telethrone’ is introduced. Telethrone projects a remote user onto a chair, bringing them into your space. The chair acts as a situated display, which can support multi-party head gaze, eye gaze, and body torque. Each observer knows where the projected user is looking. It is simpler to implement and cheaper than current comparable systems. The underpinning approach is technology and systems development, with regard to HCI and psychology throughout. Prototypes, refinements, and novel engineered systems are presented. Two experiments to test these systems are peer-reviewed, and further design & experimentation undertaken based on the positive results. The final paper is pending. An initial version of the new technology approach combined retro-reflective material with aligned pairs of cameras and projectors, connected by IP video. A counterbalanced repeated-measures experiment to analyse gaze interactions was undertaken.
Results suggest that the remote user is not excluded from triadic poker game-play. Analysis of the multi-view aspect of the system was inconclusive as to whether it shows an advantage over a set-up which does not support multi-view. User impressions from the questionnaires suggest that the current implementation still gives the impression of being a display despite its situated nature, although participants did feel the remote user was in the space with them. A refinement of the system using models generated by visual hull reconstruction can better connect eye gaze. An exploration is made of its ability to allow chairs to be moved around the meeting, and what this might enable for the participants of the meeting. The ability to move furniture was earlier identified as an aid to natural interaction, but may also affect highly correlated subgroups in an ad-hoc meeting. This is unsupported by current technologies. Repositioning of several onlooking chairs seems to support ’fault lines’. Performance constraints of the current system are explored. An experiment tests whether it is possible to judge remote participant eye gaze as the viewer changes location, attempting to address concerns raised by the first experiment, in which the physical offsets of the IP camera lenses from the projected eyes of the remote participants (in both directions) may have influenced perception of attention. A third experiment shows that five participants viewing a remote recording, presented through the Telethrone, can judge the attention of the remote participant accurately when the viewpoint is correctly rendered for their location in the room. This is compared to a control in which spatial discrimination is impossible. A figure for how many optically separate retro-reflected segments can be supported is obtained through spatial analysis and testing. It is possible to render the optical maximum of 5 independent viewpoints, supporting an ’ideal’ meeting of 6 people.
The tested system uses one computer at the meeting side of the exchange, making it potentially deployable from a small flight case. The thesis presents and tests the utility of elements toward a system, and finds that remote users are in the conversation, spatially segmented with a view for each onlooker, that eye gaze can be reconnected through the system using 3D video, and that performance supports scalability up to the theoretical maximum for the material and an ideal meeting size.

    Design and semantics of form and movement

    Contemporary cognitive science and neuroscience offer us some rather precise insights into the mechanisms that are responsible for certain body movements. In this paper, we argue that this knowledge may be highly relevant to the design of meaningful movement and behavior, both from a theoretical and a practical point of view. Taking the example of a leech, we investigate and identify the basic principles of "embodied movement" that govern the motion of this simple creature, and argue that the development and adoption of a design methodology that incorporates these principles right from the start may be the best way forward if one wants to realize and design movements with certain desirable characteristics.

    A Retro-Projected Robotic Head for Social Human-Robot Interaction

    As people respond strongly to faces and facial features, both consciously and subconsciously, faces are an essential aspect of social robots. Robotic faces and heads until recently belonged to one of the following categories: virtual, mechatronic or animatronic. As an original contribution to the field of human-robot interaction, I present the R-PAF technology (Retro-Projected Animated Faces): a novel robotic head displaying a real-time, computer-rendered face, retro-projected from within the head volume onto a mask, as well as its driving software designed with openness and portability to other hybrid robotic platforms in mind. The work constitutes the first implementation of a non-planar mask suitable for social human-robot interaction, comprising key elements of social interaction such as precise gaze direction control, facial expressions and blushing, and the first demonstration of an interactive video-animated facial mask mounted on a 5-axis robotic arm. The LightHead robot, a R-PAF demonstrator and experimental platform, has demonstrated robustness in both extended controlled and uncontrolled settings. The iterative hardware and facial design, details of the three-layered software architecture and tools, the implementation of life-like facial behaviours, as well as improvements in social-emotional robotic communication are reported. Furthermore, a series of evaluations present the first study on human performance in reading robotic gaze and another first on users’ ethnic preference towards a robot face.

    The 1988 Goddard Conference on Space Applications of Artificial Intelligence

    This publication comprises the papers presented at the 1988 Goddard Conference on Space Applications of Artificial Intelligence held at the NASA/Goddard Space Flight Center, Greenbelt, Maryland on May 24, 1988. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in these proceedings fall into the following areas: mission operations support, planning and scheduling; fault isolation/diagnosis; image processing and machine vision; data management; modeling and simulation; and development tools/methodologies.

    Information Outlook, September 2000

    Volume 4, Issue 9

    Sustaining Emotional Communication when Interacting with an Android Robot

