
    Human Robot Interaction through Semantic Integration of Multiple Modalities, Dialog Management, and Contexts

    The hypothesis for this research is that applying the Human Computer Interaction (HCI) concepts of multiple modalities, dialog management, context, and semantics to Human Robot Interaction (HRI) will improve the performance of Instruction Based Learning (IBL) compared to using speech alone. We tested the hypothesis by simulating a domestic robot that can be taught to clean a house through a multi-modal interface. Our method of semantically integrating the inputs from multiple modalities and contexts multiplies the confidence score of each input by a Fusion Weight, sums the products, and then selects the input with the highest product sum; we also developed an algorithm for determining the Fusion Weights. We concluded that different modalities, contexts, and modes of dialog management do impact human-robot interaction; however, which combination is better depends on the relative importance of the accuracy of learning what is taught versus the succinctness of the dialog between the user and the robot.
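
    As a rough illustration of the fusion scheme described above, here is a minimal sketch in Python. The function and variable names, the example modalities, and the weight values are all hypothetical, and the abstract's algorithm for determining the Fusion Weights is not reproduced here:

        from collections import defaultdict

        def fuse_inputs(candidates, fusion_weights):
            # candidates: list of (modality, interpretation, confidence) tuples.
            # fusion_weights: dict mapping modality name -> Fusion Weight.
            scores = defaultdict(float)
            for modality, interpretation, confidence in candidates:
                # Multiply each input's confidence by its modality's Fusion
                # Weight and accumulate the products per interpretation.
                scores[interpretation] += confidence * fusion_weights[modality]
            # Use the interpretation with the highest product sum.
            return max(scores, key=scores.get)

        # Hypothetical example: speech and gesture agree on "kitchen",
        # while context alone suggests "hallway".
        candidates = [
            ("speech", "kitchen", 0.6),
            ("gesture", "kitchen", 0.5),
            ("context", "hallway", 0.9),
        ]
        weights = {"speech": 0.5, "gesture": 0.3, "context": 0.2}
        print(fuse_inputs(candidates, weights))  # kitchen (0.45 vs. 0.18)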

    Mixed-initiative Multirobot Control in USAR


    ExGenNet: Learning to Generate Robotic Facial Expression Using Facial Expression Recognition

    The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process scales better to different robot types and to a wider range of expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expression generation on humanoid robots. An ExGenNet connects a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robot's joint configurations are optimized for various expressions by backpropagating the loss between the predicted and the intended expression through the classifier network and the generator network. To improve transfer between human training images and images of different robots, we propose to use extracted features in both the classifier and the generator. Unlike most work on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This was further validated in a pilot study in which the majority of human subjects accurately recognized most of the generated expressions on both robots.
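
    A minimal sketch of the joint-configuration optimization described above, in Python with PyTorch. The stand-in networks, dimensions, and hyperparameters are assumptions (ExGenNet's actual generator and classifier are trained image networks), but the core idea is the same: freeze both networks and backpropagate the expression loss into the joint configuration only:

        import torch
        import torch.nn as nn

        # Tiny stand-ins for the pretrained generator and classifier
        # (sizes are made up for illustration).
        generator = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 32 * 32))
        classifier = nn.Sequential(nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 7))

        # Both networks stay frozen; only the joint configuration is learned.
        for net in (generator, classifier):
            for p in net.parameters():
                p.requires_grad_(False)

        joints = torch.zeros(1, 12, requires_grad=True)  # 12 face joints (assumed)
        target = torch.tensor([3])                       # intended expression index
        optimizer = torch.optim.Adam([joints], lr=0.05)
        loss_fn = nn.CrossEntropyLoss()

        for step in range(200):
            optimizer.zero_grad()
            image = generator(joints)       # reconstruct a simplified face image
            logits = classifier(image)      # recognize the expression it shows
            loss = loss_fn(logits, target)  # predicted vs. intended expression
            loss.backward()                 # gradients flow through both networks
            optimizer.step()                # but update only the joints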

    Interacting with Autonomous Vehicles: Learning from other Domains

    The rise of ever more autonomy in vehicles and the expected introduction of self-driving cars have brought human interaction with such systems into focus from an HCI perspective in recent years. Automotive User Interface researchers have been investigating issues such as transition control procedures, shared control, (over)trust, and overall user experience in automated vehicles. Now it is time to open the research field of automated driving to other CHI research fields, such as Human-Robot Interaction (HRI), aeronautics and space, conversational agents, or smart devices. These communities have been dealing with the interplay between humans and automated systems for more than 30 years. In this workshop, we aim to provide a forum to discuss what can be learnt from other domains for the design of autonomous vehicles. Interaction design problems that occur in these domains, such as transition control procedures, how to build trust in the system, and ethics, will be discussed.

    Energy Efficient Personalized Hand-Gesture Recognition with Neuromorphic Computing

    Hand gestures are a form of non-verbal communication used in social interaction, and recognizing them is therefore required for more natural human-robot interaction. Neuromorphic (brain-inspired) computing offers a low-power platform for spiking neural networks (SNNs), which can be used for the classification and recognition of gestures. This article introduces preliminary results of a novel methodology for training spiking convolutional neural networks for hand-gesture recognition, so that a humanoid robot with integrated neuromorphic hardware can personalise its interaction with a user according to the hand gesture shown. It also describes other approaches that could improve the overall performance of the model.
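
    For background on the spiking mechanism, here is a minimal leaky integrate-and-fire (LIF) layer in Python/NumPy, the neuron model underlying most SNNs. This illustrates spiking dynamics only; it is not the article's training methodology, and all names and values are hypothetical:

        import numpy as np

        def lif_layer(input_current, n_steps=100, tau=20.0, v_thresh=1.0, v_reset=0.0):
            # Simulate one layer of LIF neurons driven by a constant input
            # current per neuron; return the spike count per neuron.
            v = np.zeros_like(input_current)       # membrane potentials
            spikes = np.zeros_like(input_current)  # spike counts
            for _ in range(n_steps):
                # Leaky integration: potential decays toward 0, driven by input.
                v += (input_current - v) / tau
                fired = v >= v_thresh              # threshold crossings spike
                spikes += fired
                v[fired] = v_reset                 # reset fired neurons
            return spikes

        # Toy readout: the class whose neuron receives the strongest
        # evidence fires most often (input values are made up).
        evidence = np.array([0.9, 1.4, 1.1])
        print(np.argmax(lif_layer(evidence)))  # -> 1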