2 research outputs found

    A Cloud-Based Extensible Avatar For Human Robot Interaction

    Adding an interactive avatar to a human-robot interface requires tools that animate the avatar so as to simulate an intelligent conversation partner. Here we describe a toolkit that supports interactive avatar modeling for human-computer interaction. The toolkit utilizes cloud-based speech-to-text software that provides active listening, a cloud-based AI to generate appropriate textual responses to user queries, and a cloud-based text-to-speech engine to generate utterances for this text. This output is combined with a cloud-based 3D avatar animation synchronized to the spoken response. Generated text responses are embedded within an XML structure that allows the nature of the avatar animation to be tuned to simulate different emotional states. An expression package controls the avatar's facial expressions. The introduced rendering latency is obscured through parallel processing and an idle-loop process that animates the avatar between utterances. The efficiency of the approach is validated through a formal user study.
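    The abstract's pipeline could be sketched as follows. This is a minimal illustration, not the paper's implementation: the XML element and attribute names (`utterance`, `emotion`, `intensity`) and the `synthesize`/`animate` callables are hypothetical stand-ins for the cloud services the paper describes, and the threading shows the idea of overlapping speech synthesis with animation rendering to hide latency.

    ```python
    import threading
    import xml.etree.ElementTree as ET

    # Hypothetical envelope of the kind the abstract describes: generated
    # text embedded in XML that tunes the avatar's emotional state.
    SAMPLE = '<utterance emotion="happy" intensity="0.7">Nice to meet you!</utterance>'

    def parse_utterance(xml_text):
        """Extract the spoken text and animation cues from the XML envelope."""
        node = ET.fromstring(xml_text)
        return {
            "text": node.text,
            "emotion": node.get("emotion", "neutral"),
            "intensity": float(node.get("intensity", "0.5")),
        }

    def speak_and_animate(utt, synthesize, animate):
        """Run TTS and animation rendering in parallel threads so rendering
        latency is overlapped with speech synthesis rather than added to it."""
        t_speech = threading.Thread(target=synthesize, args=(utt["text"],))
        t_anim = threading.Thread(target=animate, args=(utt["emotion"], utt["intensity"]))
        t_speech.start()
        t_anim.start()
        t_speech.join()
        t_anim.join()
    ```

    Between utterances, a separate idle loop would keep calling `animate` with a neutral state so the avatar never freezes while waiting for the next response.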

    Tangible interfaces for robot teleoperation

    In this paper we present results from an experimental evaluation of tangible user interfaces (TUIs), comparing their novel interaction paradigms with more conventional interfaces such as a joypad and a keyboard. Our main goal is a formal assessment of TUIs in robotics through a rigorous and extensive experimental evaluation. First, we identify the main benefits of TUIs for robot teleoperation in an urban search and rescue task. Second, we provide an evaluation framework that allows tangible interfaces to be compared effectively with other input devices.