    Spatial context-aware person-following for a domestic robot

    Domestic robots are a focus of research as service providers in households and even as robotic companions that share living space with humans. A major capability of a mobile domestic robot is the joint exploration of space. One challenge in this task is letting the robot move through space in reasonable, socially acceptable ways that support interaction and communication as part of the joint exploration. As a step towards this challenge, we have developed a context-aware following behavior that takes these social aspects into account and combined it with a multi-modal person-tracking method to switch between three basic following approaches, namely direction-following, path-following and parallel-following. These are derived from observations of human-human following schemes and are activated depending on the current spatial context (e.g. free space) and the relative position of the interacting human. A combination of the elementary behaviors is performed in real time with our mobile robot in different environments. First experimental results demonstrate the practicability of the proposed approach.
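
    As a rough, hedged illustration of the behavior switching described above, the Python sketch below selects among the three following approaches from a simple notion of spatial context. The class names, fields and thresholds are invented for illustration and are not taken from the paper.

    from dataclasses import dataclass
    from enum import Enum, auto

    class FollowMode(Enum):
        DIRECTION = auto()  # steer toward the person's current position
        PATH = auto()       # retrace the person's trajectory
        PARALLEL = auto()   # move side by side with the person

    @dataclass
    class SpatialContext:
        free_width_m: float        # lateral free space around robot and person (assumed measure)
        person_bearing_deg: float  # person's bearing relative to robot heading

    def select_follow_mode(ctx: SpatialContext) -> FollowMode:
        """Pick a following behavior from the current spatial context."""
        if ctx.free_width_m < 1.5:
            # Too narrow to walk abreast: retrace the person's path.
            return FollowMode.PATH
        if abs(ctx.person_bearing_deg) > 45.0:
            # Person is well off to one side: walk alongside them.
            return FollowMode.PARALLEL
        # Open space, person roughly ahead: head straight toward them.
        return FollowMode.DIRECTION

    For example, select_follow_mode(SpatialContext(0.9, 10.0)) returns FollowMode.PATH in a narrow corridor, while SpatialContext(3.0, 60.0) yields FollowMode.PARALLEL.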

    User localization during human-robot interaction

    This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main requisites for natural interaction, whether human-human or human-robot, is an adequate spatial arrangement of the interlocutors, that is, being oriented and situated at the right distance during the conversation in order to have a satisfactory communicative process. Our social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process. One of its main components is the user localization system presented here. Determining the most suitable placement of the robot in relation to the user requires a proxemic study of human-robot interaction, which is described in this paper. The study was conducted with two groups of users: children aged between 8 and 17, and adults. Finally, experimental results with the proposed multimodal dialog system are presented. The authors gratefully acknowledge the funds provided by the Spanish Government through the project “A new approach to social robotics” (AROS) of MICINN (Ministry of Science and Innovation).
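
    As a hedged sketch of how a visual bearing estimate might be fused with a sound-source bearing, the snippet below uses an inverse-variance weighted average computed on the unit circle; the function name and the weighting scheme are illustrative assumptions, not the method actually used on Maggie.

    import math

    def fuse_bearings(visual_deg: float, visual_var: float,
                      audio_deg: float, audio_var: float) -> float:
        """Fuse two bearing estimates (degrees) by inverse-variance weighting.

        Averaging is done on the unit circle so angles near the +/-180
        degree wrap-around are handled correctly.
        """
        w_v, w_a = 1.0 / visual_var, 1.0 / audio_var
        x = w_v * math.cos(math.radians(visual_deg)) + w_a * math.cos(math.radians(audio_deg))
        y = w_v * math.sin(math.radians(visual_deg)) + w_a * math.sin(math.radians(audio_deg))
        return math.degrees(math.atan2(y, x))

    # Example: trust the camera (variance 2.0) more than the microphone array (8.0).
    print(fuse_bearings(10.0, 2.0, 40.0, 8.0))  # roughly 16 degrees, closer to vision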

    Socially Believable Robots

    Long-term companionship, emotional attachment and realistic interaction with robots have long been projected as the ultimate sign of technological advancement by sci-fi literature and the entertainment industry. With the advent of artificial intelligence, we have indeed stepped into an era of socially believable robots and humanoids. Affective computing has enabled the deployment of emotional or social robots, to a certain level, in social settings such as informatics, customer service and health care. Nevertheless, the social believability of a robot is communicated through its physical embodiment and natural expressiveness. With each passing year, innovations in chemical and mechanical engineering have facilitated life-like robotic embodiments; however, much work is still required to develop a “social intelligence” in a robot that can maintain the illusion of dealing with a real human being. This chapter is a collection of research studies on the modeling of complex autonomous systems. It further sheds light on how different social settings require different levels of social intelligence and on the implications of integrating a socially and emotionally believable machine into a society driven by behaviors and actions.

    SPA: Verbal Interactions between Agents and Avatars in Shared Virtual Environments using Propositional Planning

    We present a novel approach for generating plausible verbal interactions between virtual human-like agents and user avatars in shared virtual environments. Sense-Plan-Ask, or SPA, extends prior work in propositional planning and natural language processing to enable agents to plan with uncertain information and to leverage question-and-answer dialogue with other agents and avatars to obtain the information needed to complete their goals. The agents are additionally able to respond to questions from the avatars and other agents using natural language, enabling real-time multi-agent, multi-avatar communication environments. Our algorithm can simulate tens of virtual agents at interactive rates as they interact, move, communicate, plan, and replan. We find that our algorithm incurs only a small runtime cost and enables agents to complete their goals more effectively than agents without the ability to leverage natural-language communication. We demonstrate quantitative results on a set of simulated benchmarks and detail the results of a preliminary user study conducted to evaluate the plausibility of the virtual interactions generated by SPA. Overall, we find that participants prefer SPA to prior techniques in 84% of responses, with significant benefits in terms of the plausibility of natural-language interactions and the positive impact of those interactions.
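
    The plan-with-uncertainty-then-ask loop at the heart of SPA can be caricatured in a few lines of Python; the Agent class, proposition names and question protocol below are invented stand-ins for illustration and do not reflect the paper's actual algorithm or API.

    class Agent:
        """Toy agent holding a set of known propositions."""
        def __init__(self, name, knowledge):
            self.name = name
            self.knowledge = dict(knowledge)  # proposition -> value

        def ask(self, prop):
            # Answer a question if this agent knows the proposition.
            return self.knowledge.get(prop)

    def resolve_by_asking(agent, needed_props, others):
        """Fill gaps in the agent's knowledge by querying nearby agents/avatars."""
        for prop in needed_props:
            if prop in agent.knowledge:
                continue  # already known; no question needed
            for other in others:
                answer = other.ask(prop)
                if answer is not None:
                    agent.knowledge[prop] = answer  # incorporate the answer
                    break
        # Planning can proceed once every needed proposition is resolved.
        return all(p in agent.knowledge for p in needed_props)

    alice = Agent("alice", {"door_open": True})
    bob = Agent("bob", {"key_location": "desk"})
    robot = Agent("robot", {})
    print(resolve_by_asking(robot, ["door_open", "key_location"], [alice, bob]))  # True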

    Towards a synthetic tutor assistant: The EASEL project and its architecture

    Robots are gradually but steadily being introduced into our daily lives. A paramount application is education, where robots can assume the role of a tutor, a peer or simply a tool that helps learners in a specific knowledge domain. Such an endeavor poses specific challenges: affective social behavior, proper modelling of the learner’s progress, and discrimination of the learner’s utterances, expressions and mental states, which, in turn, require an integrated architecture combining perception, cognition and action. In this paper we present an attempt to improve the current state of robots in the educational domain by introducing the EASEL EU project. Specifically, we introduce EASEL’s unified robot architecture, an innovative Synthetic Tutor Assistant (STA) whose goal is to interactively guide learners in a science-based learning paradigm, allowing us to achieve such rich multimodal interactions.

    Vocal Interactivity in-and-between Humans, Animals, and Robots

    Almost all animals exploit vocal signals for a range of ecologically motivated purposes: detecting predators or prey, marking territory, expressing emotions, establishing social relations, and sharing information. Whether it is a bird raising an alarm, a whale calling to potential partners, a dog responding to human commands, a parent reading a story with a child, or a business-person accessing stock prices using Siri, vocalization provides a valuable communication channel through which behavior may be coordinated and controlled, and information may be distributed and acquired. Indeed, the ubiquity of vocal interaction has led to research across an extremely diverse array of fields, from assessing animal welfare, to understanding the precursors of human language, to developing voice-based human–machine interaction. Opportunities for cross-fertilization between these fields abound; for example, using artificial cognitive agents to investigate contemporary theories of language grounding, using machine learning to analyze different habitats, or adding vocal expressivity to the next generation of language-enabled autonomous social agents. However, much of the research is conducted within well-defined disciplinary boundaries, and many fundamental issues remain. This paper attempts to redress the balance by presenting a comparative review of vocal interaction within and between humans, animals, and artificial agents (such as robots), and it identifies a rich set of open research questions that may benefit from an interdisciplinary analysis.

    A truly human interface: interacting face-to-face with someone whose words are determined by a computer program

    We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating, in real time, vocal stimuli originating from a separate communication source. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (“echoborgs”) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg did not sense a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human–computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence.