
    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both for recent and for future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

    Visual Analytics is “the science of analytical reasoning facilitated by visual interactive interfaces” [70]. The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and we provide a thorough overview of the current state of the art. Our analysis has uncovered key patterns of design hinging on human- and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.

    Evaluating content generators

    Evaluating your content generator is a very important task, but difficult to do well. Creating a game content generator in general is much easier than creating a good game content generator. But what is a “good” content generator? That depends very much on what you are trying to create and why. This chapter discusses the importance and the challenges of evaluating content generators, and, more generally, of understanding a generator’s strengths, weaknesses, and suitability for your goals. In particular, we discuss two different approaches to evaluating content generators: visualizing the expressive range of generators, and using questionnaires to understand the impact of your generator on the player. These methods could broadly be called top-down and bottom-up methods for evaluating generators.
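
    The expressive-range approach mentioned in this abstract is easy to illustrate. The sketch below (a minimal Python example, not taken from the chapter) samples many artifacts from a toy generator, scores each one along two placeholder metrics, and plots the resulting density as a 2D histogram; the generator and both metrics are hypothetical stand-ins for whatever is appropriate to a given game.

    # Minimal sketch of expressive-range analysis: sample many artifacts, score each
    # along two metrics, and visualize the density. The generator and both metrics
    # below are toy placeholders, not the chapter's actual examples.
    import random
    import matplotlib.pyplot as plt

    def generate_level(width=40, height=10):
        """Hypothetical generator: a random binary tile grid (1 = solid, 0 = empty)."""
        return [[random.randint(0, 1) for _ in range(width)] for _ in range(height)]

    def linearity(level):
        """Toy metric: fraction of adjacent columns whose top surface is at the same height."""
        tops = [next((y for y, row in enumerate(level) if row[x]), len(level))
                for x in range(len(level[0]))]
        return sum(a == b for a, b in zip(tops, tops[1:])) / (len(tops) - 1)

    def density(level):
        """Toy metric: overall fraction of solid tiles."""
        return sum(sum(row) for row in level) / (len(level) * len(level[0]))

    samples = [generate_level() for _ in range(1000)]
    plt.hist2d([linearity(lv) for lv in samples], [density(lv) for lv in samples],
               bins=20, range=[[0, 1], [0, 1]])
    plt.xlabel("linearity"); plt.ylabel("density")
    plt.title("Expressive range over 1000 sampled levels")
    plt.colorbar(label="count")
    plt.show()

    A narrow, dense blob in such a plot suggests the generator produces very similar content; a broad spread suggests a wider expressive range, which may or may not be desirable depending on your goals.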

    Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration

    We present a method for learning a human-robot collaboration policy from human-human collaboration demonstrations. An effective robot assistant must learn to handle the diverse human behaviors shown in the demonstrations and be robust when humans adjust their strategies during online task execution. Our method co-optimizes a human policy and a robot policy in an interactive learning process: the human policy learns to generate diverse and plausible collaborative behaviors from demonstrations, while the robot policy learns to assist by estimating the unobserved latent strategy of its human collaborator. Across a 2D strategy game, a human-robot handover task, and a multi-step collaborative manipulation task, our method outperforms the alternatives both in simulated evaluations and when executing the tasks with a real human operator in the loop. Supplementary materials and videos are available at https://sites.google.com/view/co-gail-web/home
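
    To make the co-optimization idea concrete, here is a schematic PyTorch sketch in the spirit of the abstract: a generative “human” policy and an assisting “robot” policy are both conditioned on an inferred latent strategy and updated jointly from human-human demonstrations. Every module name, dimension, and loss below is an illustrative placeholder (simple behavior-cloning surrogates), not the paper’s actual GAIL-based algorithm.

    # Schematic sketch of an interactive co-optimization loop. All modules, losses,
    # and dimensions are illustrative placeholders, not Co-GAIL's actual code.
    import torch
    import torch.nn as nn

    OBS, ACT, LATENT = 16, 4, 8

    human_policy = nn.Sequential(nn.Linear(OBS + LATENT, 64), nn.Tanh(), nn.Linear(64, ACT))
    robot_policy = nn.Sequential(nn.Linear(OBS + LATENT, 64), nn.Tanh(), nn.Linear(64, ACT))
    latent_encoder = nn.Sequential(nn.Linear(OBS, 64), nn.Tanh(), nn.Linear(64, LATENT))
    optim = torch.optim.Adam(
        list(human_policy.parameters()) + list(robot_policy.parameters())
        + list(latent_encoder.parameters()), lr=3e-4)

    def training_step(demo_obs, demo_human_act, demo_robot_act):
        """One placeholder update on a batch of human-human demonstration transitions."""
        z = latent_encoder(demo_obs)  # inferred latent strategy of the human
        human_act = human_policy(torch.cat([demo_obs, z], dim=-1))
        robot_act = robot_policy(torch.cat([demo_obs, z], dim=-1))
        # Behavior-cloning surrogate losses; the real method uses adversarial
        # imitation and co-optimizes the two policies interactively.
        loss = (nn.functional.mse_loss(human_act, demo_human_act)
                + nn.functional.mse_loss(robot_act, demo_robot_act))
        optim.zero_grad(); loss.backward(); optim.step()
        return loss.item()

    # Usage with random stand-in data:
    batch = torch.randn(32, OBS), torch.randn(32, ACT), torch.randn(32, ACT)
    print(training_step(*batch))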

    Data-Driven Imitation Learning for a Shopkeeper Robot with Periodically Changing Product Information

    Data-driven imitation learning enables service robots to learn social interaction behaviors, but these systems cannot adapt after training to changes in the environment, such as changing products in a store. To solve this, a novel learning system is proposed that uses neural attention and approximate string matching to copy information from a product information database into its output. A camera shop interaction dataset was simulated for training and testing. The proposed system was found to outperform a baseline and a previous state of the art in an offline, human-judged evaluation.
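
    The copy-from-database idea can be illustrated in a few lines of Python: fuzzy-match a (possibly misrecognized) product mention against a product-information table and fill the matched record’s fields into a reply. difflib stands in here for whatever matcher the system actually uses, and the table, fields, and prices are invented for the example.

    # Minimal sketch of copying from a product database: fuzzy-match a noisy mention
    # against product names, then fill the matched record's fields into a response.
    # The table, fields, and template are hypothetical.
    from difflib import SequenceMatcher

    PRODUCTS = [
        {"name": "Lumix GX85", "price": "54,800 yen", "feature": "in-body stabilization"},
        {"name": "EOS Kiss X9", "price": "62,000 yen", "feature": "lightweight DSLR body"},
        {"name": "Alpha a6000", "price": "58,500 yen", "feature": "fast autofocus"},
    ]

    def best_match(mention, products):
        """Return the product whose name is most similar to the noisy mention."""
        score = lambda p: SequenceMatcher(None, mention.lower(), p["name"].lower()).ratio()
        return max(products, key=score)

    def answer_price(mention):
        product = best_match(mention, PRODUCTS)
        return f"The {product['name']} is {product['price']}."

    print(answer_price("lumix gx 85"))  # -> "The Lumix GX85 is 54,800 yen."

    Because the response copies fields from the database at run time, updating the table (new products, new prices) changes the robot’s answers without retraining, which is the adaptability the abstract highlights.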

    On the Margins of the Machine: Heteromation and Robotics

    Growing interest in robotics in policy and professional circles promises a future where machines will perform many of the social and institutional functions that have traditionally belonged to human beings. This promise is based on the unexamined premise that robots can act autonomously, without much support from their human users. Close examination of current social robots, however, introduces a different image, in which human labor is critically needed for any meaningful operation of these systems. Such labor is normally unacknowledged and made invisible in media and academic portrayals of robotic systems. We take issue with this erasure, and seek to bring human labor to the fore. Drawing on the concept of “heteromation,” we illustrate the indispensable role of human labor in the functioning of many existing technological systems. Given current uncertainties in the robotic design space, we explore various scenarios for the future development of these systems, and the different ways in which they might unfold.

    SocialAI: Benchmarking Socio-Cognitive Abilities in Deep Reinforcement Learning Agents

    Building embodied autonomous agents capable of participating in social interactions with humans is one of the main challenges in AI. Within the Deep Reinforcement Learning (DRL) field, this objective has motivated multiple works on embodied language use. However, current approaches focus on language as a communication tool in very simplified and non-diverse social situations: the "naturalness" of language is reduced to the concept of high vocabulary size and variability. In this paper, we argue that aiming towards human-level AI requires a broader set of key social skills: 1) language use in complex and variable social contexts; 2) beyond language, complex embodied communication in multimodal settings within constantly evolving social worlds. We explain how concepts from cognitive sciences could help AI to draw a roadmap towards human-like intelligence, with a focus on its social dimensions. As a first step, we propose to expand current research to a broader set of core social skills. To do this, we present SocialAI, a benchmark to assess the acquisition of social skills of DRL agents using multiple grid-world environments featuring other (scripted) social agents. We then study the limits of a recent state-of-the-art DRL approach when tested on SocialAI and discuss important next steps towards proficient social agents. Videos and code are available at https://sites.google.com/view/socialai
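
    As a rough illustration of how such a benchmark is typically consumed, the sketch below rolls an agent through episodes of a Gym-registered grid-world environment and reports the mean return. The environment id and the agent’s act() interface are assumptions made for the illustration, not SocialAI’s actual API.

    # Minimal sketch of a benchmark evaluation loop. The env id is hypothetical and
    # the classic Gym step/reset API (pre-0.26) is assumed.
    import gym

    class RandomAgent:
        """Stand-in agent that samples uniformly from the environment's action space."""
        def __init__(self, action_space):
            self.action_space = action_space

        def act(self, obs):
            return self.action_space.sample()

    def evaluate(agent_factory, env_id="SocialAI-SomeTask-v0", episodes=10):
        env = gym.make(env_id)
        agent = agent_factory(env.action_space)
        returns = []
        for _ in range(episodes):
            obs = env.reset()
            done, total = False, 0.0
            while not done:
                action = agent.act(obs)  # agent is assumed to expose act(obs)
                obs, reward, done, info = env.step(action)
                total += reward
            returns.append(total)
        return sum(returns) / len(returns)

    # print(evaluate(RandomAgent))  # would run if the benchmark's envs are installed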