
    Generating spatial referring expressions in a social robot: Dynamic vs. non-ambiguous

    Generating spatial referring expressions is key to allowing robots to communicate with people about their environment. Most generation algorithms focus on creating a non-ambiguous description, and on how best to handle the combinatorial explosion this can create in a complex environment. However, this is not how people naturally communicate. Humans tend to give an under-specified description and then rely on a strategy of repair to reduce the number of possible locations or objects until the correct one is identified; we refer to this here as a dynamic description. We present a method for generating these dynamic descriptions for Human-Robot Interaction, using machine learning to generate repair statements. We also present a study with 61 participants in an object placement task, presented in a 2D environment that favoured a non-ambiguous description. In this study we demonstrate that our dynamic method of communication can be more efficient for people identifying a location than a non-ambiguous one.
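    The under-specified-then-repair strategy described above can be pictured as a simple filtering loop: each repair statement removes candidates until only one remains. The sketch below is a hypothetical illustration only, not the paper's machine-learning system; the barrel attributes and function names are invented.

```python
# Toy sketch of a dynamic (repair-based) referring strategy:
# start under-specified, then emit repair statements that each
# narrow the candidate set until one referent remains.

def dynamic_describe(candidates, attributes):
    """Return (repair statements issued, remaining candidates).

    The first candidate plays the role of the intended referent
    in this toy example.
    """
    remaining = list(candidates)
    statements = []
    for attr in attributes:
        if len(remaining) == 1:
            break  # referent uniquely identified, stop repairing
        target_value = remaining[0][attr]
        statements.append(f"the one that is {target_value}")
        # keep only candidates consistent with the repair statement
        remaining = [c for c in remaining if c[attr] == target_value]
    return statements, remaining

barrels = [
    {"colour": "red", "position": "left"},
    {"colour": "red", "position": "right"},
    {"colour": "blue", "position": "left"},
]
repairs, found = dynamic_describe(barrels, ["colour", "position"])
# two repair statements suffice to isolate the first barrel
```

    A non-ambiguous generator would instead search for one description that excludes all distractors up front, which is where the combinatorial cost arises.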

    The blame game: double standards apply to autonomous vehicle accidents

    Who is to blame when autonomous vehicles are involved in accidents? We report findings from an online study in which attributions of blame and trust were measured from 206 participants who studied 18 hypothetical vignettes portraying traffic incidents in different driving environments. The focal vehicle involved in each incident was controlled either by a human driver or by an autonomous system. Accident severity also varied, from near miss to minor accident to major accident. Participants applied double standards when assigning blame to humans and autonomous systems: an autonomous system was usually blamed more than a human driver for executing the same actions under the same circumstances with the same consequences. These findings not only have important implications for AI-related legislation, but also highlight the need to design robots and other automation systems in ways that help calibrate public perceptions and expectations of their characteristics and capabilities.

    The effectiveness of dynamically processed incremental descriptions in human robot interaction

    We explore the effectiveness of a dynamically processed incremental referring description system that uses under-specified, ambiguous descriptions which are then built upon with linguistic repair statements; we refer to this as a dynamic system. We build a dynamically processed incremental referring description generation system that can provide contextual navigational statements to describe an object in a potential real-world scenario of nuclear waste sorting and maintenance. In a study with 31 participants, we test the dynamic system in a case where a user remotely operates a robot to sort nuclear waste, with the robot assisting them in identifying the correct barrels to be removed. We compare this against a static non-ambiguous description given in the same scenario. As well as measuring efficiency in time and distance, we also examine user preference. Results show that our dynamic system was a much more efficient method for finding the correct barrel, taking only 62% of the time on average. Participants also favoured our dynamic system.

    A systematic review of familiarisation methods used in human-robot interactions for autistic participants

    There is a growing need for standardised familiarisation techniques within the Human-Robot Interaction (HRI) community. This is particularly the case for autistic participants, who may have difficulties with the novelty and sensory stimulation associated with meeting a robot. Familiarisation techniques should be considered critical to research, both from an ethical perspective and for research best practice, and they are also important in applied settings. In the absence of standardised familiarisation protocols, we conducted a systematic review in accordance with PRISMA guidelines to better understand the range of familiarisation methods used in studies of HRIs with autistic participants. We searched four databases: PubMed, Scopus, Web of Science and ScienceDirect. We identified 387 articles that involved HRIs with autistic participants. The majority (n = 285) did not mention a familiarisation phase. A further 52 mentioned including familiarisation but gave no description, and 50 studies described their familiarisation. Based on a synthesis of these papers, we identified six commonly used familiarisation techniques. Using co-production with the autistic community and other participant groups, future studies should validate and critically evaluate the approaches identified in this review. To facilitate improved reporting and critical evaluation of familiarisation approaches across studies, we have set up a familiarisation repository.

    Guidelines for Designing Social Robots as Second Language Tutors

    In recent years, it has been suggested that social robots have potential as tutors and educators for both children and adults. While robots have been shown to be effective in teaching knowledge- and skill-based topics, we wish to explore how social robots can be used to tutor a second language to young children. As language learning relies on situated, grounded and social learning, in which interaction and repeated practice are central, social robots hold promise as educational tools for supporting second language learning. This paper surveys the developmental psychology of second language learning and suggests an agenda for studying how core concepts of second language learning can be taught by a social robot. It proposes guidelines for designing robot tutors based on observations of second language learning in human–human scenarios, various technical considerations, and early studies on the effectiveness of social robots as second language tutors.

    Public perception of autonomous vehicle capability determines judgment of blame and trust in road traffic accidents

    Road accidents involving autonomous vehicles (AVs) will not only introduce legal challenges over liability distribution but also diminish public trust, which may manifest itself in slowing the initial adoption of the technology and calling into question its continued adoption. Understanding the public's reactions to such incidents, and especially how they differ from reactions to incidents involving conventional vehicles, is vital for future policy-making and legislation, which will in turn shape the landscape of the autonomous vehicle industry. In this paper, intuitive judgments of blame and trust were investigated in simulated scenarios of road-traffic accidents involving either autonomous vehicles or human-driven vehicles. In an initial study, five of six scenarios showed more blame and less trust attributed to autonomous vehicles, despite the scenarios being identical in antecedents and consequences to those with a human driver. In one scenario this asymmetry was sharply reversed, an anomaly shown in a follow-up experiment to depend on the extent to which the incident was foreseeable by the human driver. More generally, these studies show that, rather than being the result of a universally higher performance standard applied to autonomous vehicles, blame and trust are shaped by stereotypical conceptions of the capabilities of machines versus humans, applied in a context-specific way that may or may not align with the objectively derived state of affairs. These findings point to the necessity of regularly calibrating the public's knowledge and expectations of autonomous vehicles through educational campaigns and legislative measures mandating user training and timely disclosure from car manufacturers and developers regarding their products' capabilities.
    License: CC BY 4.0. Funder: ESRC-JST project "Rule of Law in the Age of AI: Principles of Distributive Liability for Multi-Agent Societies".