4 research outputs found

    Toward Human-Like Social Robot Navigation: A Large-Scale, Multi-Modal, Social Human Navigation Dataset

    Humans are adept at navigating public spaces shared with others, where current autonomous mobile robots still struggle: while safely and efficiently reaching their goals, humans communicate their intentions and conform to unwritten social norms on a daily basis; conversely, robots become clumsy in those everyday social scenarios, getting stuck in dense crowds, surprising nearby pedestrians, or even causing collisions. While recent research on robot learning has shown promise in data-driven social robot navigation, good-quality training data is still difficult to acquire through either trial and error or expert demonstrations. In this work, we propose to utilize the body of rich, widely available social human navigation data in many natural human-inhabited public spaces for robots to learn similar, human-like, socially compliant navigation behaviors. Specifically, we design an open-source egocentric data-collection sensor suite wearable by walking humans to provide multi-modal robot perception data; we collect a large-scale (~50 km, 10 hours, 150 trials, 7 humans) dataset in a variety of public spaces containing numerous natural social navigation interactions; and we analyze our dataset, demonstrate its usability, and point out future research directions and use cases.

    SPATIAL PERCEPTION AND ROBOT OPERATION: THE RELATIONSHIP BETWEEN VISUAL SPATIAL ABILITY AND PERFORMANCE UNDER DIRECT LINE OF SIGHT AND TELEOPERATION

    This dissertation investigated the relationship between operators' spatial perception abilities and robot operation under direct-line-of-sight and teleoperation viewing conditions. The study sought to determine whether spatial ability testing may be a useful tool in the selection of human-robot interaction (HRI) operators. Participants completed eight cognitive ability measures and operated one of four types of robots on tasks of low and high difficulty. Each participant's performance was tested under both direct-line-of-sight and teleoperation conditions. The results provide additional evidence that spatial perception abilities are reliable predictors of direct-line-of-sight and teleoperation performance. Participants with higher spatial abilities performed faster, with fewer errors and less variability, and were more successful in accumulating points. Applications of these findings are discussed in terms of teleoperator selection tools and HRI training and design recommendations within a human-centered design approach.