
Motion Planning for Socially Competent Robot Navigation

Abstract

Crowded human environments such as pedestrian scenes constitute challenging domains for mobile robot navigation, for a variety of reasons including the heterogeneity of pedestrians’ decision-making mechanisms, the lack of channels for explicit communication among them, and the absence of universal rules or social conventions regulating traffic. Despite these complications, humans exhibit socially competent navigation through coordination, realized with implicit communication via a variety of modalities such as path shape and body posture. Sophisticated mechanisms of inference and decision-making allow them to understand subtle communication signals and encode them into their own actions. Although the problem of planning socially competent robot navigation has received significant attention over the past three decades, state-of-the-art approaches tend to focus explicitly on reproducing selected social norms or directly imitating observed human behaviors, while often lacking extensive and thorough validation, thus raising questions about their generalization and reproducibility. This thesis introduces a family of planning algorithms inspired by studies on human navigation. Our algorithms are designed to produce socially competent robot navigation behaviors by leveraging the existing mechanisms of implicit coordination in humans. We model multi-agent motion coordination through a series of data structures, based on mathematical abstractions from low-dimensional topology and physics, that capture fundamental properties of multi-agent collision avoidance. These models enable a robot to anticipate the effects of its actions on the inference and decision-making processes of nearby agents, and allow for the generation of motion that is compliant with the unfolding evolution of the scene and consistent with the robot’s intentions. The introduced planning algorithms are supported by extensive simulated and experimental validation. Key findings include: (1) evidence from a series of simulated studies, suggesting that the outlined planning architecture indeed results in effective coordination within groups of non-communicating agents across a variety of simulated scenarios; (2) evidence from an online, video-based user study with more than 180 participants, indicating that humans perceive the motion generated by our framework as intent-expressive; (3) evidence from an experimental study, conducted in a controlled lab environment with 105 human participants, suggesting that humans follow low-acceleration paths when navigating next to a robot running our framework.
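As an illustrative aside, not drawn from the thesis itself: one standard low-dimensional-topology descriptor of pairwise collision avoidance is the winding angle of the relative position vector between two agents, whose sign records which side the agents pass each other on. The Python/NumPy sketch below, with a hypothetical function name pairwise_winding_angle, shows how such a quantity can be computed from two synchronized planar trajectories; it is a minimal sketch of one possible topological abstraction, not the data structures actually introduced in the thesis.

    import numpy as np

    def pairwise_winding_angle(traj_a, traj_b):
        # traj_a, traj_b: arrays of shape (T, 2) holding synchronized 2D positions.
        # Returns the total signed rotation (radians) of the vector from A to B;
        # its sign distinguishes a left-side pass from a right-side pass.
        rel = np.asarray(traj_b, dtype=float) - np.asarray(traj_a, dtype=float)
        angles = np.unwrap(np.arctan2(rel[:, 1], rel[:, 0]))
        return float(angles[-1] - angles[0])

    # Example: A walks right along y = 0 while B walks left along y = 1,
    # so the pair passes each other once; the result is about +2.65 rad,
    # the positive sign meaning B sweeps counterclockwise around A.
    t = np.linspace(0.0, 1.0, 50)
    a = np.stack([4.0 * t, np.zeros_like(t)], axis=1)
    b = np.stack([4.0 - 4.0 * t, np.ones_like(t)], axis=1)
    print(pairwise_winding_angle(a, b))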
