
    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents and planning with such predictions in mind are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of existing datasets and performance metrics. We discuss the limitations of the state of the art and outline directions for further research. Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
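    As a concrete illustration (not taken from the survey itself), the sketch below shows a constant-velocity baseline, one of the simplest physics-based motion models such a taxonomy covers, together with the average displacement error, a metric commonly used for trajectory predictors. The function names, the 0.4 s time step and the example track are illustrative assumptions.

    import numpy as np

    def constant_velocity_predict(track, horizon, dt=0.4):
        """Extrapolate a 2D position track with a constant-velocity model.
        track: (T, 2) observed positions; horizon: number of future steps;
        dt: time step in seconds (0.4 s is a common pedestrian-dataset rate)."""
        velocity = (track[-1] - track[-2]) / dt            # last observed velocity
        steps = np.arange(1, horizon + 1).reshape(-1, 1)   # (horizon, 1)
        return track[-1] + steps * velocity * dt           # (horizon, 2) predictions

    def average_displacement_error(pred, truth):
        """Mean Euclidean distance between predicted and ground-truth positions."""
        return float(np.mean(np.linalg.norm(pred - truth, axis=-1)))

    # Example: a pedestrian walking roughly along +x at about 1.25 m/s.
    observed = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2]])
    predicted = constant_velocity_predict(observed, horizon=8)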

    Limited Visibility and Uncertainty Aware Motion Planning for Automated Driving

    Adverse weather conditions and occlusions in urban environments result in impaired perception. The resulting uncertainties are handled in different modules of an automated vehicle, ranging from the sensor level through situation prediction to motion planning. This paper focuses on motion planning given an uncertain environment model with occlusions. We present a method to remain collision-free for the worst-case evolution of the given scene. We define criteria that measure the available margins to a collision while considering visibility and interactions, and integrate conditions that apply these criteria into an optimization-based motion planner. We show the generality of our method by validating it in several distinct urban scenarios.
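    The abstract does not spell out the margin criteria or the optimizer constraints, so the following is only a hypothetical sketch of the kind of worst-case check such a planner might enforce: the ego vehicle keeps a speed at which it can still stop before a conflict point that a hidden agent could reach from the occluded area. All names and parameter values are assumptions made for illustration.

    import math

    def max_safe_speed(d_conflict, a_brake, t_react):
        """Largest ego speed v that still allows a full stop before the conflict
        point, i.e. v * t_react + v**2 / (2 * a_brake) <= d_conflict."""
        return a_brake * (math.sqrt(t_react**2 + 2.0 * d_conflict / a_brake) - t_react)

    def collision_margin(d_conflict, v_ego, a_brake, t_react):
        """Distance left over after a worst-case emergency stop; a negative value
        means the occluded conflict point can no longer be avoided."""
        stopping_distance = v_ego * t_react + v_ego**2 / (2.0 * a_brake)
        return d_conflict - stopping_distance

    # Example: conflict point 25 m ahead, 6 m/s^2 braking, 0.5 s reaction time.
    v_limit = max_safe_speed(25.0, 6.0, 0.5)   # about 14.6 m/s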

    Reinforcement Learning for Mobile Robot Collision Avoidance in Navigation Tasks

    Collision avoidance is fundamental to mobile robot navigation. Its solutions fall into two categories: map-based and mapless approaches. In the map-based approach, robots pre-plan collision-free paths based on an environment map and follow those paths during navigation. The mapless approach, in contrast, requires robots to avoid collisions without reference to an environment map. This thesis first studies the map-based approach for multiple robots collectively building environment maps. In this study, a robot following a pre-planned path may encounter unexpected obstacles, such as other moving robots or obstacles inaccurately represented in the map. This motivates the study of mapless collision avoidance in the second part of the thesis. Mapless collision avoidance requires a robot to infer an optimal action from sensor data and operate in real time. Inferring an optimal action in a timely manner is computationally expensive, particularly when a robot has limited on-board computing resources. To avoid expensive online action inference, this thesis presents a reinforcement learning approach that learns policies for mapless collision avoidance in real-world settings. We first propose a Real-Time Actor-Critic Architecture (RTAC) to support asynchronous reinforcement learning under real-time constraints. Based on RTAC, we then propose asynchronous reinforcement learning methods for mapless collision avoidance with varying numbers of robots under different environment configurations. Through extensive experiments, we demonstrate that RTAC serves as a solid foundation for multi-task and multi-agent learning for mapless collision avoidance under asynchronous settings.
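    The thesis's RTAC architecture is not reproduced in the abstract; the sketch below is instead a generic one-step advantage actor-critic update of the kind such asynchronous learners build on, with a policy that maps a laser scan and a relative goal to a velocity command. The network sizes, fixed exploration noise and all names are illustrative assumptions (PyTorch).

    import torch
    import torch.nn as nn

    class Policy(nn.Module):
        """Maps a laser scan and a relative goal to a velocity command plus a
        state value (hypothetical sizes; this is not the thesis's RTAC)."""
        def __init__(self, n_beams=180, n_actions=2):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(n_beams + 2, 256), nn.ReLU(),
                nn.Linear(256, 128), nn.ReLU(),
            )
            self.mu = nn.Linear(128, n_actions)   # mean linear and angular velocity
            self.value = nn.Linear(128, 1)        # critic head: state-value estimate

        def forward(self, scan, goal):
            h = self.body(torch.cat([scan, goal], dim=-1))
            return self.mu(h), self.value(h)

    def actor_critic_loss(policy, scan, goal, action, reward, next_value, gamma=0.99):
        """One-step advantage actor-critic loss; asynchronous workers would each
        compute this on their own rollouts and apply it to shared parameters.
        next_value is assumed to be a detached bootstrap value estimate."""
        mu, value = policy(scan, goal)
        dist = torch.distributions.Normal(mu, 0.1)   # fixed exploration noise (assumption)
        advantage = reward + gamma * next_value - value.squeeze(-1)
        actor_loss = -(dist.log_prob(action).sum(-1) * advantage.detach()).mean()
        critic_loss = advantage.pow(2).mean()
        return actor_loss + 0.5 * critic_loss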

    Deep Network Uncertainty Maps for Indoor Navigation

    Most mobile robots for indoor use rely on 2D laser scanners for localization, mapping and navigation. These sensors, however, cannot detect transparent surfaces or measure the full occupancy of complex objects such as tables. Deep Neural Networks have recently been proposed to overcome this limitation by learning to estimate object occupancy. These estimates are nevertheless subject to uncertainty, so evaluating their confidence is essential if the measures are to be useful for autonomous navigation and mapping. In this work we approach the problem from two sides. First, we discuss uncertainty estimation in deep models, proposing a solution based on a fully convolutional neural network. The proposed architecture is not restricted by the assumption that the uncertainty follows a Gaussian model, as is the case for many popular solutions to deep model uncertainty estimation, such as Monte-Carlo Dropout. We present results showing that uncertainty over obstacle distances is in fact better modeled with a Laplace distribution. We then propose a novel approach to building maps based on Deep Neural Network uncertainty models. In particular, we present an algorithm to build a map that includes information about obstacle distance estimates while taking into account the level of uncertainty in each estimate. We show how the constructed map can be used to increase global navigation safety by planning trajectories that avoid areas of high uncertainty, enabling higher autonomy for mobile robots in indoor settings. Comment: Accepted for publication in the 2019 IEEE-RAS International Conference on Humanoid Robots (Humanoids)
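    The abstract's key modeling choice, a Laplace distribution over obstacle distances, can be illustrated with the per-pixel negative log-likelihood a fully convolutional network could be trained with; predicting the log-scale keeps the scale positive, and a large scale marks an estimate as uncertain, which a map builder could then penalize. This is a minimal sketch under those assumptions, not the paper's actual loss or architecture (PyTorch); all names are hypothetical.

    import math
    import torch

    def laplace_nll(mu, log_b, target):
        """Negative log-likelihood of a Laplace distribution with location mu and
        scale b = exp(log_b), averaged over pixels:
            -log p(x) = log(2 * b) + |x - mu| / b
        A fully convolutional network would output mu and log_b as two per-pixel
        channels; pixels with large b correspond to uncertain distance estimates."""
        b = torch.exp(log_b)
        return (log_b + math.log(2.0) + torch.abs(target - mu) / b).mean()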