Social and Scene-Aware Trajectory Prediction in Crowded Spaces
Mimicking the human ability to forecast future positions and interpret complex
interactions in urban scenarios, such as streets, shopping malls or squares, is
essential for developing socially compliant robots and self-driving cars.
Autonomous systems benefit from anticipating human motion, both to avoid
collisions and to behave naturally alongside people. To foresee plausible trajectories, we
construct an LSTM (long short-term memory)-based model considering three
fundamental factors: people interactions, past observations in terms of
previously crossed areas and semantics of surrounding space. Our model
encompasses several pooling mechanisms to join the above elements defining
multiple tensors, namely social, navigation and semantic tensors. The network
is tested in unstructured environments where complex paths emerge according to
both internal (intentions) and external (other people, not accessible areas)
motivations. As we demonstrate, models unaware of social interactions or
contextual information are insufficient to correctly predict future positions.
Experimental results corroborate the effectiveness of the proposed framework in
comparison to LSTM-based models for human path prediction.
Comment: Accepted to ICCV 2019 Workshop on Assistive Computer Vision and Robotics (ACVR)
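The pooling idea described above, joining a social tensor with navigation and semantic features into one recurrent input, could be sketched roughly as follows. The grid layout, feature dimensions, and the placeholder navigation/semantic vectors are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def social_pool(hidden, positions, ego, grid=4, radius=2.0):
    """Max-pool neighbour hidden states into a grid x grid social tensor
    centred on the ego pedestrian (hypothetical pooling layout)."""
    d = hidden.shape[1]
    tensor = np.zeros((grid, grid, d))
    cell = 2 * radius / grid
    for h, p in zip(hidden, positions):
        dx, dy = p - ego
        if abs(dx) >= radius or abs(dy) >= radius:
            continue  # neighbour outside the pooling window
        i = int((dx + radius) // cell)
        j = int((dy + radius) // cell)
        tensor[i, j] = np.maximum(tensor[i, j], h)
    return tensor

# toy example: two neighbours with 3-dim hidden states
h = np.array([[1.0, 0.0, 2.0], [0.5, 3.0, 0.1]])
pos = np.array([[0.5, 0.5], [-1.5, 0.2]])
social = social_pool(h, pos, ego=np.array([0.0, 0.0]))

# concatenate the flattened social tensor with (assumed) navigation and
# semantic feature vectors to form the LSTM input at this timestep
nav = np.zeros(8)        # past-occupancy features (placeholder)
sem = np.zeros(6)        # scene-semantics features (placeholder)
lstm_input = np.concatenate([social.ravel(), nav, sem])
print(lstm_input.shape)  # (4*4*3 + 8 + 6,) = (62,)
```

The combined vector would then drive one step of an LSTM; the recurrent cell itself is omitted here for brevity.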
HGCN-GJS: Hierarchical Graph Convolutional Network with Groupwise Joint Sampling for Trajectory Prediction
Accurate pedestrian trajectory prediction is of great importance for
downstream tasks such as autonomous driving and mobile robot navigation. Fully
investigating the social interactions within the crowd is crucial for accurate
pedestrian trajectory prediction. However, most existing methods focus only on
pairwise interactions and neglect group-wise interactions. In this work, we
propose HGCN-GJS, a hierarchical graph convolutional network for trajectory
prediction that leverages group-level interactions within the crowd. Furthermore, we introduce
a novel joint sampling scheme for modeling the joint distribution of the
future trajectories of multiple pedestrians. Based on the group information,
this scheme couples the trajectory of each person with those of the other
members of their group, while keeping the trajectories of outsiders
independent. We demonstrate the performance of our network on several trajectory
prediction datasets, achieving state-of-the-art results on all datasets
considered.
Comment: 8 pages, 5 figures, in submission to conference
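The groupwise joint sampling idea can be sketched minimally: members of the same group share one latent noise vector, so their sampled futures are correlated, while pedestrians outside the group draw independent noise. The group-id representation and latent dimension here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def groupwise_joint_sample(groups, latent_dim):
    """Draw one latent noise vector per group so the trajectories of
    group members are correlated, while pedestrians in different groups
    (or walking alone) stay independent. `groups` maps each pedestrian
    index to a group id (hypothetical representation)."""
    n = len(groups)
    z = np.empty((n, latent_dim))
    group_noise = {}
    for i, g in enumerate(groups):
        if g not in group_noise:
            group_noise[g] = rng.standard_normal(latent_dim)
        z[i] = group_noise[g]
    return z

# pedestrians 0 and 1 walk together (group 0); pedestrian 2 is alone
z = groupwise_joint_sample([0, 0, 1], latent_dim=4)
print(np.allclose(z[0], z[1]))  # True: same group, same sample
print(np.allclose(z[0], z[2]))  # False: outsider is independent
```

In a full model this shared latent would condition the decoder that generates each group member's future trajectory.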
How Do We Move: Modeling Human Movement with System Dynamics
Modeling how humans move in space is useful for policy-making in
transportation, public safety, and public health. Human movement can be viewed
as a dynamic process in which humans transition between states (e.g.,
locations) over time. In a world where intelligent agents such as humans, or
vehicles with human drivers, play an important role, the states of agents mostly describe
human drivers play an important role, the states of agents mostly describe
human activities, and the state transition is influenced by both human
decisions and physical constraints from the real-world system (e.g., agents
need time to move over a certain distance). Therefore, the modeling of
state transition should include the modeling of the agent's decision process
and the physical system dynamics. In this paper, we propose to model state
transitions in human movement from a novel perspective, by learning the
decision model and integrating the system dynamics. Our model learns human
movement with Generative Adversarial Imitation Learning and integrates the
stochastic constraints from system dynamics into the learning process. To the
best of our knowledge, we are the first to learn to model the state transition
of moving agents with system dynamics. In extensive experiments on real-world
datasets, we demonstrate that the proposed method can generate trajectories
similar to real-world ones, and outperform the state-of-the-art methods in
predicting the next location and generating long-term future trajectories.
Comment: Accepted by AAAI 2021, appendices included. 12 pages, 8 figures. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI'21), Feb 2021
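A simple way to picture the physical-constraint side of this idea: whatever next location the learned policy proposes, the system dynamics cap how far the agent can actually travel in one step. The projection rule and speed limit below are simplified stand-ins for the paper's stochastic dynamics constraints.

```python
import numpy as np

def constrained_transition(state, proposed_next, max_speed, dt=1.0):
    """Project a policy-proposed next location back onto the set of
    physically reachable states: an agent cannot move farther than
    max_speed * dt in one step (a simplified, deterministic stand-in
    for stochastic system-dynamics constraints)."""
    step = proposed_next - state
    dist = np.linalg.norm(step)
    limit = max_speed * dt
    if dist <= limit:
        return proposed_next
    # clip the step to the reachable radius, keeping its direction
    return state + step * (limit / dist)

s = np.array([0.0, 0.0])
nxt = constrained_transition(s, np.array([10.0, 0.0]), max_speed=3.0)
print(nxt)  # [3. 0.]: the 10 m jump is clipped to the 3 m/step limit
```

In an imitation-learning loop, this projection would sit between the generator's action and the environment's state update, so generated trajectories remain physically plausible.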
Multiple Trajectory Prediction of Moving Agents with Memory Augmented Networks
Pedestrians and drivers are expected to safely navigate complex urban environments alongside several non-cooperating agents. Autonomous vehicles will soon replicate this capability. Each agent acquires a representation of the world from an egocentric perspective and must make decisions that ensure safety for itself and others. This requires predicting the motion patterns of observed agents far enough into the future. In this paper we propose MANTRA, a model that exploits memory augmented networks to effectively predict multiple trajectories of other agents, observed from an egocentric perspective. Our model stores observations in memory and uses trained controllers to write meaningful pattern encodings and read the trajectories most likely to occur in the future. We show that our method natively performs multi-modal trajectory prediction, obtaining state-of-the-art results on four datasets. Moreover, thanks to the non-parametric nature of the memory module, we show how, once trained, our system can continuously improve by ingesting novel patterns.
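The write/read mechanism of an associative trajectory memory can be sketched as follows: keys encode observed pasts, values encode futures, and reading returns the futures whose keys best match the query. The cosine-similarity lookup and raw-coordinate "encodings" are illustrative simplifications, not MANTRA's learned controllers.

```python
import numpy as np

class TrajectoryMemory:
    """Minimal associative memory in the spirit of memory-augmented
    trajectory prediction: keys encode observed pasts, values encode
    futures; reading returns the values whose keys are closest to the
    query under cosine similarity."""
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, past_enc, future_enc):
        self.keys.append(np.asarray(past_enc, float))
        self.values.append(np.asarray(future_enc, float))

    def read(self, query, k=2):
        q = np.asarray(query, float)
        sims = [q @ key / (np.linalg.norm(q) * np.linalg.norm(key) + 1e-9)
                for key in self.keys]
        order = np.argsort(sims)[::-1][:k]  # k most similar keys
        return [self.values[i] for i in order]

mem = TrajectoryMemory()
mem.write([1.0, 0.0], [2.0, 0.0])   # straight-ahead pattern
mem.write([0.0, 1.0], [0.0, 2.0])   # turning pattern
futures = mem.read([0.9, 0.1], k=1)
print(futures[0])  # [2. 0.]: closest stored past is the straight one
```

Because the memory is non-parametric, adding a novel pattern is just another `write` call after training, which is the sense in which such a system can keep improving online.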