Pedestrian Trajectory Prediction Using Dynamics-based Deep Learning
Pedestrian trajectory prediction plays an important role in autonomous
driving systems and robotics. Recent work utilising prominent deep learning
models for pedestrian motion prediction makes limited a priori assumptions
about human movements, resulting in a lack of explainability and explicit
constraints enforced on predicted trajectories. This paper presents a
dynamics-based deep learning framework where a novel asymptotically stable
dynamical system is integrated into a deep learning model. This system models
human goal-targeted motion by enforcing that the predicted walking trajectory
converges to a predicted goal position, providing the deep learning model with
prior knowledge and explainability. Our deep learning model utilises recent
innovations from
transformer networks and is used to learn some features of human motion, such
as collision avoidance, for our proposed dynamical system. The experimental
results show that our framework outperforms recent prominent models in
pedestrian trajectory prediction on five benchmark human motion datasets.Comment: 8 pages (including references), 5 figures, submitted to ICRA2024 for
revie
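The paper's central constraint, convergence of the predicted trajectory to a goal position, can be illustrated with a minimal asymptotically stable discrete-time system. This is a sketch of the general idea only, not the paper's actual model; the contraction gain and step count are illustrative assumptions.

```python
import numpy as np

def goal_convergent_rollout(x0, goal, steps=20, gain=0.3):
    """Roll out x_{t+1} = x_t + gain * (goal - x_t).

    With 0 < gain < 1 the distance to `goal` shrinks by a factor
    (1 - gain) at every step, so the trajectory is guaranteed to
    converge to the goal (asymptotic stability of the goal point).
    """
    goal = np.asarray(goal, dtype=float)
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x = traj[-1]
        traj.append(x + gain * (goal - x))
    return np.stack(traj)

# Example: a 2-D walker pulled toward goal (4, 2).
traj = goal_convergent_rollout([0.0, 0.0], [4.0, 2.0], steps=30)
d = np.linalg.norm(traj - np.array([4.0, 2.0]), axis=1)  # distance to goal
```

In the paper's framework, a learned network would add context-dependent terms (e.g. collision avoidance) on top of such a stable base system, so the convergence guarantee supplies the prior structure while the network supplies the details.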
Transformer Networks for Trajectory Forecasting
Most recent successes in forecasting people's motion are based on LSTM
models, and most recent progress has been achieved by modelling the social
interaction among people and the interaction of people with the scene. We
question the use of LSTM models and propose the novel use of Transformer
Networks for trajectory forecasting. This is a fundamental switch from the
sequential step-by-step processing of LSTMs to the attention-only memory
mechanisms of Transformers. In particular, we consider both the original
Transformer
Network (TF) and the larger Bidirectional Transformer (BERT), state-of-the-art
on all natural language processing tasks. Our proposed Transformers predict the
trajectories of the individual people in the scene. These are "simple" models
because each person is modelled separately, without any complex human-human or
human-scene interaction terms. In particular, the TF model without bells and whistles
yields the best score on the largest and most challenging trajectory
forecasting benchmark of TrajNet. Additionally, its extension, which predicts
multiple plausible future trajectories, performs on par with more engineered
techniques on the five datasets of ETH + UCY. Finally, we show that
Transformers can deal with missing observations, as may be the case with real
sensor data. Code is available at https://github.com/FGiuliari/Trajectory-Transformer.
Comment: 18 pages, 3 figures
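The "attention-only memory mechanism" the abstract contrasts with LSTMs is scaled dot-product attention, where each query looks at the whole observed history in one step rather than through sequential recurrence. Below is a minimal sketch of that operation applied to a toy track; the use of raw positions as queries/keys and per-step displacements as values is an illustrative assumption, not the paper's architecture.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Core Transformer operation: each query attends over all
    memory slots simultaneously (no step-by-step recurrence)."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)       # softmax attention weights
    return w @ v                             # convex blend of the values

# Toy use: blend past per-step displacements into a next-step guess.
obs = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2], [3.0, 0.3]])
vel = np.diff(obs, axis=0)                   # observed displacements
out = scaled_dot_product_attention(obs[-1:], obs[:-1], vel)
next_pos = obs[-1] + out[0]                  # attention-weighted extrapolation
```

A real TF/BERT predictor stacks many such attention layers with learned projections and positional encodings, but the parallel access to the full observation window, as opposed to an LSTM's hidden-state bottleneck, is already visible here.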
Human motion trajectory prediction using the Social Force Model for real-time and low computational cost applications
Human motion trajectory prediction is a very important functionality for
human-robot collaboration, specifically in accompanying, guiding, or
approaching tasks, but also in social robotics, self-driving vehicles, or
security systems. In this paper, a novel trajectory prediction model, Social
Force Generative Adversarial Network (SoFGAN), is proposed. SoFGAN uses a
Generative Adversarial Network (GAN) and Social Force Model (SFM) to generate
diverse plausible pedestrian trajectories while reducing collisions in a
scene. Furthermore, a Conditional Variational Autoencoder (CVAE) module is
added to emphasize destination learning. We show that our method makes more
accurate predictions on the UCY and BIWI datasets than most current
state-of-the-art models and also reduces collisions in comparison to other
approaches. Through real-life experiments, we demonstrate that the model can be
used in real time without GPUs to produce good-quality predictions at a low
computational cost.
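The Social Force Model that SoFGAN builds on treats each pedestrian as driven by an attraction toward a goal plus exponential repulsion from other pedestrians. The sketch below is a simplified, generic SFM step in the style of Helbing and Molnár; the parameter values (A, B, tau, desired speed) are illustrative assumptions, not those used in the paper.

```python
import numpy as np

def social_force(pos, vel, goal, others, desired_speed=1.3, tau=0.5,
                 A=2.0, B=0.3):
    """One simplified Social Force step: relaxation toward the desired
    velocity plus exponential repulsion from nearby pedestrians."""
    to_goal = goal - pos
    desired_vel = desired_speed * to_goal / (np.linalg.norm(to_goal) + 1e-9)
    force = (desired_vel - vel) / tau         # driving (goal) term
    for other in others:
        diff = pos - other
        dist = np.linalg.norm(diff) + 1e-9
        force += A * np.exp(-dist / B) * diff / dist  # repulsive term
    return force

# A pedestrian at the origin heading to (5, 0):
f_free = social_force(np.array([0., 0.]), np.array([0., 0.]),
                      np.array([5., 0.]), [])
# Same situation with another pedestrian directly in the way:
f_blocked = social_force(np.array([0., 0.]), np.array([0., 0.]),
                         np.array([5., 0.]), [np.array([0.3, 0.])])
```

In SoFGAN these physically interpretable forces shape the GAN's generated trajectories, which is what reduces collisions relative to purely learned generators.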
CLiFF-LHMP: Using Spatial Dynamics Patterns for Long-Term Human Motion Prediction
Human motion prediction is important for mobile service robots and
intelligent vehicles to operate safely and smoothly around people. The more
accurate predictions are, particularly over extended periods of time, the
better a system can, e.g., assess collision risks and plan ahead. In this
paper, we propose to exploit maps of dynamics (MoDs, a class of general
representations of place-dependent spatial motion patterns, learned from prior
observations) for long-term human motion prediction (LHMP). We present a new
MoD-informed human motion prediction approach, named CLiFF-LHMP, which is data
efficient, explainable, and insensitive to errors from an upstream tracking
system. Our approach uses CLiFF-map, a specific MoD trained with human motion
data recorded in the same environment. We bias a constant velocity prediction
with samples from the CLiFF-map to generate multi-modal trajectory predictions.
On two public datasets we show that this algorithm outperforms the state of
the art for predictions over very extended periods of time, achieving 45% more
accurate predictions at 50 s compared to the baseline.
Comment: Accepted to the 2023 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS)
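The core CLiFF-LHMP idea, biasing a constant-velocity rollout with motion directions sampled from a map of dynamics, can be sketched as follows. The blending rule, the `beta` weight, and the hand-supplied sample directions are illustrative assumptions; the actual method samples from a learned CLiFF-map rather than a fixed direction list.

```python
import numpy as np

def biased_cv_predictions(pos, vel, sample_dirs, horizon=10, dt=1.0,
                          beta=0.5):
    """Multi-modal prediction sketch: nudge a constant-velocity rollout
    toward each motion direction sampled from a (hypothetical) map of
    dynamics. beta = 0 is pure constant velocity; beta = 1 follows the
    sampled direction at the observed speed."""
    pos, vel = np.asarray(pos, float), np.asarray(vel, float)
    speed = np.linalg.norm(vel)
    preds = []
    for d in sample_dirs:
        d = np.asarray(d, float)
        d = d / (np.linalg.norm(d) + 1e-9)
        v = (1 - beta) * vel + beta * speed * d   # biased velocity
        traj = pos + dt * np.outer(np.arange(1, horizon + 1), v)
        preds.append(traj)
    return np.stack(preds)                        # one rollout per sample

# Walker at the origin moving right; map suggests "right" and "up" flows:
preds = biased_cv_predictions([0., 0.], [1., 0.],
                              [[1., 0.], [0., 1.]], horizon=3)
```

Because each sampled flow direction yields its own rollout, the output is naturally multi-modal, and the underlying constant-velocity base keeps the method data efficient and robust to tracking noise, as the abstract claims.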