Tree Memory Networks for Modelling Long-term Temporal Dependencies
In the domain of sequence modelling, Recurrent Neural Networks (RNN) have
been capable of achieving impressive results in a variety of application areas
including visual question answering, part-of-speech tagging and machine
translation. However, this success in modelling short-term dependencies has not
transferred to application areas such as trajectory prediction, which require
capturing both short-term and long-term relationships. In this paper, we propose
a Tree Memory Network (TMN) for modelling long-term and short-term relationships
in sequence-to-sequence mapping problems. The proposed
network architecture is composed of an input module, controller and a memory
module. In contrast to related literature, which models the memory as a
sequence of historical states, we model the memory as a recursive tree
structure. This hierarchy captures temporal dependencies across both short-term
and long-term sequences more effectively. We demonstrate the effectiveness and
flexibility of the proposed TMN
in two practical problems, aircraft trajectory modelling and pedestrian
trajectory modelling in a surveillance setting, and in both cases we outperform
the current state-of-the-art. Furthermore, we perform an in-depth analysis of
the evolution of the memory module content over time and provide visual
evidence of how the proposed TMN maps both long-term and short-term
relationships efficiently via its hierarchical structure.
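As a rough illustration of the idea, the sketch below builds a binary tree over recent controller states and lets a query attend over every node, so higher levels summarise longer temporal spans. The class name, layer sizes, and attention read-out are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a tree-structured memory (illustrative, not the TMN code):
# leaf cells hold recent hidden states, each parent summarises its two children,
# and a query attends over all nodes so both short- and long-range summaries
# can contribute to the read-out.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TreeMemorySketch(nn.Module):
    def __init__(self, hidden_dim: int, num_leaves: int = 8):
        super().__init__()
        assert (num_leaves & (num_leaves - 1)) == 0, "power of two for a full binary tree"
        self.num_leaves = num_leaves
        # Combines two child summaries into one parent summary.
        self.combine = nn.Linear(2 * hidden_dim, hidden_dim)
        # Scores each tree node against the controller's query state.
        self.attend = nn.Linear(2 * hidden_dim, 1)

    def build_tree(self, leaves):
        # leaves: (batch, num_leaves, hidden_dim) -> list of levels, root last.
        levels = [leaves]
        level = leaves
        while level.size(1) > 1:
            left, right = level[:, 0::2], level[:, 1::2]
            level = torch.tanh(self.combine(torch.cat([left, right], dim=-1)))
            levels.append(level)
        return levels

    def read(self, leaves, query):
        # Attend over every node of every level of the tree.
        nodes = torch.cat(self.build_tree(leaves), dim=1)           # (B, 2L-1, H)
        q = query.unsqueeze(1).expand(-1, nodes.size(1), -1)
        scores = self.attend(torch.cat([nodes, q], dim=-1)).squeeze(-1)
        weights = F.softmax(scores, dim=1)                           # (B, 2L-1)
        return torch.einsum("bn,bnh->bh", weights, nodes)


if __name__ == "__main__":
    mem = TreeMemorySketch(hidden_dim=32, num_leaves=8)
    history = torch.randn(4, 8, 32)   # last 8 controller states per sequence
    query = torch.randn(4, 32)        # current controller state
    print(mem.read(history, query).shape)  # torch.Size([4, 32])
```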
Long-Term Human Video Generation of Multiple Futures Using Poses
Predicting future human behavior from an input human video is a useful task
for applications such as autonomous driving and robotics. While most previous
works predict a single future, multiple futures with different behavior can
potentially occur. Moreover, if the predicted future is too short (e.g., less
than one second), it may not be fully usable by a human or other systems. In
this paper, we propose a novel method for future human pose prediction capable
of predicting multiple long-term futures. This makes the predictions more
suitable for real applications. Also, from the input video and the predicted
human behavior, we generate future videos. First, from an input human video, we
generate sequences of future human poses (i.e., the image coordinates of their
body-joints) via adversarial learning. Adversarial learning suffers from mode
collapse, which makes it difficult to generate a diverse set of poses. We
solve this problem by utilizing two additional inputs to the generator to make
the outputs diverse, namely, a latent code (to reflect various behaviors) and
an attraction point (to reflect various trajectories). In addition, we generate
long-term future human poses using a novel approach based on unidimensional
convolutional neural networks. Finally, we generate an output video based on the
generated poses for visualization. We evaluate the generated future poses and
videos using three criteria (i.e., realism, diversity and accuracy), and show
that our proposed method outperforms other state-of-the-art methods.
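The sketch below shows, under assumed shapes and layer choices, how a pose generator could take a latent code and an attraction point as extra inputs and decode a long-term pose sequence with unidimensional (temporal) convolutions. The class and parameter names are hypothetical and do not reproduce the paper's architecture.

```python
# Illustrative sketch: a generator conditioned on a latent code (behaviour) and
# an attraction point (trajectory) so that repeated sampling yields diverse
# long-term pose futures. 1D convolutions run over the time axis.
import torch
import torch.nn as nn


class PoseGeneratorSketch(nn.Module):
    def __init__(self, num_joints: int = 14, latent_dim: int = 16, horizon: int = 64):
        super().__init__()
        self.horizon = horizon
        pose_dim = num_joints * 2                 # (x, y) image coordinates per joint
        cond_dim = pose_dim + latent_dim + 2      # last pose + latent code + attraction point
        self.expand = nn.Linear(cond_dim, horizon * 32)
        self.temporal = nn.Sequential(            # unidimensional (temporal) convolutions
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, pose_dim, kernel_size=5, padding=2),
        )

    def forward(self, last_pose, latent_code, attraction_point):
        # last_pose: (B, pose_dim), latent_code: (B, latent_dim),
        # attraction_point: (B, 2) image coordinate the trajectory is drawn towards.
        cond = torch.cat([last_pose, latent_code, attraction_point], dim=-1)
        h = self.expand(cond).view(cond.size(0), 32, self.horizon)
        return self.temporal(h).transpose(1, 2)   # (B, horizon, pose_dim)


if __name__ == "__main__":
    gen = PoseGeneratorSketch()
    pose = torch.randn(2, 28)
    # Resampling the latent code and attraction point yields different futures.
    futures = [gen(pose, torch.randn(2, 16), torch.rand(2, 2)) for _ in range(3)]
    print(futures[0].shape)  # torch.Size([2, 64, 28])
```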
Video Time: Properties, Encoders and Evaluation
Time-aware encoding of frame sequences in a video is a fundamental problem in
video understanding. While many have attempted to model time in videos, an explicit
study on quantifying video time is missing. To fill this lacuna, we aim to
evaluate video time explicitly. We describe three properties of video time,
namely a) temporal asymmetry, b) temporal continuity and c) temporal causality.
Based on each we formulate a task able to quantify the associated property.
This allows assessing the effectiveness of modern video encoders, like C3D and
LSTM, in their ability to model time. Our analysis provides insights about
existing encoders while also leading us to propose a new video time encoder,
which is better suited for the video time recognition tasks than C3D and LSTM.
We believe the proposed meta-analysis can provide a reasonable baseline to
assess video time encoders on equal grounds on a set of temporal-aware tasks.
Comment: 14 pages, BMVC 201
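As one plausible instantiation (not necessarily the paper's exact protocol), the temporal-asymmetry property can be cast as a binary task in which an encoder must distinguish forward clips from time-reversed ones. The helper name and the random stand-in data below are illustrative assumptions.

```python
# Sketch of an arrow-of-time task for probing temporal asymmetry: half of the
# clips keep their frame order, half are reversed, and an encoder is scored on
# telling the two apart.
import numpy as np


def make_arrow_of_time_batch(videos, rng):
    """videos: (batch, frames, height, width, channels) float array.
    Returns (clips, labels): label 1 = forward order, 0 = reversed order."""
    labels = rng.integers(0, 2, size=len(videos))
    clips = np.stack([v if y == 1 else v[::-1] for v, y in zip(videos, labels)])
    return clips, labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_videos = rng.random((4, 16, 32, 32, 3)).astype(np.float32)  # stand-in data
    clips, labels = make_arrow_of_time_batch(fake_videos, rng)
    print(clips.shape, labels)
    # Any time-aware encoder (e.g., C3D or an LSTM over frame features) can then
    # be evaluated on how well it separates forward from reversed clips.
```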