1,517 research outputs found
Applying Deep Bidirectional LSTM and Mixture Density Network for Basketball Trajectory Prediction
Data analytics helps basketball teams to create tactics. However, manual data
collection and analytics are costly and ineffective. Therefore, we applied a
deep bidirectional long short-term memory (BLSTM) and mixture density network
(MDN) approach. This model is not only capable of predicting a basketball
trajectory based on real data, but it also can generate new trajectory samples.
It can help coaches and players decide when and where to shoot. Its
structure is particularly suited to time-series problems: a BLSTM
receives forward and backward information at the same
time, while stacking multiple BLSTMs further increases the learning ability of
the model. Combined with BLSTMs, MDN is used to generate a multi-modal
distribution of outputs. Thus, the proposed model can, in principle, represent
arbitrary conditional probability distributions of output variables. We tested
our model with two experiments on three-pointer datasets from NBA SportVu data.
In the hit-or-miss classification experiment, the proposed model outperformed
other models in terms of convergence speed and accuracy. In the trajectory
generation experiment, eight model-generated trajectories at a given time
closely matched real trajectories.
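The mixture density network head described above can be sketched in a few lines. This is a minimal numpy illustration of an MDN output layer, not the authors' implementation: the weight matrices, dimensions, and random inputs are all hypothetical stand-ins for a trained BLSTM's top-layer output.

```python
import numpy as np

K, D, H = 3, 2, 4   # mixture components, output dims (e.g. x, y), hidden size

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def mdn_params(h, W_pi, W_mu, W_sigma):
    """Map a hidden vector h (e.g. the top BLSTM output at one time step)
    to the parameters of a K-component diagonal Gaussian mixture."""
    pi = softmax(W_pi @ h)                      # (K,) mixture weights, sum to 1
    mu = (W_mu @ h).reshape(K, D)               # (K, D) component means
    sigma = np.exp(W_sigma @ h).reshape(K, D)   # (K, D) positive std devs
    return pi, mu, sigma

def mdn_pdf(y, pi, mu, sigma):
    """Density of a target y under the mixture: sum_k pi_k * N(y; mu_k, sigma_k)."""
    comp = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(pi @ comp.prod(axis=1))

# Tiny demo with random weights standing in for a trained network.
rng = np.random.default_rng(0)
W_pi = rng.normal(0, 0.1, (K, H))
W_mu = rng.normal(0, 0.1, (K * D, H))
W_sigma = rng.normal(0, 0.1, (K * D, H))
h = rng.normal(size=H)

pi, mu, sigma = mdn_params(h, W_pi, W_mu, W_sigma)
p = mdn_pdf(np.zeros(D), pi, mu, sigma)
```

Because the output is a full mixture distribution rather than a single point, the model can represent multi-modal futures, e.g. several plausible ball trajectories from the same release point.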
Folded Recurrent Neural Networks for Future Video Prediction
Future video prediction is an ill-posed Computer Vision problem that recently
received much attention. Its main challenges are the high variability in video
content, the propagation of errors through time, and the non-specificity of the
future frames: given a sequence of past frames there is a continuous
distribution of possible futures. This work introduces bijective Gated
Recurrent Units, a double mapping between the input and output of a GRU layer.
This allows for recurrent auto-encoders with state sharing between encoder and
decoder, stratifying the sequence representation and helping to prevent
capacity problems. We show how with this topology only the encoder or decoder
needs to be applied for input encoding and prediction, respectively. This
reduces the computational cost and avoids re-encoding the predictions when
generating a sequence of frames, mitigating the propagation of errors.
Furthermore, it is possible to remove layers from an already trained model,
giving an insight to the role performed by each layer and making the model more
explainable. We evaluate our approach on three video datasets, outperforming
state-of-the-art prediction results on MMNIST and UCF101, and obtaining
competitive results on KTH with 2 and 3 times less memory usage and
computational cost than the best-scoring approach.
Comment: Submitted to European Conference on Computer Vision
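The state-sharing idea can be illustrated with a small sketch: an encoder cell updates a state from each input frame, and a decoder cell generates future frames from that shared state alone, so predictions never need to be re-encoded. This is a conceptual numpy sketch with plain GRU cells, not the authors' bijective GRU; all dimensions and names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Plain GRU cell in numpy (single step)."""
    def __init__(self, in_dim, state_dim, rng):
        shape = (state_dim, in_dim + state_dim)
        self.Wz = rng.normal(0, 0.1, shape)   # update gate weights
        self.Wr = rng.normal(0, 0.1, shape)   # reset gate weights
        self.Wh = rng.normal(0, 0.1, shape)   # candidate state weights
    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)
        r = sigmoid(self.Wr @ xh)
        cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * cand

FRAME, STATE = 8, 16
rng = np.random.default_rng(0)
encoder = GRUCell(FRAME, STATE, rng)   # input frame -> shared state
decoder = GRUCell(0, STATE, rng)       # shared state only -> next state
W_out = rng.normal(0, 0.1, (FRAME, STATE))

# Encoding: only the encoder half runs; the state summarises the past frames.
h = np.zeros(STATE)
for frame in rng.normal(size=(5, FRAME)):   # 5 past "frames"
    h = encoder.step(frame, h)

# Prediction: only the decoder half runs on the shared state, so generated
# frames are never fed back through the encoder, limiting error propagation.
preds = []
for _ in range(3):                          # 3 future frames
    h = decoder.step(np.empty(0), h)
    preds.append(W_out @ h)
preds = np.stack(preds)
```

Because encoder and decoder operate on the same state, layers can in principle be removed after training, which is what makes the per-layer roles inspectable.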