Folded Recurrent Neural Networks for Future Video Prediction
Future video prediction is an ill-posed computer vision problem that has
recently received much attention. Its main challenges are the high variability
in video content, the propagation of errors through time, and the
non-specificity of the future frames: given a sequence of past frames, there is
a continuous distribution of possible futures. This work introduces bijective
Gated Recurrent Units, a double mapping between the input and output of a GRU
layer. This enables recurrent auto-encoders with state sharing between encoder
and decoder, stratifying the sequence representation and helping to prevent
capacity problems. We show that with this topology only the encoder needs to be
applied for input encoding, and only the decoder for prediction. This reduces
the computational cost and avoids re-encoding the predictions when generating a
sequence of frames, mitigating the propagation of errors.
Furthermore, it is possible to remove layers from an already trained model,
giving insight into the role each layer performs and making the model more
explainable. We evaluate our approach on three video datasets, outperforming
state-of-the-art prediction results on MMNIST and UCF101, and obtaining
competitive results on KTH with 2 to 3 times lower memory usage and
computational cost than the best-scoring approach.
Comment: Submitted to European Conference on Computer Vision
Video Synthesis from the StyleGAN Latent Space
Generative models have shown impressive results in generating synthetic images. However, video synthesis is still difficult to achieve, even for these generative models. The best videos that generative models can currently create are a few seconds long, distorted, and low resolution. For this project, I propose and implement a model to synthesize videos at 1024x1024x32 resolution that include human facial expressions, using static images generated by a Generative Adversarial Network trained on human facial images. To the best of my knowledge, this is the first work that generates realistic videos larger than 256x256 resolution from single starting images. This model improves video synthesis both quantitatively and qualitatively compared to two state-of-the-art models, TGAN and MoCoGAN. In a quantitative comparison, this project reaches a best Average Content Distance (ACD) score of 0.167, compared to 0.305 and 0.201 for TGAN and MoCoGAN, respectively.
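A common way to turn a static image generator into a video generator is to traverse its latent space and render one frame per latent code. The sketch below is a hedged illustration of that general idea, not the paper's actual model: `latent_path` and the plain linear interpolation are hypothetical stand-ins, and each latent in the path would be fed to a pretrained generator (e.g. StyleGAN) to produce a frame.

```python
import random

def lerp(z0, z1, t):
    # linear interpolation between two latent vectors at parameter t in [0, 1]
    return [(1 - t) * a + t * b for a, b in zip(z0, z1)]

def latent_path(z0, z1, n_frames):
    # a sequence of latent codes; rendering each with a pretrained
    # generator (e.g. StyleGAN) would yield one video frame apiece
    return [lerp(z0, z1, i / (n_frames - 1)) for i in range(n_frames)]

random.seed(0)
z0 = [random.gauss(0, 1) for _ in range(8)]   # toy 8-dim latents; StyleGAN uses 512
z1 = [random.gauss(0, 1) for _ in range(8)]
frames = latent_path(z0, z1, 32)              # 32 latents -> a 32-frame clip
```

Smoother or learned trajectories through the latent space would give more natural motion than straight-line interpolation; the sketch only shows the frame-per-latent structure.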