Location Dependency in Video Prediction
Deep convolutional neural networks are used to address many computer vision
problems, including video prediction. The task of video prediction requires
analyzing the video frames, temporally and spatially, and constructing a model
of how the environment evolves. However, convolutional neural networks are
spatially invariant, which prevents them from modeling location-dependent
patterns. In this work, we propose location-biased convolutional
layers to overcome this limitation. The effectiveness of location bias is
evaluated on two architectures: Video Ladder Network (VLN) and Convolutional
Predictive Gating Pyramid (Conv-PGP). The results indicate that encoding
location-dependent features is crucial for the task of video prediction. Our
proposed methods significantly outperform spatially invariant models.
Comment: International Conference on Artificial Neural Networks. Springer,
Cham, 201
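The abstract does not spell out the exact form of the location bias. One common way to let a convolution learn position-dependent filters, shown here purely as an assumed sketch (CoordConv-style coordinate channels, not necessarily the authors' mechanism), is to append normalized row and column coordinates to the feature map before convolving:

```python
import numpy as np

def add_coord_channels(feats):
    """Append normalized (y, x) coordinate channels to a feature map.

    feats has shape (C, H, W); the result has shape (C + 2, H, W).
    A convolution applied to the augmented map can condition on where
    in the frame a pattern occurs, breaking the spatial invariance of
    a plain convolutional layer.
    """
    c, h, w = feats.shape
    ys = np.linspace(-1.0, 1.0, h).reshape(h, 1).repeat(w, axis=1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, w).repeat(h, axis=0)
    return np.concatenate([feats, ys[None], xs[None]], axis=0)

augmented = add_coord_channels(np.zeros((3, 4, 5)))
print(augmented.shape)  # (5, 4, 5)
```

The two extra channels are constant across training, so any filter weight attached to them acts as a learned per-location bias.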
Beyond Monte Carlo Tree Search: Playing Go with Deep Alternative Neural Network and Long-Term Evaluation
Monte Carlo tree search (MCTS) is extremely popular in computer Go; it
determines each action through enormous numbers of simulations in a broad and
deep search tree. However, human experts select most actions by pattern
analysis and careful evaluation rather than brute-force search over millions
of future interactions. In this paper, we propose a computer Go system that
follows experts' way of thinking and
playing. Our system consists of two parts. The first part is a novel deep
alternative neural network (DANN) used to generate candidates of next move.
Compared with an existing deep convolutional neural network (DCNN), DANN
inserts a recurrent layer after each convolutional layer and stacks them in an
alternating manner. We show that this setting preserves more context about
local features and their evolution, which is beneficial for move prediction. The
second part is a long-term evaluation (LTE) module used to provide a reliable
evaluation of candidates rather than a single probability from move predictor.
This is consistent with human experts' style of play, since they can foresee
tens of steps ahead to give an accurate estimation of candidates. In our system, for
each candidate, LTE calculates a cumulative reward after several future
interactions when local variations are settled. Combining criteria from the two
parts, our system determines the optimal choice of next move. For more
comprehensive experiments, we introduce a new professional Go dataset (PGD),
consisting of 253,233 professional records. Experiments on the GoGoD and PGD
datasets show that DANN substantially improves move-prediction performance
over a pure DCNN. When combined with LTE, our system outperforms most relevant
approaches and open-source engines based on MCTS.
Comment: AAAI 201
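The abstract describes LTE as a cumulative reward accumulated over future interactions until local variations settle. A minimal sketch of such a score, assuming a simple discounted sum (the discount factor `gamma` and the per-step rewards are illustrative assumptions, not the paper's exact formulation):

```python
def long_term_value(rewards, gamma=0.95):
    """Discounted cumulative reward of one simulated local variation.

    rewards: per-step rewards observed while playing out a candidate
    move until the local fight settles; gamma discounts later steps.
    """
    value = 0.0
    for r in reversed(rewards):
        value = r + gamma * value
    return value

# A candidate whose payoff only appears after the variation settles:
print(long_term_value([0.0, 0.0, 1.0], gamma=0.5))  # 0.25
```

Scoring each candidate this way, rather than by the move predictor's single probability, is what lets the system rank moves by their long-term outcome.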
Folded Recurrent Neural Networks for Future Video Prediction
Future video prediction is an ill-posed computer vision problem that has recently
received much attention. Its main challenges are the high variability in video
content, the propagation of errors through time, and the non-specificity of the
future frames: given a sequence of past frames there is a continuous
distribution of possible futures. This work introduces bijective Gated
Recurrent Units, a double mapping between the input and output of a GRU layer.
This allows for recurrent auto-encoders with state sharing between encoder and
decoder, stratifying the sequence representation and helping to prevent
capacity problems. We show how with this topology only the encoder or decoder
needs to be applied for input encoding and prediction, respectively. This
reduces the computational cost and avoids re-encoding the predictions when
generating a sequence of frames, mitigating the propagation of errors.
Furthermore, it is possible to remove layers from an already trained model,
giving insight into the role performed by each layer and making the model more
explainable. We evaluate our approach on three video datasets, outperforming
state-of-the-art prediction results on MMNIST and UCF101, and obtaining
competitive results on KTH with 2 and 3 times lower memory usage and
computational cost than the best-scoring approach.
Comment: Submitted to European Conference on Computer Vision
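A rough sketch of the state-sharing topology the abstract describes, assuming plain (non-convolutional) GRU cells and illustrative dimensions. The actual bijective GRU maps both directions through a single layer's weights, so this two-cell version only approximates the idea that encoding and prediction each need one half of the model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (biases omitted) for illustration only."""
    def __init__(self, in_dim, hid_dim, rng):
        k = in_dim + hid_dim
        self.Wz = 0.1 * rng.standard_normal((hid_dim, k))
        self.Wr = 0.1 * rng.standard_normal((hid_dim, k))
        self.Wh = 0.1 * rng.standard_normal((hid_dim, k))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)
        r = sigmoid(self.Wr @ xh)
        cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1.0 - z) * h + z * cand

rng = np.random.default_rng(0)
enc = GRUCell(in_dim=8, hid_dim=16, rng=rng)   # frames -> shared state
dec = GRUCell(in_dim=16, hid_dim=8, rng=rng)   # shared state -> frames

# Encoding: only the encoder runs over the observed frames.
state = np.zeros(16)
for frame in rng.standard_normal((5, 8)):
    state = enc.step(frame, state)

# Prediction: only the decoder runs, reading the shared state directly.
# Predicted frames are never fed back through the encoder, which is how
# this topology avoids re-encoding predictions and limits error growth.
pred = np.zeros(8)
predictions = []
for _ in range(3):
    pred = dec.step(state, pred)
    predictions.append(pred)
```

Because the decoder reads the encoder's state rather than re-encoding its own output, generating a long rollout costs one decoder pass per frame instead of a full encode-decode cycle.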