Sequence to Sequence -- Video to Text
Real-world videos often have complex dynamics, and methods for generating
open-domain video descriptions should be sensitive to temporal structure and
allow both the input (a sequence of frames) and the output (a sequence of
words) to be of variable length. To approach this problem, we propose a novel end-to-end
sequence-to-sequence model to generate captions for videos. For this we exploit
recurrent neural networks, specifically LSTMs, which have demonstrated
state-of-the-art performance in image caption generation. Our LSTM model is
trained on video-sentence pairs and learns to associate a sequence of video
frames to a sequence of words in order to generate a description of the event
in the video clip. Our model is naturally able to learn the temporal structure
of the frame sequence as well as a sequence model of the generated
sentences, i.e., a language model. We evaluate several variants of our model
that exploit different visual features on a standard set of YouTube videos and
two movie description datasets (M-VAD and MPII-MD).
Comment: ICCV 2015 camera-ready. Includes code, project page, and LSMDC challenge result.
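The abstract's core idea, a single LSTM that first reads frame features and then emits words, with the unused input slot zero-padded in each stage, can be sketched in a toy numpy form. Everything here (dimensions, random weights, the `caption` helper) is hypothetical and untrained; it only illustrates the two-stage encode/decode loop, not the paper's actual model or features.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    # One LSTM step; gate order in z: input, forget, output, candidate.
    z = W @ x + U @ h + b
    H = h.size
    i, f, o = (1 / (1 + np.exp(-z[k * H:(k + 1) * H])) for k in range(3))
    g = np.tanh(z[3 * H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Toy sizes: CNN frame features, word embeddings, hidden state, vocabulary.
feat_dim, emb_dim, hidden, vocab = 16, 8, 12, 20
in_dim = feat_dim + emb_dim  # frame slot + word slot, one is zero-padded

W = rng.normal(scale=0.1, size=(4 * hidden, in_dim))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
W_out = rng.normal(scale=0.1, size=(vocab, hidden))  # hidden -> word logits
embed = rng.normal(scale=0.1, size=(vocab, emb_dim))

def caption(frames, max_words=5, bos=0):
    h, c = np.zeros(hidden), np.zeros(hidden)
    # Encoding stage: feed frame features, zero-pad the word slot.
    for f in frames:
        x = np.concatenate([f, np.zeros(emb_dim)])
        h, c = lstm_step(x, h, c, W, U, b)
    # Decoding stage: zero-pad the frame slot, feed the previous word.
    words, prev = [], bos
    for _ in range(max_words):
        x = np.concatenate([np.zeros(feat_dim), embed[prev]])
        h, c = lstm_step(x, h, c, W, U, b)
        prev = int(np.argmax(W_out @ h))  # greedy decoding
        words.append(prev)
    return words

frames = rng.normal(size=(6, feat_dim))  # stand-in for 6 CNN frame features
caption_ids = caption(frames)
```

Because one set of LSTM weights handles both stages, the decoder's hidden state at the first word already summarizes the whole frame sequence, which is what lets the model handle variable-length inputs and outputs.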
Scope & Sequence
WELCOME to WINDOWS on the INQUIRY CLASSROOM!
You have landed on a piece of a National Science Foundation project (DUE 1245730) directed by Professor Chris Bauer, Chemistry Department, University of New Hampshire. This is one part of a fully documented inquiry-based university science course called “Fire & Ice,” which explores the nature of heat and temperature. The site offers multiple video perspectives, commentary from instructors and students, and documents for all course materials (agendas, instructions, student work). It’s too complicated to explain here; take a look at the user orientation document at this link
Bandit Structured Prediction for Neural Sequence-to-Sequence Learning
Bandit structured prediction describes a stochastic optimization framework
where learning is performed from partial feedback. This feedback is received in
the form of a task-loss evaluation of a predicted output structure, without
having access to gold standard structures. We advance this framework by lifting
linear bandit learning to neural sequence-to-sequence learning problems using
attention-based recurrent neural networks. Furthermore, we show how to
incorporate control variates into our learning algorithms for variance
reduction and improved generalization. We present an evaluation on a neural
machine translation task that shows improvements of up to 5.89 BLEU points for
domain adaptation from simulated bandit feedback.
Comment: ACL 2017.
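The learning setup described here, sample an output structure, observe only its task loss, and reduce gradient variance with a control variate, can be illustrated with a tiny score-function (policy-gradient) sketch. This is a hypothetical linear toy, not the paper's attention-based NMT system: the candidate set, features, and loss table are invented, and the control variate is simply a running average of observed losses used as a baseline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: 3 candidate output structures per input, linear scores.
n_cands, n_feats = 3, 4
theta = np.zeros(n_feats)
X = rng.normal(size=(n_cands, n_feats))   # candidate feature vectors
true_loss = np.array([0.1, 0.9, 0.5])     # task loss the environment reveals

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

baseline, lr = 0.0, 0.1
for t in range(1, 501):
    p = softmax(X @ theta)
    k = rng.choice(n_cands, p=p)          # sample one structure from the policy
    loss = true_loss[k]                   # bandit feedback: only this loss, no gold
    # Score-function gradient; grad log p_k = X[k] - E_p[X].
    # Subtracting the running-average baseline is the variance-reducing control variate.
    grad = (loss - baseline) * (X[k] - p @ X)
    theta -= lr * grad                    # descend expected task loss
    baseline += (loss - baseline) / t     # update the running average

best = int(np.argmax(softmax(X @ theta)))
```

Without the baseline term, every sampled loss pushes the gradient in the same sign regardless of whether the sample was better or worse than average; centering the loss keeps the gradient estimator unbiased while shrinking its variance, which is the role control variates play in the paper's algorithms.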
