Connectionist Temporal Modeling for Weakly Supervised Action Labeling
We propose a weakly-supervised framework for action labeling in video, where
only the order of occurring actions is required during training time. The key
challenge is that the per-frame alignments between the input (video) and label
(action) sequences are unknown during training. We address this by introducing
the Extended Connectionist Temporal Classification (ECTC) framework to
efficiently evaluate all possible alignments via dynamic programming and
explicitly enforce their consistency with frame-to-frame visual similarities.
This protects the model from being misled by visually inconsistent or
degenerate alignments without the need for temporal supervision. We further
extend our framework to the semi-supervised case when a few frames are sparsely
annotated in a video. With less than 1% of labeled frames per video, our method
is able to outperform existing semi-supervised approaches and achieve
comparable performance to that of fully supervised approaches.
Comment: To appear in ECCV 2016
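The core of ECTC is a CTC-style dynamic program whose transitions are
reweighted by frame-to-frame visual similarity. Below is a minimal NumPy
sketch of that idea; the particular weighting (similarity-scaled "stay" vs.
"advance" transitions) and the function name are our own simplifying
assumptions, not the authors' exact formulation.

```python
import numpy as np

def ectc_forward(log_probs, labels, sim, blank=0):
    """Similarity-weighted CTC forward pass (illustrative sketch).

    log_probs: (T, C) per-frame log class posteriors.
    labels:    ordered action sequence, e.g. [3, 1, 4].
    sim:       (T,) similarity of frame t to frame t-1, in [0, 1].
    Returns the log-probability mass of all alignments, with "stay"
    transitions favored when adjacent frames look alike.
    """
    T, _ = log_probs.shape
    ext = [blank]                       # interleave blanks: -, a1, -, a2, ...
    for l in labels:
        ext += [l, blank]
    S = len(ext)

    alpha = np.full((T, S), -np.inf)    # forward scores in log space
    alpha[0, 0] = log_probs[0, ext[0]]
    alpha[0, 1] = log_probs[0, ext[1]]

    for t in range(1, T):
        stay = np.log(sim[t] + 1e-8)        # similar frames -> same label
        move = np.log(1.0 - sim[t] + 1e-8)  # dissimilar -> allow a switch
        for s in range(S):
            terms = [alpha[t - 1, s] + stay]
            if s > 0:
                terms.append(alpha[t - 1, s - 1] + move)
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                terms.append(alpha[t - 1, s - 2] + move)
            alpha[t, s] = np.logaddexp.reduce(terms) + log_probs[t, ext[s]]

    # accept ending on the last label or the trailing blank
    return np.logaddexp(alpha[-1, -1], alpha[-1, -2])
```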
Unconstrained Scene Text and Video Text Recognition for Arabic Script
Building robust recognizers for Arabic has always been challenging. We
demonstrate the effectiveness of an end-to-end trainable CNN-RNN hybrid
architecture in recognizing Arabic text in videos and natural scenes. We
outperform previous state-of-the-art on two publicly available video text
datasets - ALIF and ACTIV. For the scene text recognition task, we introduce a
new Arabic scene text dataset and establish baseline results. For scripts like
Arabic, a major challenge in developing robust recognizers is the lack of large
quantities of annotated data. We overcome this by synthesising millions of Arabic
text images from a large vocabulary of Arabic words and phrases. Our
implementation builds on the model introduced in [37], which has proven
quite effective for English scene text recognition. The model follows a
segmentation-free, sequence-to-sequence transcription approach. The network
transcribes a sequence of convolutional features from the input image to a
sequence of target labels. This does away with the need to segment the input
image into constituent characters/glyphs, which is often difficult for Arabic
script. Further, the ability of RNNs to model contextual dependencies yields
superior recognition results.
Comment: 5 pages
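As a rough illustration of this segmentation-free transcription pipeline,
here is a minimal PyTorch sketch: convolutional features are reshaped so
that image width becomes the time axis, a bidirectional LSTM models
context, and the per-timestep outputs can be trained with CTC. All layer
sizes are our own assumptions, not the configuration of the model in [37].

```python
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal CNN-RNN transcription sketch (hyperparameters are
    illustrative, not those of the referenced model)."""
    def __init__(self, num_classes, height=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat = 128 * (height // 4)          # collapse height into features
        self.rnn = nn.LSTM(feat, 256, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, num_classes + 1)  # +1 for the CTC blank

    def forward(self, x):                   # x: (B, 1, H, W) grayscale
        f = self.cnn(x)                     # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # width = time
        out, _ = self.rnn(f)
        return self.fc(out)                 # (B, T, num_classes + 1)
```

Training would apply torch.nn.CTCLoss to the log-softmaxed outputs, so the
input image never needs to be segmented into characters or glyphs.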
Weakly Supervised Action Learning with RNN based Fine-to-coarse Modeling
We present an approach for weakly supervised learning of human actions. Given
a set of videos and an ordered list of the occurring actions, the goal is to
infer start and end frames of the related action classes within the video and
to train the respective action classifiers without any need for hand labeled
frame boundaries. To address this task, we propose a combination of a
discriminative representation of subactions, modeled by a recurrent neural
network, and a coarse probabilistic model to allow for a temporal alignment and
inference over long sequences. While this system alone already generates good
results, we show that the performance can be further improved by adapting
the number of subactions to the characteristics of the different action
classes. To this end, we iteratively realign and reestimate the subaction
classes during training. The proposed system is evaluated on two benchmark
datasets, Breakfast and Hollywood Extended, showing competitive performance
on various weak learning tasks such as temporal action segmentation and
action alignment.
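The realignment step can be made concrete with a Viterbi decoding that
monotonically maps frames to positions in the ordered transcript. The paper
couples an RNN with a coarse probabilistic model over subactions; the
sketch below is a simplified stand-in under our own assumptions (one state
per transcript entry, frame scores from any frame-level classifier).

```python
import numpy as np

def transcript_viterbi(log_probs, transcript):
    """Best monotonic alignment of T frames to an ordered action list.

    log_probs:  (T, C) per-frame log class scores (e.g. from an RNN).
    transcript: ordered action indices, e.g. [3, 1, 3]; assumes T >= N.
    Returns a length-T array of transcript positions (0..N-1).
    """
    T, N = len(log_probs), len(transcript)
    score = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    score[0, 0] = log_probs[0, transcript[0]]
    for t in range(1, T):
        for n in range(min(t + 1, N)):      # can't reach position n > t
            stay = score[t - 1, n]
            adv = score[t - 1, n - 1] if n > 0 else -np.inf
            back[t, n] = n if stay >= adv else n - 1
            score[t, n] = max(stay, adv) + log_probs[t, transcript[n]]
    path = [N - 1]                          # must end on the last action
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return np.array(path[::-1])
```

Alternating this alignment with retraining of the frame-level classifier
on the induced segment labels yields the realignment/reestimation loop
described above.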
Action Sets: Weakly Supervised Action Segmentation without Ordering Constraints
Action detection and temporal segmentation of actions in videos are topics of
increasing interest. While fully supervised systems have gained much attention
lately, full annotation of each action within the video is costly and
impractical for large amounts of video data. Thus, weakly supervised action
detection and temporal segmentation methods are of great importance. While most
works in this area assume an ordered sequence of occurring actions to be given,
our approach only uses a set of actions. Such action sets provide much less
supervision since neither action ordering nor the number of action occurrences
are known. In exchange, they can be easily obtained, for instance, from
meta-tags, while ordered sequences still require human annotation. We introduce
a system that automatically learns to temporally segment and label actions in a
video, where the only supervision used is action sets. An evaluation
on three datasets shows that our method still achieves good results although
the amount of supervision is significantly smaller than for other related
methods.
Comment: CVPR 2018
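To make the supervision setting concrete, here is a deliberately naive
decoding baseline of our own construction (far simpler than the paper's
model): per-frame scores are restricted to the given action set, then
smoothed with a sliding majority vote to suppress degenerate one-frame
segments.

```python
import numpy as np

def set_constrained_labeling(log_probs, action_set, radius=10):
    """Naive set-constrained frame labeling (illustrative baseline).

    log_probs:  (T, C) per-frame log class scores.
    action_set: unordered set of action indices known to occur.
    """
    allowed = sorted(action_set)
    restricted = log_probs[:, allowed]              # (T, |set|)
    labels = np.array(allowed)[restricted.argmax(axis=1)]
    smoothed = labels.copy()
    T = len(labels)
    for t in range(T):                              # majority-vote window
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        vals, counts = np.unique(labels[lo:hi], return_counts=True)
        smoothed[t] = vals[counts.argmax()]
    return smoothed
```

Note that this baseline neither guarantees that every action in the set
appears nor models durations; handling both within a learned model is
where an approach like the paper's does the heavy lifting.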
Lipreading with Long Short-Term Memory
Lipreading, i.e. speech recognition from visual-only recordings of a
speaker's face, can be achieved with a processing pipeline based solely on
neural networks, yielding significantly better accuracy than conventional
methods. Feed-forward and recurrent neural network layers (namely Long
Short-Term Memory; LSTM) are stacked to form a single structure which is
trained by back-propagating error gradients through all the layers. The
performance of such a stacked network was experimentally evaluated and compared
to a standard Support Vector Machine classifier using conventional computer
vision features (Eigenlips and Histograms of Oriented Gradients). The
evaluation was performed on data from 19 speakers of the publicly available
GRID corpus. With 51 different words to classify, we report a best word
accuracy on held-out evaluation speakers of 79.6% using the end-to-end neural
network-based solution (11.6% improvement over the best feature-based solution
evaluated).
Comment: Accepted for publication at ICASSP 2016
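A minimal PyTorch sketch of such a stacked feed-forward/LSTM word
classifier is given below; the layer sizes, and classifying from the final
LSTM state, are our own assumptions rather than the paper's exact
configuration.

```python
import torch.nn as nn

class LipreadingNet(nn.Module):
    """Feed-forward layers per frame, then an LSTM over time, then a
    word classifier (dimensions are illustrative)."""
    def __init__(self, input_dim, num_words=51):   # 51 GRID words
        super().__init__()
        self.frontend = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
        )
        self.lstm = nn.LSTM(256, 128, batch_first=True)
        self.classifier = nn.Linear(128, num_words)

    def forward(self, frames):          # frames: (B, T, input_dim)
        h = self.frontend(frames)       # applied independently per frame
        out, _ = self.lstm(h)           # temporal integration
        return self.classifier(out[:, -1])  # logits from last time step
```

The whole stack is differentiable end to end, so error gradients from the
word-level loss back-propagate through the LSTM into the frame-level
layers, as described above.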
Lip Reading Sentences in the Wild
The goal of this work is to recognise phrases and sentences being spoken by a
talking face, with or without the audio. Unlike previous works that have
focussed on recognising a limited number of words or phrases, we tackle lip
reading as an open-world problem - unconstrained natural language sentences,
and in-the-wild videos.
Our key contributions are: (1) a 'Watch, Listen, Attend and Spell' (WLAS)
network that learns to transcribe videos of mouth motion to characters; (2) a
curriculum learning strategy to accelerate training and to reduce overfitting;
(3) a 'Lip Reading Sentences' (LRS) dataset for visual speech recognition,
consisting of over 100,000 natural sentences from British television.
The WLAS model trained on the LRS dataset surpasses the performance of all
previous work on standard lip reading benchmark datasets, often by a
significant margin. This lip reading performance beats a professional lip
reader on videos from BBC television, and we also demonstrate that visual
information helps to improve speech recognition performance even when the audio
is available.
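A generic sketch of one "attend and spell" decoding step, with additive
attention over encoder states, might look as follows; the dimensions,
character-set size, and layer choices are illustrative assumptions, not
the WLAS architecture itself.

```python
import torch
import torch.nn as nn

class AttendAndSpell(nn.Module):
    """One-step attention decoder over precomputed encoder states."""
    def __init__(self, enc_dim=256, hid=256, n_chars=40):
        super().__init__()
        self.embed = nn.Embedding(n_chars, hid)
        self.cell = nn.LSTMCell(hid + enc_dim, hid)
        self.attn = nn.Linear(enc_dim + hid, 1)     # additive scoring
        self.out = nn.Linear(hid + enc_dim, n_chars)

    def step(self, prev_char, state, enc):  # enc: (B, T, enc_dim)
        h, c = state
        # score every encoder state against the decoder state h
        scores = self.attn(torch.cat(
            [enc, h.unsqueeze(1).expand(-1, enc.size(1), -1)], dim=-1))
        ctx = (scores.softmax(dim=1) * enc).sum(dim=1)      # (B, enc_dim)
        h, c = self.cell(
            torch.cat([self.embed(prev_char), ctx], dim=-1), (h, c))
        return self.out(torch.cat([h, ctx], dim=-1)), (h, c)
```

Running this step character by character, feeding back the argmax (or a
beam) of each output, transcribes the video; a second attention head over
audio encoder states would let the decoder combine both modalities in the
same way.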