Action Sets: Weakly Supervised Action Segmentation without Ordering Constraints
Action detection and temporal segmentation of actions in videos are topics of
increasing interest. While fully supervised systems have gained much attention
lately, full annotation of each action within the video is costly and
impractical for large amounts of video data. Thus, weakly supervised action
detection and temporal segmentation methods are of great importance. While most
works in this area assume an ordered sequence of occurring actions to be given,
our approach only uses a set of actions. Such action sets provide much less
supervision since neither action ordering nor the number of action occurrences
is known. In exchange, they can be easily obtained, for instance, from
meta-tags, while ordered sequences still require human annotation. We introduce
a system that automatically learns to temporally segment and label actions in a
video, where the only supervision used is action sets. An evaluation
on three datasets shows that our method still achieves good results although
the amount of supervision is significantly smaller than for other related
methods.
Comment: CVPR 2018
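To make the weak supervision concrete: the only label for a training video is the set of actions that occur somewhere in it, with neither their order nor their counts. The sketch below is a generic set-level loss in PyTorch (max-pooling frame scores over time), assumed here for illustration only; it is not the authors' model, and the dimensions and the loss form are placeholders.

```python
# Illustrative set-level loss for training from action sets (not the paper's method).
import torch
import torch.nn.functional as F

def action_set_loss(frame_logits, action_set, num_classes):
    """frame_logits: (T, C) per-frame class scores; action_set: set of class ids."""
    probs = frame_logits.softmax(dim=1)      # (T, C) frame-wise class probabilities
    video_probs = probs.max(dim=0).values    # (C,) max-pool over time
    present = torch.zeros(num_classes)
    present[list(action_set)] = 1.0
    # Actions in the set should fire in at least one frame; all others should not.
    return F.binary_cross_entropy(video_probs, present)

# Example: 100 frames, 5 action classes, the video's action set is {1, 3}
frame_logits = torch.randn(100, 5, requires_grad=True)
loss = action_set_loss(frame_logits, {1, 3}, num_classes=5)
loss.backward()
```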
Weakly Supervised Action Learning with RNN based Fine-to-coarse Modeling
We present an approach for weakly supervised learning of human actions. Given
a set of videos and an ordered list of the occurring actions, the goal is to
infer start and end frames of the related action classes within the video and
to train the respective action classifiers without any need for hand-labeled
frame boundaries. To address this task, we propose a combination of a
discriminative representation of subactions, modeled by a recurrent neural
network, and a coarse probabilistic model to allow for temporal alignment and
inference over long sequences. While this system alone already generates good
results, we show that the performance can be further improved by adapting
the number of subactions to the characteristics of the different action
classes. To this end, we adapt the number of subaction classes by iterating
realignment and reestimation during training. The proposed system is evaluated
on two benchmark datasets, Breakfast and Hollywood Extended, showing
competitive performance on various weak learning tasks such as temporal action
segmentation and action alignment.
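A central step in this weakly supervised setting is aligning frames to the given ordered action list. The snippet below is a bare-bones monotonic dynamic-programming alignment over per-frame log-scores, included only as a sketch; the paper's RNN-based subaction representation and length modeling are not reproduced here.

```python
# Minimal monotonic alignment of frames to an ordered action transcript (illustrative only).
import numpy as np

def align_transcript(frame_scores, transcript):
    """frame_scores: (T, C) per-frame log-scores; transcript: ordered list of class ids.
    Returns the transcript position assigned to each frame."""
    T, _ = frame_scores.shape
    N = len(transcript)
    dp = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    dp[0, 0] = frame_scores[0, transcript[0]]
    for t in range(1, T):
        for n in range(N):
            stay = dp[t - 1, n]                              # remain in the same action
            move = dp[t - 1, n - 1] if n > 0 else -np.inf    # advance to the next action
            back[t, n] = 0 if stay >= move else 1
            dp[t, n] = max(stay, move) + frame_scores[t, transcript[n]]
    # Backtrack from the last frame and the last transcript position.
    path, n = [], N - 1
    for t in range(T - 1, 0, -1):
        path.append(n)
        n -= back[t, n]
    path.append(n)
    return path[::-1]
```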
Fully Convolutional Networks for Continuous Sign Language Recognition
Continuous sign language recognition (SLR) is a challenging task that
requires learning on both spatial and temporal dimensions of signing frame
sequences. Most recent work accomplishes this by using CNN and RNN hybrid
networks. However, training these networks is generally non-trivial, and most
of them fail to learn unseen sequence patterns, leading to unsatisfactory
performance in online recognition. In this paper, we propose a fully
convolutional network (FCN) for online SLR to concurrently learn spatial and
temporal features from weakly annotated video sequences with only
sentence-level annotations given. A gloss feature enhancement (GFE) module is
introduced in the proposed network to enforce better sequence alignment
learning. The proposed network is end-to-end trainable without any
pre-training. We conduct experiments on two large-scale SLR datasets.
Experiments show that our method for continuous SLR is effective and performs
well in online recognition.
Comment: Accepted to ECCV 2020
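With only sentence-level gloss annotations, a common way to train a continuous SLR network is a CTC objective over frame-wise gloss scores. The PyTorch sketch below shows that generic setup; the shapes and vocabulary size are placeholders, and the proposed FCN and its GFE module are not reproduced.

```python
# Generic CTC training setup for sentence-level gloss supervision (illustrative shapes).
import torch
import torch.nn as nn

num_glosses = 1000                                 # placeholder vocabulary size
frame_scores = torch.randn(80, 2, num_glosses + 1, requires_grad=True)  # (T, batch, C); class 0 = blank
log_probs = frame_scores.log_softmax(dim=2)

targets = torch.randint(1, num_glosses + 1, (2, 12))      # gloss ids per sentence
input_lengths = torch.full((2,), 80, dtype=torch.long)    # frames per video
target_lengths = torch.full((2,), 12, dtype=torch.long)   # glosses per sentence

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```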
Two-Stream Network for Sign Language Recognition and Translation
Sign languages are visual languages using manual articulations and non-manual
elements to convey information. For sign language recognition and translation,
the majority of existing approaches directly encode RGB videos into hidden
representations. RGB videos, however, are raw signals with substantial visual
redundancy, leading the encoder to overlook the key information for sign
language understanding. To mitigate this problem and better incorporate domain
knowledge, such as handshape and body movement, we introduce a dual visual
encoder containing two separate streams to model both the raw videos and the
keypoint sequences generated by an off-the-shelf keypoint estimator. To make
the two streams interact with each other, we explore a variety of techniques,
including bidirectional lateral connection, sign pyramid network with auxiliary
supervision, and frame-level self-distillation. The resulting model is called
TwoStream-SLR, which performs sign language recognition (SLR).
TwoStream-SLR is extended to a sign language translation (SLT) model,
TwoStream-SLT, by simply attaching an extra translation network.
Experimentally, our TwoStream-SLR and TwoStream-SLT achieve state-of-the-art
performance on SLR and SLT tasks across a series of datasets including
Phoenix-2014, Phoenix-2014T, and CSL-Daily.
Comment: Accepted by NeurIPS 2022
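The overall shape of such a dual-encoder design can be sketched as two parallel encoders, one over RGB-derived features and one over keypoint features, exchanging information through a bidirectional lateral connection before fusion. The skeleton below is purely illustrative; the layer types, dimensions, and fusion head are assumptions, not the TwoStream-SLR architecture.

```python
# Illustrative skeleton of a dual-stream encoder with a bidirectional lateral connection.
import torch
import torch.nn as nn

class DualStreamEncoder(nn.Module):
    def __init__(self, video_dim=1024, kpt_dim=256, hidden=512, num_glosses=1000):
        super().__init__()
        self.video_enc = nn.Linear(video_dim, hidden)   # video stream
        self.kpt_enc = nn.Linear(kpt_dim, hidden)       # keypoint stream
        # Bidirectional lateral connection: each stream receives the other's features.
        self.v_from_k = nn.Linear(hidden, hidden)
        self.k_from_v = nn.Linear(hidden, hidden)
        self.head = nn.Linear(2 * hidden, num_glosses + 1)  # fused gloss scores (+1 for a CTC blank)

    def forward(self, video_feats, kpt_feats):
        # video_feats: (T, video_dim); kpt_feats: (T, kpt_dim)
        v = torch.relu(self.video_enc(video_feats))
        k = torch.relu(self.kpt_enc(kpt_feats))
        v = v + self.v_from_k(k)     # keypoint -> video
        k = k + self.k_from_v(v)     # video -> keypoint
        return self.head(torch.cat([v, k], dim=-1))   # (T, num_glosses + 1)

model = DualStreamEncoder()
gloss_scores = model(torch.randn(80, 1024), torch.randn(80, 256))
```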