Weakly-Supervised Action Segmentation with Iterative Soft Boundary Assignment
In this work, we address the task of weakly-supervised human action
segmentation in long, untrimmed videos. Recent methods have relied on expensive
learning models, such as Recurrent Neural Networks (RNNs) and Hidden Markov
Models (HMMs). However, these methods incur high computational cost and thus
cannot be deployed at scale. To overcome these limitations, the keys to our
design are efficiency and scalability. We propose a novel action
modeling framework, which consists of a new temporal convolutional network,
named Temporal Convolutional Feature Pyramid Network (TCFPN), for predicting
frame-wise action labels, and a novel training strategy for weakly-supervised
sequence modeling, named Iterative Soft Boundary Assignment (ISBA), to align
action sequences and update the network in an iterative fashion. The proposed
framework is evaluated on two benchmark datasets, Breakfast and Hollywood
Extended, with four different evaluation metrics. Extensive experimental
results show that our methods achieve competitive or superior performance to
state-of-the-art methods. Comment: CVPR 201
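As a rough illustration of the iterative alignment idea described in this abstract (a minimal sketch, not the authors' TCFPN/ISBA implementation): the code below initialises frame labels by splitting a video uniformly across its ordered action transcript, "trains" a toy nearest-class-mean classifier on that alignment, then nudges each segment boundary toward the classifier's predictions. The function names, the nearest-mean classifier, and the assumption that consecutive transcript actions are distinct are all illustrative assumptions.

```python
import numpy as np

def uniform_init(num_frames, transcript):
    """Initial alignment: split the frames evenly across the ordered actions."""
    bounds = np.linspace(0, num_frames, len(transcript) + 1).astype(int)
    labels = np.empty(num_frames, dtype=int)
    for i, action in enumerate(transcript):
        labels[bounds[i]:bounds[i + 1]] = action
    return labels

def isba_sketch(feats, transcript, num_classes, iters=5, step=2):
    """Toy iterative boundary refinement: retrain a nearest-class-mean
    classifier on the current alignment, then shift each boundary toward
    the neighbouring action the classifier predicts beside it."""
    T = feats.shape[0]
    labels = uniform_init(T, transcript)
    for _ in range(iters):
        # "Train": one prototype (mean feature vector) per action class.
        protos = np.stack([
            feats[labels == c].mean(axis=0) if np.any(labels == c)
            else np.zeros(feats.shape[1])
            for c in range(num_classes)
        ])
        # Frame-wise prediction = nearest prototype.
        pred = np.argmin(((feats[:, None] - protos[None]) ** 2).sum(-1), axis=1)
        # Nudge each boundary toward the predictions around it.
        bounds = np.flatnonzero(np.diff(labels)) + 1
        new_labels = labels.copy()
        for b, left, right in zip(bounds, transcript[:-1], transcript[1:]):
            if pred[min(b, T - 1)] == left:      # frames after b look like `left`
                new_labels[b:min(b + step, T)] = left
            elif pred[max(b - 1, 0)] == right:   # frames before b look like `right`
                new_labels[max(b - step, 0):b] = right
        labels = new_labels
    return labels
```

On a synthetic one-dimensional example with a true boundary at frame 20 and a uniform initialisation at frame 15, the boundary drifts toward its true position over the iterations; the real method instead retrains a temporal convolutional network and softens labels near boundaries at each round.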
Action Recognition from Single Timestamp Supervision in Untrimmed Videos
Recognising actions in videos relies on labelled supervision during training,
typically the start and end times of each action instance. This supervision is
not only subjective, but also expensive to acquire. Weak video-level
supervision has been successfully exploited for recognition in untrimmed
videos, however it is challenged when the number of different actions in
training videos increases. We propose a method that is supervised by single
timestamps located around each action instance, in untrimmed videos. We replace
expensive action bounds with sampling distributions initialised from these
timestamps. We then use the classifier's response to iteratively update the
sampling distributions. We demonstrate that these distributions converge to the
location and extent of discriminative action segments. We evaluate our method
on three datasets for fine-grained recognition, with increasing number of
different actions per video, and show that single timestamps offer a reasonable
compromise between recognition performance and labelling effort, performing
comparably to full temporal supervision. Our update method improves top-1 test
accuracy by up to 5.4% across the evaluated datasets. Comment: CVPR 201
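To make the sampling-distribution idea concrete, here is a minimal numerical sketch under stated assumptions: it uses a Gaussian sampling distribution (the paper uses plateau-shaped distributions) initialised at the annotated timestamp, and refits its centre and width to the classifier's per-frame confidence for the action, so the distribution drifts toward the discriminative segment. The function name, the damping factor `lr`, and the Gaussian form are illustrative assumptions.

```python
import numpy as np

def update_distribution(conf, center, width, lr=0.5):
    """One refinement step: reweight frames by the current sampling
    distribution times the classifier's per-frame confidence, then move
    the distribution toward the weighted mean/std of those frames."""
    t = np.arange(len(conf), dtype=float)
    w = np.exp(-0.5 * ((t - center) / width) ** 2) * conf
    w = w / (w.sum() + 1e-8)
    new_center = (w * t).sum()
    new_width = np.sqrt((w * (t - new_center) ** 2).sum()) + 1e-3
    # Damped update so the distribution drifts rather than jumps.
    return center + lr * (new_center - center), width + lr * (new_width - width)

# Classifier responds strongly around frame 70 of a 100-frame video,
# but the single annotated timestamp was at frame 40.
conf = np.exp(-0.5 * ((np.arange(100) - 70) / 5.0) ** 2)
center, width = 40.0, 10.0
for _ in range(10):
    center, width = update_distribution(conf, center, width)
```

After the ten updates the centre has moved well away from the annotated frame 40 toward the region around frame 70 where the classifier responds, and the width has narrowed, mirroring the convergence behaviour the abstract describes.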
Temporal Action Segmentation: An Analysis of Modern Techniques
Temporal action segmentation (TAS) in videos aims at densely labeling the
frames of minutes-long videos with multiple action classes. As a
long-range video understanding task, researchers have developed an extended
collection of methods and examined their performance using various benchmarks.
Despite the rapid growth of TAS techniques in recent years, no systematic
survey has been conducted in this area. This survey analyzes and summarizes
the most significant contributions and trends. In particular, we first examine
the task definition, common benchmarks, types of supervision, and prevalent
evaluation measures. In addition, we systematically investigate two essential
techniques of this topic, i.e., frame representation and temporal modeling,
which have been studied extensively in the literature. We then conduct a
thorough review of existing TAS works categorized by their levels of
supervision and conclude our survey by identifying and emphasizing several
research gaps. In addition, we have curated a list of TAS resources, which is
available at https://github.com/nus-cvml/awesome-temporal-action-segmentation. Comment: 19 pages, 9 figures, 8 tables