Weakly-supervised Temporal Action Localization by Uncertainty Modeling
Weakly-supervised temporal action localization aims to learn to detect
temporal intervals of action classes using only video-level labels. To this end,
it is crucial to separate frames of action classes from the background frames
(i.e., frames not belonging to any action classes). In this paper, we present a
new perspective on background frames, modeling them as
out-of-distribution samples owing to their inconsistency. Then, background
frames can be detected by estimating the probability of each frame being
out-of-distribution, known as uncertainty, but it is infeasible to directly
learn uncertainty without frame-level labels. To realize uncertainty
learning in the weakly-supervised setting, we leverage the multiple instance
learning formulation. Moreover, we introduce a background entropy loss
to better discriminate background frames by encouraging their in-distribution
(action) probabilities to be uniformly distributed over all action classes.
Experimental results show that our uncertainty modeling is effective at
alleviating the interference of background frames and brings a large
performance gain without bells and whistles. We demonstrate that our model
significantly outperforms state-of-the-art methods on the THUMOS'14 and
ActivityNet (1.2 & 1.3) benchmarks. Our code is available at
https://github.com/Pilhyeon/WTAL-Uncertainty-Modeling.
Comment: Accepted by the 35th AAAI Conference on Artificial Intelligence (AAAI 2021).
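The background entropy loss described in this abstract admits a compact implementation. Below is a minimal PyTorch sketch of that term alone, assuming background frames have already been selected (e.g., via the paper's MIL-based uncertainty estimates); the function name and tensor shapes are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def background_entropy_loss(bg_logits: torch.Tensor) -> torch.Tensor:
    # bg_logits: (num_bg_frames, num_classes) class scores for frames
    # assumed to have been selected as background beforehand.
    probs = F.softmax(bg_logits, dim=-1)
    # Entropy over action classes is maximal when the distribution is
    # uniform, so minimizing the negative entropy encourages background
    # frames' action probabilities to spread uniformly over all classes.
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1)
    return -entropy.mean()
```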
D2-Net: Weakly-Supervised Action Localization via Discriminative Embeddings and Denoised Activations
This work proposes a weakly-supervised temporal action localization
framework, called D2-Net, which strives to temporally localize actions using
video-level supervision. Our main contribution is the introduction of a novel
loss formulation, which jointly enhances the discriminability of latent
embeddings and robustness of the output temporal class activations with respect
to foreground-background noise caused by weak supervision. The proposed
formulation comprises a discriminative and a denoising loss term for enhancing
temporal action localization. The discriminative term incorporates a
classification loss and utilizes a top-down attention mechanism to enhance the
separability of latent foreground-background embeddings. The denoising loss
term explicitly addresses the foreground-background noise in class activations
by simultaneously maximizing intra-video and inter-video mutual information
using a bottom-up attention mechanism. As a result, activations in the
foreground regions are emphasized whereas those in the background regions are
suppressed, thereby leading to more robust predictions. Comprehensive
experiments are performed on two benchmarks: THUMOS14 and ActivityNet1.2. Our
D2-Net performs favorably in comparison to the existing methods on both
datasets, achieving gains as high as 3.6% in terms of mean average precision on
THUMOS14.
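As a rough illustration of combining a discriminative and a denoising term under video-level supervision, here is a hedged PyTorch sketch. It substitutes simple proxies (attention-weighted classification and foreground-background embedding separation) for D2-Net's actual attention mechanisms and mutual-information objectives; all names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def two_term_loss_sketch(embeddings, cas, attention, video_labels):
    # embeddings:   (T, D) latent snippet embeddings
    # cas:          (T, C) temporal class activation logits
    # attention:    (T,)   bottom-up attention weights in [0, 1]
    # video_labels: (C,)   multi-hot video-level labels (float)
    eps = 1e-8

    # Discriminative proxy: attention-weighted pooling of class
    # activations into video-level scores, trained with the video label,
    # which sharpens foreground-background separability.
    video_scores = (attention.unsqueeze(-1) * cas).sum(0) / (attention.sum() + eps)
    disc_loss = F.binary_cross_entropy_with_logits(video_scores, video_labels)

    # Denoising proxy: push attended (foreground) and unattended
    # (background) pooled embeddings apart, a crude stand-in for
    # intra-video mutual-information maximization.
    fg = (attention.unsqueeze(-1) * embeddings).sum(0) / (attention.sum() + eps)
    bg = ((1 - attention).unsqueeze(-1) * embeddings).sum(0) / ((1 - attention).sum() + eps)
    denoise_loss = F.cosine_similarity(fg, bg, dim=0)

    return disc_loss + denoise_loss
```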
Adversarial Background-Aware Loss for Weakly-supervised Temporal Activity Localization
Temporally localizing activities within untrimmed videos has been extensively
studied in recent years. Despite recent advances, existing methods for
weakly-supervised temporal activity localization struggle to recognize when an
activity is not occurring. To address this issue, we propose a novel method
named A2CL-PT. Two triplets of the feature space are considered in our
approach: one triplet is used to learn discriminative features for each
activity class, and the other one is used to distinguish the features where no
activity occurs (i.e., background features) from activity-related features for
each video. To further improve the performance, we build our network using two
parallel branches which operate in an adversarial way: the first branch
localizes the most salient activities of a video and the second one finds other
supplementary activities from non-localized parts of the video. Extensive
experiments performed on THUMOS14 and ActivityNet datasets demonstrate that our
proposed method is effective. Specifically, the average mAP of IoU thresholds
from 0.1 to 0.9 on the THUMOS14 dataset is significantly improved from 27.9% to
30.0%.
Comment: ECCV 2020 camera ready (Supplementary material: on ECVA soon).
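The two-triplet idea can be approximated with off-the-shelf triplet losses. The sketch below is only a loose proxy: A2CL-PT itself builds its triplets around class centers and adversarial branches, whereas this uses a plain margin-based triplet loss, and every input name here is hypothetical.

```python
import torch
import torch.nn as nn

def two_triplet_sketch(anchor, positive, negative,
                       activity_feat, other_activity_feat, background_feat,
                       margin: float = 1.0) -> torch.Tensor:
    # All inputs: (N, D) feature batches; names are hypothetical.
    triplet = nn.TripletMarginLoss(margin=margin)
    # Triplet 1: learn discriminative features per activity class
    # (pull same-class features together, push other classes away).
    class_loss = triplet(anchor, positive, negative)
    # Triplet 2: separate background features from activity-related
    # features within each video.
    background_loss = triplet(activity_feat, other_activity_feat,
                              background_feat)
    return class_loss + background_loss
```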
TRIE++: Towards End-to-End Information Extraction from Visually Rich Documents
Recently, automatically extracting information from visually rich documents
(e.g., tickets and resumes) has become a hot and vital research topic due to
its widespread commercial value. Most existing methods divide this task into
two subparts: the text reading part for obtaining the plain text from the
original document images and the information extraction part for extracting key
contents. These methods mainly focus on improving the information extraction
part, neglecting that the two parts are highly correlated. This paper proposes a unified
end-to-end information extraction framework from visually rich documents, where
text reading and information extraction can reinforce each other via a
well-designed multi-modal context block. Specifically, the text reading part
provides multi-modal features like visual, textual and layout features. The
multi-modal context block is developed to fuse the generated multi-modal
features and even the prior knowledge from the pre-trained language model for
better semantic representation. The information extraction part is responsible
for generating key contents with the fused context features. The framework can
be trained in an end-to-end manner, achieving global optimization.
Moreover, we define and group visually rich documents into four categories
along two dimensions: layout and text type. For each document category, we
provide or recommend the corresponding benchmarks, experimental settings and
strong baselines to remedy the lack of a uniform evaluation standard in this
research area. Extensive experiments on four kinds of benchmarks
(from fixed layout to variable layout, from full-structured text to
semi-unstructured text) are reported, demonstrating the proposed method's
effectiveness. Data, source code, and models are available.
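To make the multi-modal context block concrete, here is a hypothetical PyTorch sketch of one way to fuse visual, textual, and layout features into a shared representation. The abstract does not specify the fusion architecture, so the projection-plus-self-attention design, the dimensions, and the assumption of aligned token sequences are all mine.

```python
import torch
import torch.nn as nn

class MultiModalContextBlock(nn.Module):
    # Hypothetical fusion of visual, textual, and layout features;
    # not the paper's architecture.
    def __init__(self, d_visual: int, d_text: int, d_layout: int,
                 d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.proj_visual = nn.Linear(d_visual, d_model)
        self.proj_text = nn.Linear(d_text, d_model)
        self.proj_layout = nn.Linear(d_layout, d_model)
        self.fuse = nn.TransformerEncoderLayer(d_model, n_heads,
                                               batch_first=True)

    def forward(self, visual, text, layout):
        # Assumes the three modalities are aligned per token:
        # each input is (batch, num_tokens, d_modality).
        tokens = (self.proj_visual(visual)
                  + self.proj_text(text)
                  + self.proj_layout(layout))
        # Self-attention lets each token gather fused context
        # from the whole sequence.
        return self.fuse(tokens)
```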
Neuron-level dynamics of oscillatory network structure and markerless tracking of kinematics during grasping
Oscillatory synchrony is proposed to play an important role in flexible sensory-motor transformations. It is assumed that changes in the oscillatory network structure at the level of single neurons enable flexible information processing, yet how this neuron-level network structure changes with different behavior remains elusive. To address this gap, we examined changes in the fronto-parietal oscillatory network structure at the neuron level while monkeys performed a flexible sensory-motor grasping task. We found that neurons formed separate subnetworks in the low-frequency and beta bands. The beta subnetwork was active during steady states and the low-frequency subnetwork during active states of the task, suggesting that the two frequencies are mutually exclusive at the neuron level. Furthermore, both frequency subnetworks reconfigured at the neuron level for different grip and context conditions, an effect that was mostly lost at any scale larger than single neurons in the network. Our results therefore suggest that the oscillatory network structure at the neuron level meets the necessary requirements for the coordination of flexible sensory-motor transformations. Additionally, since tracking hand kinematics is a crucial experimental requirement for analyzing the neuronal control of grasp movements, a 3D markerless, gloveless hand-tracking system was developed using computer vision and deep learning techniques.
A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning
Reservoir computing (RC), first applied to temporal signal processing, is a
recurrent neural network in which neurons are randomly connected. Once
initialized, the connection strengths remain unchanged. Such a simple structure
turns RC into a non-linear dynamical system that maps low-dimensional inputs
into a high-dimensional space. The model's rich dynamics, linear separability,
and memory capacity then enable a simple linear readout to generate adequate
responses for various applications. RC spans areas far beyond machine learning,
since it has been shown that the complex dynamics can be realized in various
physical hardware implementations and biological devices. This yields greater
flexibility and shorter computation time. Moreover, the neuronal responses
triggered by the model's dynamics shed light on understanding brain mechanisms
that also exploit similar dynamical processes. While the literature on RC is
vast and fragmented, here we conduct a unified review of RC's recent
developments from machine learning to physics, biology, and neuroscience. We
first review the early RC models, and then survey the state-of-the-art models
and their applications. We further introduce studies on modeling the brain's
mechanisms by RC. Finally, we offer new perspectives on RC development,
including reservoir design, the unification of coding frameworks, physical RC
implementations, and the interaction between RC, cognitive neuroscience, and
evolution.
Comment: 51 pages, 19 figures, IEEE Access.
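Since a fixed random recurrent network with a trained linear readout is the core of RC, a minimal echo-state-network sketch in NumPy may help fix ideas. The hyperparameters and the ridge-regression readout below are conventional choices, not prescriptions from the survey.

```python
import numpy as np

def echo_state_network(inputs, targets, n_reservoir=500,
                       spectral_radius=0.9, ridge=1e-6, seed=0):
    # inputs: (T, n_in) input sequence; targets: (T, n_out) desired outputs.
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    # Rescale the fixed recurrent weights to the desired spectral radius,
    # a standard way to keep the reservoir dynamics stable.
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

    # Run the reservoir: connection strengths stay unchanged; only the
    # nonlinear state evolves, mapping inputs into a high-dimensional space.
    states = np.zeros((len(inputs), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)
        states[t] = x

    # Train only the linear readout, here via ridge regression.
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_reservoir),
                            states.T @ targets)
    return states @ W_out  # readout predictions
```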