63 research outputs found
Memory-Augmented Temporal Dynamic Learning for Action Recognition
Human actions captured in video sequences contain two crucial factors for
action recognition, i.e., visual appearance and motion dynamics. To model these
two aspects, Convolutional and Recurrent Neural Networks (CNNs and RNNs) are
adopted in most existing successful methods for recognizing actions. However,
CNN-based methods are limited in modeling long-term motion dynamics. RNNs are
able to learn temporal motion dynamics but lack effective ways to tackle
unsteady dynamics in long-duration motion. In this work, we propose a
memory-augmented temporal dynamic learning network, which learns to write the
most evident information into an external memory module and to ignore
irrelevant information. In particular, we present a differential memory
controller that makes a discrete decision on whether the external memory
module should be updated with the current feature. The discrete memory
controller takes the memory history, context embedding, and current feature as
inputs and controls the information flow into the external memory module.
Additionally, we train this discrete memory controller using the
straight-through estimator. We evaluate this end-to-end system on benchmark
datasets for human action recognition (UCF101 and HMDB51). The experimental
results show consistent improvements on both datasets over prior works and our
baselines.
Comment: The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19).
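Below is a minimal PyTorch sketch of this kind of write gate: a binary decision is used in the forward pass while gradients flow through the soft gating probability, i.e., a straight-through estimator. The module and input names and the single-layer gating network are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DiscreteMemoryController(nn.Module):
    """Binary write gate over an external memory, trained with a
    straight-through estimator (illustrative sketch, not the paper's code)."""

    def __init__(self, feature_dim, memory_dim, context_dim):
        super().__init__()
        self.gate = nn.Linear(feature_dim + memory_dim + context_dim, 1)

    def forward(self, feature, memory_summary, context):
        # Score the current feature against the memory history and context.
        logits = self.gate(torch.cat([feature, memory_summary, context], dim=-1))
        prob = torch.sigmoid(logits)
        hard = (prob > 0.5).float()          # discrete 0/1 decision (forward pass)
        gate = hard + prob - prob.detach()   # gradients use the soft prob (backward)
        return gate                          # 1 -> write current feature, 0 -> skip
```

A caller would then update the memory along the lines of memory = gate * candidate_slot + (1 - gate) * memory, so only features the controller deems worth keeping are written.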
Representation Learning in Deep RL via Discrete Information Bottleneck
Several self-supervised representation learning methods have been proposed
for reinforcement learning (RL) with rich observations. For real-world
applications of RL, recovering underlying latent states is crucial,
particularly when sensory inputs contain irrelevant and exogenous information.
In this work, we study how information bottlenecks can be used to construct
latent states efficiently in the presence of task-irrelevant information. We
propose architectures that utilize variational and discrete information
bottlenecks, coined as RepDIB, to learn structured factorized representations.
Exploiting the expressiveness brought by factorized representations, we
introduce a simple yet effective bottleneck that can be integrated with any
existing self-supervised objective for RL. We demonstrate this across several
online and offline RL benchmarks, along with a real robot arm task, where we
find that compressed representations with RepDIB can lead to strong performance
improvements, as the learned bottlenecks help predict only the relevant state
while ignoring irrelevant information.
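As a rough illustration of a discrete, factorized bottleneck of the kind described above, the sketch below vector-quantizes each factor (group) of a representation against its own small codebook and passes gradients straight through. The group count, codebook size, and quantization scheme are assumptions for illustration, not RepDIB's actual design.

```python
import torch
import torch.nn as nn

class DiscreteFactorizedBottleneck(nn.Module):
    """Vector-quantized bottleneck over factorized groups of a latent vector
    (illustrative sketch only)."""

    def __init__(self, dim, num_groups=4, codebook_size=64):
        super().__init__()
        assert dim % num_groups == 0
        self.num_groups, self.group_dim = num_groups, dim // num_groups
        # One small codebook per factor of the representation.
        self.codebooks = nn.Parameter(
            torch.randn(num_groups, codebook_size, self.group_dim))

    def forward(self, z):                                # z: (batch, dim)
        b = z.size(0)
        z = z.view(b, self.num_groups, self.group_dim)   # split into factors
        # Squared distance of each factor to every code in its codebook.
        dists = ((z.unsqueeze(2) - self.codebooks.unsqueeze(0)) ** 2).sum(-1)
        idx = dists.argmin(-1)                           # discrete codes, (batch, groups)
        quantized = torch.stack(
            [self.codebooks[g, idx[:, g]] for g in range(self.num_groups)], dim=1)
        # Straight-through: discrete codes forward, gradients flow to the encoder.
        out = z + (quantized - z).detach()
        return out.view(b, -1), idx
```

A bottleneck like this can sit on top of any encoder trained with an existing self-supervised objective, with the discrete codes serving as the compressed latent state.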
A hierarchical attention network-based approach for depression detection from transcribed clinical interviews
The high prevalence of depression in society has given rise to a need for new digital tools that can aid its early detection. Among other effects, depression impacts the use of language. Seeking to exploit this, this work focuses on the detection of depressed and non-depressed individuals through the analysis of linguistic information extracted from transcripts of clinical interviews with a virtual agent. Specifically, we investigated the advantages of employing hierarchical attention-based networks for this task. Using Global Vectors (GloVe) pretrained word embedding models to extract low-level representations of the words, we compared hierarchical local-global attention networks and hierarchical contextual attention networks. We performed our experiments on the Distress Analysis Interview Corpus - Wizard of Oz (DAIC-WoZ) dataset, which contains audio, visual, and linguistic information acquired from participants during a clinical session. Our results using the DAIC-WoZ test set indicate that hierarchical contextual attention networks are the most suitable configuration to detect depression from transcripts. This configuration achieves an Unweighted Average Recall (UAR) of 0.66 on the test set, surpassing our baseline, a Recurrent Neural Network that does not use attention.
Funding by the EU projects sustAGE (826506) and RADAR-CNS (115902), the Key Program of the Natural Science Foundation of Tianjin, China (18JCZDJC36300), and BMW Group Research.
Pages 221-225
https://www.isca-speech.org/archive/Interspeech_2019/index.htm
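The sketch below illustrates the general hierarchical attention pattern the abstract refers to: recurrent encoding plus attention pooling over words within each utterance, then over utterance vectors across the interview. The GRU encoders, layer sizes, and two-class output are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Additive attention pooling over a sequence (simplified stand-in for the
    word- and sentence-level attention layers)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, x):                       # x: (batch, seq, dim)
        weights = torch.softmax(self.score(x), dim=1)
        return (weights * x).sum(dim=1)         # (batch, dim)

class HierarchicalAttentionClassifier(nn.Module):
    """Words -> utterance vectors -> interview vector -> class logits."""
    def __init__(self, embed_dim=300, hidden=64, num_classes=2):
        super().__init__()
        self.word_rnn = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.word_attn = AttentionPool(2 * hidden)
        self.sent_rnn = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.sent_attn = AttentionPool(2 * hidden)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, docs):                    # docs: (batch, utterances, words, embed_dim)
        b, s, w, e = docs.shape
        words, _ = self.word_rnn(docs.reshape(b * s, w, e))
        utt_vecs = self.word_attn(words).view(b, s, -1)
        utts, _ = self.sent_rnn(utt_vecs)
        return self.classifier(self.sent_attn(utts))   # depressed vs. non-depressed
```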
Unified Contrastive Fusion Transformer for Multimodal Human Action Recognition
Various types of sensors have been considered to develop human action
recognition (HAR) models. Robust HAR performance can be achieved by fusing
multimodal data acquired by different sensors. In this paper, we introduce a
new multimodal fusion architecture, referred to as Unified Contrastive Fusion
Transformer (UCFFormer), designed to integrate data with diverse distributions
to enhance HAR performance. Based on the embedding features extracted from each
modality, UCFFormer employs the Unified Transformer to capture the
inter-dependency among embeddings in both time and modality domains. We present
the Factorized Time-Modality Attention to perform self-attention efficiently
for the Unified Transformer. UCFFormer also incorporates contrastive learning
to reduce the discrepancy in feature distributions across various modalities,
thus generating semantically aligned features for information fusion.
Performance evaluation conducted on two popular datasets, UTD-MHAD and NTU
RGB+D, demonstrates that UCFFormer achieves state-of-the-art performance,
outperforming competing methods by considerable margins.
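A compact sketch of what a factorized time-modality attention step can look like: self-attention along the time axis within each modality, followed by self-attention across modalities at each time step, instead of joint attention over all time-modality tokens. The module below is an illustrative assumption, not the UCFFormer implementation.

```python
import torch
import torch.nn as nn

class FactorizedTimeModalityAttention(nn.Module):
    """Attention over time per modality, then over modalities per time step
    (illustrative sketch)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.modality_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                         # x: (batch, time, modalities, dim)
        b, t, m, d = x.shape
        # 1) Attention along time, independently for each modality.
        xt = x.permute(0, 2, 1, 3).reshape(b * m, t, d)
        xt, _ = self.time_attn(xt, xt, xt)
        x = xt.reshape(b, m, t, d).permute(0, 2, 1, 3)
        # 2) Attention along modalities, independently for each time step.
        xm = x.reshape(b * t, m, d)
        xm, _ = self.modality_attn(xm, xm, xm)
        return xm.reshape(b, t, m, d)
```

Factorizing the two axes reduces the attention cost from O((T*M)^2) for joint attention over all tokens to O(M*T^2 + T*M^2).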
Gesture Recognition in Robotic Surgery with Multimodal Attention
Automatically recognising surgical gestures from surgical data is an important building block of automated activity recognition and analytics, technical skill assessment, intra-operative assistance and eventually robotic automation. The complexity of articulated instrument trajectories and the inherent variability due to surgical style and patient anatomy make analysis and fine-grained segmentation of surgical motion patterns from robot kinematics alone very difficult. Surgical video provides crucial information from the surgical site, giving context for the kinematic data and the interaction between the instruments and tissue. Yet sensor fusion between the robot data and the surgical video stream is non-trivial, because the data streams differ in frequency, dimensionality and discriminative capability. In this paper, we integrate multimodal attention mechanisms in a two-stream temporal convolutional network to compute relevance scores and weight kinematic and visual feature representations dynamically in time, aiming to aid multimodal network training and achieve effective sensor fusion. We report the results of our system on the JIGSAWS benchmark dataset and on a new in vivo dataset of suturing segments from robotic prostatectomy procedures. Our results are promising: the multimodal prediction sequences have higher accuracy and better temporal structure than corresponding unimodal solutions. Visualization of the attention scores also gives physically interpretable insight into how the network weighs the strengths and weaknesses of each sensor.
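As a concrete illustration of this kind of fusion, the sketch below computes a per-time-step relevance score for each modality and uses it to weight the kinematic and visual feature streams before they are combined. Feature dimensions and the scoring network are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultimodalAttentionFusion(nn.Module):
    """Relevance-weighted fusion of per-frame kinematic and visual features
    (illustrative sketch)."""
    def __init__(self, kin_dim=76, vis_dim=128, fused_dim=128):
        super().__init__()
        self.kin_proj = nn.Linear(kin_dim, fused_dim)
        self.vis_proj = nn.Linear(vis_dim, fused_dim)
        self.relevance = nn.Linear(2 * fused_dim, 2)    # one score per modality

    def forward(self, kin, vis):                        # each: (batch, time, *_dim)
        k, v = self.kin_proj(kin), self.vis_proj(vis)
        # Per-time-step attention weights over the two modalities.
        scores = torch.softmax(self.relevance(torch.cat([k, v], dim=-1)), dim=-1)
        fused = scores[..., 0:1] * k + scores[..., 1:2] * v
        return fused                                    # feed into the temporal ConvNet
```

The learned scores can also be plotted over time to inspect which sensor the model relies on at each point in a gesture.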