EleAtt-RNN: Adding Attentiveness to Neurons in Recurrent Neural Networks
Recurrent neural networks (RNNs) are capable of modeling temporal
dependencies of complex sequential data. In general, currently available
RNN structures tend to concentrate on controlling the contributions of
current and previous information. However, the differing levels of
importance of the individual elements within an input vector are usually
ignored. We propose a simple yet effective Element-wise-Attention Gate
(EleAttG), which can be easily added to an RNN block (e.g. all RNN neurons in
an RNN layer), to empower the RNN neurons to have attentiveness capability. For
an RNN block, an EleAttG is used for adaptively modulating the input by
assigning different levels of importance, i.e., attention, to each
element/dimension of the input. We refer to an RNN block equipped with an
EleAttG as an EleAtt-RNN block. Instead of modulating the input as a whole, the
EleAttG modulates the input at fine granularity, i.e., element-wise, and the
modulation is content adaptive. The proposed EleAttG, as an additional
fundamental unit, is general and can be applied to any RNN structures, e.g.,
standard RNN, Long Short-Term Memory (LSTM), or Gated Recurrent Unit (GRU). We
demonstrate the effectiveness of the proposed EleAtt-RNN by applying it to
different tasks, including action recognition from both skeleton-based data
and RGB videos, gesture recognition, and sequential MNIST classification.
Experiments show that adding attentiveness through EleAttGs to RNN blocks
significantly improves the power of RNNs.
Comment: IEEE Transactions on Image Processing (Accept). arXiv admin note:
substantial text overlap with arXiv:1807.0444
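As a rough illustration of the idea described above, the following NumPy sketch shows an element-wise attention gate computed from the current input and previous hidden state, whose output modulates each input dimension before it enters the RNN block. The weight names, shapes, and the exact gate parameterization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def eleattg_modulate(x_t, h_prev, W_x, W_h, b):
    """Element-wise attention gate (sketch): assign an importance weight
    in (0, 1) to each dimension of the input x_t, then scale x_t with it."""
    a_t = sigmoid(W_x @ x_t + W_h @ h_prev + b)  # one attention weight per input element
    return a_t * x_t  # modulated input, fed to the RNN update (e.g. GRU/LSTM)

# toy dimensions: input size 4, hidden size 3 (illustrative)
rng = np.random.default_rng(0)
x_t = rng.standard_normal(4)
h_prev = rng.standard_normal(3)
W_x = rng.standard_normal((4, 4))
W_h = rng.standard_normal((4, 3))
b = np.zeros(4)

x_tilde = eleattg_modulate(x_t, h_prev, W_x, W_h, b)
```

Because each gate value lies strictly between 0 and 1, the modulation can only attenuate input elements, never amplify them, which is what gives the gate its "importance weighting" interpretation.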
LSTA: Long Short-Term Attention for Egocentric Action Recognition
Egocentric activity recognition is one of the most challenging tasks in video
analysis. It requires a fine-grained discrimination of small objects and their
manipulation. While some methods rely on strong supervision and attention
mechanisms, they are either costly in annotation or do not take spatio-temporal
patterns into account. In this paper we propose LSTA as a mechanism to focus on
features from spatially relevant parts while attention is tracked smoothly
across the video sequence. We demonstrate the effectiveness of LSTA on
egocentric activity recognition with an end-to-end trainable two-stream
architecture, achieving state-of-the-art performance on four standard
benchmarks.
Comment: Accepted to CVPR 201
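LSTA's actual cell couples spatial attention with LSTM-style memory; as a rough illustration of the underlying idea only, the sketch below shows a generic recurrent soft spatial-attention step in NumPy: each spatial location is scored from its feature and the previous hidden state, and a softmax over locations pools the feature map. All names and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def spatial_attention_step(feat_map, h_prev, W_h, w_f):
    """One recurrent soft spatial-attention step (sketch): score every
    spatial location, normalize to an attention map, and pool features."""
    scores = feat_map @ w_f + W_h @ h_prev  # (H*W,) per-location scores
    alpha = softmax(scores)                 # attention weights over locations
    pooled = alpha @ feat_map               # (C,) attended feature vector
    return pooled, alpha

# toy sizes: a 7x7 feature map with 16 channels, hidden size 8 (illustrative)
rng = np.random.default_rng(1)
feat_map = rng.standard_normal((49, 16))  # flattened spatial grid x channels
h_prev = rng.standard_normal(8)
W_h = rng.standard_normal((49, 8))
w_f = rng.standard_normal(16)

pooled, alpha = spatial_attention_step(feat_map, h_prev, W_h, w_f)
```

Feeding `h_prev` into the scores is what lets attention be carried from frame to frame rather than recomputed independently, loosely mirroring the "tracked smoothly across the video sequence" behavior described in the abstract.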
Deep Learning Development Environment in Virtual Reality
Virtual reality (VR) offers immersive visualization and intuitive
interaction. We leverage VR to enable any biomedical professional to deploy a
deep learning (DL) model for image classification. While DL models can be
powerful tools for data analysis, they are also challenging to understand and
develop. To make deep learning more accessible and intuitive, we have built a
virtual reality-based DL development environment. Within our environment, the
user can move tangible objects to construct a neural network only using their
hands. Our software automatically translates these configurations into a
trainable model and then reports its resulting accuracy on a test dataset in
real-time. Furthermore, we have enriched the virtual objects with
visualizations of the model's components such that users can achieve insight
about the DL models that they are developing. With this approach, we bridge the
gap between professionals in different fields of expertise while offering a
novel perspective for model analysis and data interaction. We further suggest
that deep learning development and visualization techniques can benefit
from integrating virtual reality.