Residual Attention Network for Image Classification
In this work, we propose the "Residual Attention Network", a convolutional
neural network with an attention mechanism that can be incorporated into
state-of-the-art feed-forward network architectures and trained end-to-end.
Our Residual Attention Network is built by stacking Attention Modules which
generate attention-aware features. The attention-aware features from different
modules change adaptively as the network goes deeper. Inside each Attention
Module, a bottom-up top-down feedforward structure is used to unfold the
feedforward and feedback attention process into a single feedforward process.
Importantly, we
propose attention residual learning to train very deep Residual Attention
Networks which can be easily scaled up to hundreds of layers. Extensive
analyses are conducted on the CIFAR-10 and CIFAR-100 datasets to verify the
effectiveness of every module mentioned above. Our Residual Attention Network
achieves state-of-the-art object recognition performance on three benchmark
datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and
ImageNet (4.8% single model and single crop, top-5 error). Notably, our method
achieves a 0.6% top-1 accuracy improvement with only 46% of the trunk depth and
69% of the forward FLOPs compared to ResNet-200. The experiments also
demonstrate that our network is robust against noisy labels.
Comment: accepted to CVPR 2017
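The core of the method is the attention residual formulation, in which a soft mask M(x) modulates trunk features F(x) as H(x) = (1 + M(x)) * F(x). Below is a minimal PyTorch sketch of that formulation only; the single-convolution trunk and mask branches are placeholders, not the paper's actual module designs.

```python
import torch
import torch.nn as nn

class AttentionResidual(nn.Module):
    """Attention residual learning: H(x) = (1 + M(x)) * F(x).
    The identity term keeps useful features from being zeroed out
    by the mask, which is what lets many modules stack safely."""

    def __init__(self, trunk: nn.Module, mask: nn.Module):
        super().__init__()
        self.trunk = trunk  # feed-forward feature branch F(x)
        self.mask = mask    # bottom-up top-down attention branch M(x)

    def forward(self, x):
        f = self.trunk(x)
        m = torch.sigmoid(self.mask(x))  # soft attention weights in (0, 1)
        return (1 + m) * f               # residual attention output

# Placeholder branches; both preserve the (2, 16, 32, 32) input shape.
block = AttentionResidual(
    trunk=nn.Conv2d(16, 16, 3, padding=1),
    mask=nn.Conv2d(16, 16, 3, padding=1),
)
y = block(torch.randn(2, 16, 32, 32))
```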
Planning assistance for the NASA 30/20 GHz program. Network control architecture study.
The network control architecture for a 30/20 GHz flight experiment system operating in the Time Division Multiple Access (TDMA) mode was studied. Architecture development, identification of processing functions, and performance requirements for the Master Control Station (MCS), diversity trunking stations, and Customer Premises Service (CPS) stations are covered. Preliminary hardware and software processing requirements, as well as budgetary cost estimates for the network control system, are given. For the trunking system control, areas covered include on-board SS-TDMA switch organization, frame structure, acquisition and synchronization, channel assignment, fade detection and adaptive power control, on-board oscillator control, and terrestrial network timing. For the CPS control, they include on-board processing and adaptive forward error correction control.
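To make the channel-assignment function concrete, here is a hypothetical sketch of the burst-time-plan bookkeeping a master control station might perform in a TDMA system. The slot count, station names, and greedy policy are illustrative assumptions, not details from the study.

```python
from dataclasses import dataclass, field

@dataclass
class TdmaFrame:
    """Hypothetical TDMA frame assignment table: maps slot index -> station."""
    slots_per_frame: int = 64
    assignments: dict = field(default_factory=dict)

    def assign(self, station: str, demand_slots: int) -> list:
        """Greedily grant the first free slots to a requesting station."""
        free = [s for s in range(self.slots_per_frame) if s not in self.assignments]
        if len(free) < demand_slots:
            raise ValueError(f"only {len(free)} slots free, {demand_slots} requested")
        granted = free[:demand_slots]
        for s in granted:
            self.assignments[s] = station
        return granted

frame = TdmaFrame()
print(frame.assign("trunk-station-A", 4))  # [0, 1, 2, 3]
print(frame.assign("cps-station-B", 2))    # [4, 5]
```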
VoxCeleb2: Deep Speaker Recognition
The objective of this paper is speaker recognition under noisy and
unconstrained conditions.
We make two key contributions. First, we introduce a very large-scale
audio-visual speaker recognition dataset collected from open-source media.
Using a fully automated pipeline, we curate VoxCeleb2 which contains over a
million utterances from over 6,000 speakers. This is several times larger than
any publicly available speaker recognition dataset.
Second, we develop and compare Convolutional Neural Network (CNN) models and
training strategies that can effectively recognise identities from voice under
various conditions. The models trained on the VoxCeleb2 dataset surpass the
performance of previous works on a benchmark dataset by a significant margin.
Comment: To appear in Interspeech 2018. The audio-visual dataset can be
downloaded from http://www.robots.ox.ac.uk/~vgg/data/voxceleb2 .
1806.05622v2: minor fixes; 5 pages
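As a sketch of the verification setup such CNN models support, the toy network below maps an utterance spectrogram to a fixed-length, L2-normalized embedding, so two utterances can be compared by cosine similarity. It is a placeholder, not the paper's ResNet-based architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerCNN(nn.Module):
    """Toy speaker-embedding CNN over a (1, n_mels, frames) spectrogram."""

    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(64, emb_dim)

    def forward(self, spec):                      # (batch, 1, n_mels, frames)
        h = self.conv(spec).mean(dim=(2, 3))      # average-pool over freq and time
        return F.normalize(self.proj(h), dim=1)   # unit-length speaker embedding

model = SpeakerCNN()
a = model(torch.randn(1, 1, 40, 300))  # utterance A
b = model(torch.randn(1, 1, 40, 300))  # utterance B
score = (a * b).sum()  # cosine similarity; same speaker if above a threshold
```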
Advanced LSTM: A Study about Better Time Dependency Modeling in Emotion Recognition
Long short-term memory (LSTM) is normally used as the basic recurrent unit in
recurrent neural networks (RNNs). However, the conventional LSTM assumes that
the state at the current time step depends only on the previous time step. This
assumption constrains its time dependency modeling capability. In this study,
we propose a new variation of LSTM, advanced LSTM (A-LSTM), for better temporal
context modeling. We employ A-LSTM in a weighted pooling RNN for emotion
recognition. The A-LSTM outperforms the conventional LSTM by a relative 5.5%.
The A-LSTM based weighted pooling RNN can also complement the state-of-the-art
emotion classification framework. This shows the advantage of A-LSTM.
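A minimal sketch of the A-LSTM idea as the abstract describes it: let the current step condition on a learned combination of several previous states rather than only the immediately preceding one. The window size K and the attention scoring below are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ALSTMSketch(nn.Module):
    """Attends over the last K (h, c) states and feeds their weighted
    combination into an ordinary LSTM cell at each step."""

    def __init__(self, input_size: int, hidden_size: int, k: int = 3):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.score = nn.Linear(hidden_size, 1)  # scores each past hidden state
        self.k = k

    def forward(self, x):  # x: (batch, time, input_size)
        b, t, _ = x.shape
        h = x.new_zeros(b, self.cell.hidden_size)
        c = x.new_zeros(b, self.cell.hidden_size)
        hs, cs = [h], [c]
        for step in range(t):
            past_h = torch.stack(hs[-self.k:], dim=1)   # (batch, <=K, hidden)
            past_c = torch.stack(cs[-self.k:], dim=1)
            w = F.softmax(self.score(past_h), dim=1)    # attention over past states
            h, c = self.cell(x[:, step],
                             ((w * past_h).sum(dim=1), (w * past_c).sum(dim=1)))
            hs.append(h)
            cs.append(c)
        return torch.stack(hs[1:], dim=1)  # (batch, time, hidden)

out = ALSTMSketch(10, 20)(torch.randn(4, 15, 10))  # -> (4, 15, 20)
```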
Learning Bodily and Temporal Attention in Protective Movement Behavior Detection
For people with chronic pain, the assessment of protective behavior during
physical functioning is essential to understand their subjective pain-related
experiences (e.g., fear and anxiety toward pain and injury) and how they deal
with such experiences (avoidance or reliance on specific body joints), with the
ultimate goal of guiding intervention. Advances in deep learning (DL) can
enable the development of such interventions. Using the EmoPain MoCap dataset,
we investigate how attention-based DL architectures can be used to improve the
detection of protective behavior by capturing the most informative temporal and
body configurational cues characterizing specific movements and the strategies
used to perform them. We propose an end-to-end deep learning architecture named
BodyAttentionNet (BANet). BANet is designed to learn the time steps and body
parts that are most informative for detecting protective behavior. The approach
addresses the variety of ways people execute a movement (including healthy
people) independently of the type of movement analyzed. Through extensive
comparison experiments with other state-of-the-art machine learning techniques
used with motion capture data, we show statistically significant improvements
achieved by using these attention mechanisms. In addition, the BANet
architecture requires far fewer parameters than the state of the art for
comparable, if not higher, performance.
Comment: 7 pages, 3 figures, 2 tables, code available, accepted at ACII 2019
