A Fully Attention-Based Information Retriever
Recurrent neural networks are now the state-of-the-art in natural language
processing because they can build rich contextual representations and process
texts of arbitrary length. However, recent developments in attention mechanisms
have equipped feedforward networks with similar capabilities, enabling
faster computation because more of the operations can be parallelized. We
explore this new type of architecture in the domain of
question-answering and propose a novel approach that we call Fully Attention
Based Information Retriever (FABIR). We show that FABIR achieves competitive
results in the Stanford Question Answering Dataset (SQuAD) while having fewer
parameters and being faster at both learning and inference than rival methods.
Comment: Accepted for presentation at the International Joint Conference on
Neural Networks (IJCNN) 201
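The core idea — replacing recurrence with attention so the whole sequence is processed in parallel — can be sketched as scaled dot-product self-attention. This is a minimal illustration, not FABIR's exact architecture, whose attention variant the abstract does not specify:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention over a whole sequence at once.

    Unlike a recurrent network, every position attends to every other
    position via a single matrix multiply, so the work over the sequence
    dimension is fully parallelizable.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq, seq) similarity matrix
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted sum of value vectors

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))  # toy sequence: 5 tokens, dim 8
out = attention(x, x, x)         # self-attention
print(out.shape)                 # (5, 8)
```

Because `scores` is computed for all positions at once, the context a token sees is not limited by a hidden state carried step by step, which is what lets feedforward attention match recurrent models' contextual representations.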
Non-local Neural Networks
Both convolutional and recurrent operations are building blocks that process
one local neighborhood at a time. In this paper, we present non-local
operations as a generic family of building blocks for capturing long-range
dependencies. Inspired by the classical non-local means method in computer
vision, our non-local operation computes the response at a position as a
weighted sum of the features at all positions. This building block can be
plugged into many computer vision architectures. On the task of video
classification, even without any bells and whistles, our non-local models can
compete with or outperform current competition winners on both the Kinetics and
Charades
datasets. In static image recognition, our non-local models improve object
detection/segmentation and pose estimation on the COCO suite of tasks. Code is
available at https://github.com/facebookresearch/video-nonlocal-net
Comment: CVPR 2018
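The operation the abstract defines — the response at a position as a weighted sum of the features at all positions — can be sketched with an embedded-Gaussian weighting on a flattened feature map. This is a simplified sketch: the paper's full block also includes an output projection and a residual connection, which are omitted here:

```python
import numpy as np

def nonlocal_op(x, W_theta, W_phi, W_g):
    """Non-local operation on a flattened feature map.

    x: (N, C) array, one C-dim feature vector per spatial position.
    Each output position is a normalized weighted sum over *all*
    positions, capturing long-range dependencies in a single step
    rather than one local neighborhood at a time.
    """
    theta, phi, g = x @ W_theta, x @ W_phi, x @ W_g
    f = theta @ phi.T                         # (N, N) pairwise affinities
    f = np.exp(f - f.max(axis=1, keepdims=True))
    f /= f.sum(axis=1, keepdims=True)         # normalizer C(x) = sum_j f
    return f @ g                              # y_i = sum_j f(x_i, x_j) g(x_j)

rng = np.random.default_rng(1)
N, C, Ci = 16, 32, 16                         # 16 positions, 32 -> 16 channels
x = rng.standard_normal((N, C))
y = nonlocal_op(x, *(rng.standard_normal((C, Ci)) for _ in range(3)))
print(y.shape)                                # (16, 16)
```

The (N, N) affinity matrix is what distinguishes this from convolution: a 3x3 convolution mixes only a fixed local window, whereas here every pair of positions interacts, at the cost of quadratic memory in the number of positions.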
Distance Guided Channel Weighting for Semantic Segmentation
Recent works have achieved great success in improving the performance of
multiple computer vision tasks by capturing features with a high channel number
utilizing deep neural networks. However, many channels of extracted features
are not discriminative and contain much redundant information. In this
paper, we address this issue by introducing the Distance Guided Channel
Weighting (DGCW) module. The DGCW module performs pixel-wise context
extraction, enhancing the discriminativeness of features by
weighting the channels of each pixel's feature vector when modeling its
relationship with other pixels. It makes full use of the highly
discriminative information while ignoring the low-discriminative information
contained in the feature maps, and also captures long-range
dependencies. Furthermore, by
incorporating the DGCW module with a baseline segmentation network, we propose
the Distance Guided Channel Weighting Network (DGCWNet). We conduct extensive
experiments to demonstrate the effectiveness of DGCWNet. In particular, it
achieves 81.6% mIoU on Cityscapes with only fine annotated data for training,
and also performs well on two other semantic segmentation
datasets, Pascal Context and ADE20K. Code will be available soon at
https://github.com/LanyunZhu/DGCWNet
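The abstract does not specify how the DGCW module computes its channel weights, so the sketch below is only an illustration of the general idea of per-pixel channel weighting before pairwise relation modeling; the weighting function (a softmax over channel magnitudes) is an assumed stand-in, not the paper's method:

```python
import numpy as np

def channel_weighted_relations(x):
    """Illustrative per-pixel channel weighting (NOT the paper's DGCW).

    x: (N, C) features, one vector per pixel. Each pixel gets its own
    channel-weight vector (here, an assumed softmax over channel
    magnitudes) that rescales its channels before pairwise comparison,
    so less discriminative channels contribute less to the affinities.
    """
    mag = np.abs(x)
    w = np.exp(mag - mag.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # (N, C), each row sums to 1
    xw = x * w                          # down-weight flat channels
    return xw @ x.T                     # (N, N) pairwise relation map

rng = np.random.default_rng(2)
x = rng.standard_normal((10, 4))        # toy map: 10 pixels, 4 channels
rel = channel_weighted_relations(x)
print(rel.shape)                        # (10, 10)
```

The point of weighting per pixel rather than globally is that which channels are informative can vary across the image; the resulting (N, N) relation map is also how such a module captures long-range dependencies, as in the non-local operation above.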