Optimizing Memory Efficiency for Convolution Kernels on Kepler GPUs
Convolution is a fundamental operation in many applications, such as computer
vision, natural language processing, image processing, etc. Recent successes of
convolutional neural networks in various deep learning applications put even
higher demand on fast convolution. The high computation throughput and memory
bandwidth of graphics processing units (GPUs) make GPUs a natural choice for
accelerating convolution operations. However, maximally exploiting the
available memory bandwidth of GPUs for convolution is a challenging task. This
paper introduces a general model to address the mismatch between the memory
bank width of GPUs and computation data width of threads. Based on this model,
we develop two convolution kernels, one for the general case and the other for
a special case with one input channel. By carefully optimizing memory access
patterns and computation patterns, we design a communication-optimized kernel
for the special case and a communication-reduced kernel for the general case.
Experimental data based on implementations on Kepler GPUs show that our kernels
achieve a 5.16x speedup and a 35.5% average performance improvement over the
latest cuDNN library for the special case and the general case, respectively.
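The memory-access redundancy this abstract targets can be seen in a minimal direct convolution. The sketch below is an illustrative NumPy version, not the paper's CUDA kernels: every output element re-reads a kh-by-kw window of the input, which is the redundant traffic that GPU kernels reduce by staging tiles in fast on-chip memory.

```python
import numpy as np

def direct_conv2d(image, kernel):
    """Naive 'valid' 2D convolution (cross-correlation form).

    Each output element reads kh*kw input elements, so neighboring
    outputs fetch mostly the same input values -- the redundancy that
    optimized GPU kernels avoid by reusing data in shared memory.
    """
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)
k = np.ones((2, 2))
print(direct_conv2d(img, k)[0, 0])  # 10.0 (= 0 + 1 + 4 + 5)
```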
EAVL: Explicitly Align Vision and Language for Referring Image Segmentation
Referring image segmentation aims to segment an object mentioned in natural
language from an image. A main challenge is language-related localization,
which means locating the object with the relevant language. Previous approaches
mainly focus on the fusion of vision and language features without fully
addressing language-related localization. In previous approaches, fused
vision-language features are directly fed into a decoder and pass through a
convolution with a fixed kernel to obtain the result, which follows a similar
pattern as traditional image segmentation. This approach does not explicitly
align language and vision features in the segmentation stage, resulting in a
suboptimal language-related localization. Different from previous methods, we
propose Explicitly Align the Vision and Language for Referring Image
Segmentation (EAVL). Instead of using a fixed convolution kernel, we propose an
Aligner which explicitly aligns the vision and language features in the
segmentation stage. Specifically, a series of unfixed convolution kernels is
generated based on the input language expression and then used to explicitly
align the vision and language features. To achieve this, we generate multiple queries that
represent different emphases of the language expression. These queries are
transformed into a series of query-based convolution kernels. Then, we utilize
these kernels to do convolutions in the segmentation stage and obtain a series
of segmentation masks. The final result is obtained through the aggregation of
all masks. Our method can not only fuse vision and language features
effectively but also exploit their potential in the segmentation stage. And
most importantly, we explicitly align language features of different emphases
with the image features to achieve language-related localization. Our method
surpasses previous state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref by
large margins. Comment: 10 pages, 4 figures. arXiv admin note: text overlap with
arXiv:2305.1496
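The query-based kernel idea can be sketched as follows. This is a hypothetical NumPy illustration, not the authors' implementation: the query vectors, the projection matrix, and all dimensions are made up for the example. Each language query is linearly projected into a 1x1 convolution kernel over the channel dimension, applied to the vision feature map to produce one mask per query, and the masks are aggregated by averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

def query_masks(vision, queries, proj):
    """vision: (C, H, W) feature map; queries: (Q, D) language queries.

    Each query is mapped to a 1x1 kernel over the C channels via a
    linear projection, yielding one segmentation mask per query; the
    final mask aggregates all query-specific masks.
    """
    kernels = queries @ proj  # (Q, C): one 1x1 kernel per query
    # A 1x1 convolution is a per-pixel dot product over channels.
    masks = np.einsum('qc,chw->qhw', kernels, vision)  # (Q, H, W)
    return masks.mean(axis=0)  # aggregate into the final mask

C, H, W, Q, D = 8, 4, 4, 3, 5  # hypothetical sizes
vision = rng.standard_normal((C, H, W))
queries = rng.standard_normal((Q, D))
proj = rng.standard_normal((D, C))
mask = query_masks(vision, queries, proj)
print(mask.shape)  # (4, 4)
```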
Structured Lexical Similarity via Convolution Kernels on Dependency Trees
A central topic in natural language processing is the design of lexical and syntactic features suitable for the target application. In this paper, we study convolution dependency tree kernels for the automatic engineering of syntactic and semantic patterns exploiting lexical similarities. We define efficient and powerful kernels for measuring the similarity between dependency structures whose surface forms of the lexical nodes are partly or completely different. Experiments with such kernels on question classification show unprecedented results, e.g., 41% error reduction over the former state of the art. Additionally, semantic role classification confirms the benefit of semantic smoothing for dependency kernels.
Towards Language-guided Visual Recognition via Dynamic Convolutions
In this paper, we are committed to establishing a unified and end-to-end
multi-modal network by exploring language-guided visual recognition. To
approach this target, we first propose a novel multi-modal convolution module
called Language-dependent Convolution (LaConv). Its convolution kernels are
dynamically generated based on natural language information, which can help
extract differentiated visual features for different multi-modal examples.
Based on the LaConv module, we further build the first fully language-driven
convolution network, termed as LaConvNet, which can unify the visual
recognition and multi-modal reasoning in one forward structure. To validate
LaConv and LaConvNet, we conduct extensive experiments on four benchmark
datasets of two vision-and-language tasks, i.e., visual question answering
(VQA) and referring expression comprehension (REC). The experimental results
not only show the performance gains of LaConv over existing multi-modal
modules, but also demonstrate the merits of LaConvNet as a unified network,
including a compact design, high generalization ability, and excellent
performance, e.g., +4.7% on RefCOCO+
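A language-dependent convolution of this kind can be sketched minimally. The sketch below is an assumption-laden NumPy toy, not the LaConv module: a single k-by-k spatial kernel is generated from a language embedding by a hypothetical linear layer `gen_w`, then applied to a single-channel feature map.

```python
import numpy as np

rng = np.random.default_rng(1)

def language_dependent_conv(feature, lang_emb, gen_w, k=3):
    """Language-dependent convolution sketch (single channel).

    A k*k kernel is generated from the language embedding by a
    linear layer (gen_w), so different expressions yield different
    kernels and thus differentiated visual features.
    """
    kernel = (lang_emb @ gen_w).reshape(k, k)  # dynamically generated kernel
    h, w = feature.shape
    out = np.zeros((h - k + 1, w - k + 1))  # 'valid' convolution
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(feature[y:y + k, x:x + k] * kernel)
    return out

D = 6  # hypothetical language-embedding size
feature = rng.standard_normal((5, 5))
lang_emb = rng.standard_normal(D)
gen_w = rng.standard_normal((D, 9))  # maps embedding -> 3x3 kernel weights
print(language_dependent_conv(feature, lang_emb, gen_w).shape)  # (3, 3)
```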
Identifying high-impact sub-structures for convolution kernels in document-level sentiment classification
Convolution kernels support the modeling of complex syntactic information in machine-learning tasks. However, such models are highly sensitive to the type and size of the syntactic structures used. It is therefore an important challenge to automatically identify high-impact sub-structures relevant to a given task. In this paper, we present a systematic study investigating (combinations of) sequence and convolution kernels using different types of sub-structures in document-level sentiment classification. We show that minimal sub-structures extracted from constituency and dependency trees guided by a polarity lexicon yield a 1.45-point absolute improvement in accuracy over a bag-of-words classifier on a widely used sentiment corpus.
PCNNA: A Photonic Convolutional Neural Network Accelerator
Convolutional Neural Networks (CNN) have been the centerpiece of many
applications including but not limited to computer vision, speech processing,
and Natural Language Processing (NLP). However, the computationally expensive
convolution operations impose many challenges to the performance and
scalability of CNNs. In parallel, photonic systems, which are traditionally
employed for data communication, have enjoyed recent popularity for data
processing due to their high bandwidth, low power consumption, and
reconfigurability. Here we propose a Photonic Convolutional Neural Network
Accelerator (PCNNA) as a proof of concept design to speedup the convolution
operation for CNNs. Our design is based on the recently introduced silicon
photonic microring weight banks, which use broadcast-and-weight protocol to
perform Multiply And Accumulate (MAC) operation and move data through layers of
a neural network. Here, we aim to exploit the synergy between the inherent
parallelism of photonics in the form of Wavelength Division Multiplexing (WDM)
and sparsity of connections between input feature maps and kernels in CNNs.
While our full system design offers up to three orders of magnitude speedup in
execution time, its optical core potentially offers more than five orders of
magnitude speedup compared to state-of-the-art electronic
counterparts. Comment: 5 Pages, 6 Figures, IEEE SOCC 201
Distributed Tree Kernels
In this paper, we propose the distributed tree kernels (DTK) as a novel
method to reduce time and space complexity of tree kernels. Using a linear
complexity algorithm to compute vectors for trees, we embed feature spaces of
tree fragments in low-dimensional spaces where the kernel computation is
directly done with dot product. We show that DTKs are faster, correlate with
tree kernels, and obtain a statistically similar performance in two natural
language processing tasks. Comment: ICML201
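The core DTK idea of embedding trees into low-dimensional vectors so that the kernel becomes a dot product can be sketched as follows. This is a loose illustration under simplifying assumptions, not the paper's DTK construction: each node label gets a fixed random unit vector, and a node is bound to its children by a shuffled circular convolution (the fixed permutations break commutativity, so child order matters). Similarity between trees is then just a dot product of their vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
DIM = 1024           # embedding dimension (hypothetical)
_vecs = {}           # cache: one fixed random vector per label
_p1 = rng.permutation(DIM)
_p2 = rng.permutation(DIM)

def label_vec(label):
    """Fixed random unit vector for a node label."""
    if label not in _vecs:
        v = rng.standard_normal(DIM)
        _vecs[label] = v / np.linalg.norm(v)
    return _vecs[label]

def compose(a, b):
    """Shuffled circular convolution via FFT: the permutations make
    the operation order-sensitive, so distinct structures map to
    nearly orthogonal vectors."""
    return np.real(np.fft.ifft(np.fft.fft(a[_p1]) * np.fft.fft(b[_p2])))

def tree_vec(tree):
    """tree: (label, [children]). Embed a tree by binding the root
    label vector with the embeddings of its subtrees in order."""
    label, children = tree
    v = label_vec(label)
    for c in children:
        v = compose(v, tree_vec(c))
    return v

t1 = ('S', [('NP', []), ('VP', [])])
t2 = ('S', [('NP', []), ('VP', [])])   # identical to t1
t3 = ('S', [('VP', []), ('NP', [])])   # children swapped
print(np.allclose(tree_vec(t1), tree_vec(t2)))  # True
```

Identical trees map to identical vectors, so their dot product is maximal, while the swapped-child tree yields a near-orthogonal vector; the kernel computation reduces to dot products in the low-dimensional space.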