What-and-Where to Match: Deep Spatially Multiplicative Integration Networks for Person Re-identification
Matching pedestrians across disjoint camera views, known as person
re-identification (re-id), is a challenging problem that is of importance to
visual recognition and surveillance. Most existing methods exploit local
regions through spatial manipulation and perform matching based on local
correspondence. However, they essentially extract \emph{fixed} representations
from pre-divided regions of each image and subsequently perform matching on
those representations. Models in this pipeline therefore fail to capture the
finer local patterns that are crucial for distinguishing positive pairs from
negative ones, and consequently underperform. In this paper, we
propose a novel deep multiplicative integration gating function, which answers
the question of \emph{what-and-where to match} for effective person re-id. To
address \emph{what} to match, our deep network emphasizes common local patterns
by learning joint representations in a multiplicative way. The network
comprises two Convolutional Neural Networks (CNNs) to extract convolutional
activations and generates relevant descriptors for pedestrian matching,
leading to flexible representations for image pairs. To address
\emph{where} to match, we combat the spatial misalignment by performing
spatially recurrent pooling via a four-directional recurrent neural network to
impose spatial dependency over all positions with respect to the entire image.
The proposed network is designed to be end-to-end trainable to characterize
local pairwise feature interactions in a spatially aligned manner. To
demonstrate the superiority of our method, extensive experiments are conducted
over three benchmark data sets: VIPeR, CUHK03, and Market-1501.
Comment: Published in Pattern Recognition, Elsevier
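As a rough illustration of the "what-and-where" idea, the following PyTorch-style sketch gates the feature maps of two shared-weight CNN branches by elementwise multiplication and pools them with horizontal and vertical recurrences standing in for the four-directional RNN. All layer sizes and the two-direction simplification are assumptions, not the published architecture.

```python
# Illustrative sketch only: layer sizes, the shared backbone, and the
# simplified spatial RNN are assumptions, not the authors' exact model.
import torch
import torch.nn as nn


class MultiplicativePairNet(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Shared CNN trunk applied to both images of a pair ("what" to match).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Horizontal and vertical GRUs approximate the four-directional
        # recurrent pooling that imposes spatial dependency ("where" to match).
        self.row_rnn = nn.GRU(feat_dim, feat_dim, batch_first=True, bidirectional=True)
        self.col_rnn = nn.GRU(feat_dim, feat_dim, batch_first=True, bidirectional=True)
        self.score = nn.Linear(4 * feat_dim, 1)

    def forward(self, img_a, img_b):
        fa, fb = self.backbone(img_a), self.backbone(img_b)   # (B, C, H, W)
        joint = fa * fb                                        # multiplicative integration gate
        b, c, h, w = joint.shape
        # Left-right / right-left sweep over rows.
        rows = joint.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_rnn(rows)
        rows = rows.reshape(b, h, w, -1).mean(dim=(1, 2))
        # Top-down / bottom-up sweep over columns.
        cols = joint.permute(0, 3, 2, 1).reshape(b * w, h, c)
        cols, _ = self.col_rnn(cols)
        cols = cols.reshape(b, w, h, -1).mean(dim=(1, 2))
        return self.score(torch.cat([rows, cols], dim=1))      # matching score for the pair
```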
Deep Learning for Single Image Super-Resolution: A Brief Review
Single image super-resolution (SISR) is a notoriously challenging ill-posed
problem, which aims to obtain a high-resolution (HR) output from one of its
low-resolution (LR) versions. To solve the SISR problem, powerful deep
learning algorithms have recently been employed and have achieved
state-of-the-art performance. In this survey, we review representative deep learning-based SISR
methods, and group them into two categories according to their major
contributions to two essential aspects of SISR: the exploration of efficient
neural network architectures for SISR, and the development of effective
optimization objectives for deep SISR learning. For each category, a baseline
is first established and several critical limitations of that baseline are
summarized. Representative works that overcome these limitations are then
presented, drawing on their original content as well as our own critical
analysis, and relevant comparisons are made from a variety of perspectives.
Finally, we conclude this review with key open challenges and future trends
in deep-learning-based SISR.
Comment: Accepted by IEEE Transactions on Multimedia (TMM)
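For orientation, the sketch below shows the kind of SRCNN-style baseline such reviews typically start from: three convolutional layers refining a bicubically upsampled LR image under a pixel-wise MSE objective. The kernel sizes and widths follow the classic SRCNN configuration; this is an illustrative sketch, not a specific method from the survey.

```python
# Minimal SRCNN-style SISR baseline (illustrative; widths/kernels assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SRCNNBaseline(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)   # patch extraction
        self.map = nn.Conv2d(64, 32, kernel_size=1)                        # nonlinear mapping
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, lr, scale=2):
        # Upsample the LR input to the target size, then refine it.
        x = F.interpolate(lr, scale_factor=scale, mode="bicubic", align_corners=False)
        x = torch.relu(self.extract(x))
        x = torch.relu(self.map(x))
        return self.reconstruct(x)


def train_step(model, optimizer, lr_batch, hr_batch):
    # Standard pixel-wise MSE (L2) reconstruction objective.
    optimizer.zero_grad()
    loss = F.mse_loss(model(lr_batch), hr_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```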
Learning long-range spatial dependencies with horizontal gated-recurrent units
Progress in deep learning has spawned great successes in many engineering
applications. As a prime example, convolutional neural networks, a type of
feedforward neural network, are now approaching -- and sometimes even
surpassing -- human accuracy on a variety of visual recognition tasks. Here,
however, we show that these neural networks and their recent extensions
struggle in recognition tasks where co-dependent visual features must be
detected over long spatial ranges. We introduce the horizontal gated-recurrent
unit (hGRU) to learn intrinsic horizontal connections -- both within and across
feature columns. We demonstrate that a single hGRU layer matches or outperforms
all tested feedforward hierarchical baselines including state-of-the-art
architectures which have orders of magnitude more free parameters. We further
discuss the biological plausibility of the hGRU in comparison to anatomical
data from the visual cortex as well as human behavioral data on a classic
contour detection task.
Comment: Published at NeurIPS 2018,
https://papers.nips.cc/paper/7300-learning-long-range-spatial-dependencies-with-horizontal-gated-recurrent-unit
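To make the lateral-interaction mechanism concrete, the sketch below uses a simplified convolutional GRU iterated over a single feature map as a stand-in for the hGRU. The real unit adds separate inhibitory and excitatory stages; kernel size, channel count, and timestep count here are assumptions.

```python
# Simplified convolutional-GRU stand-in for the hGRU's lateral interactions
# (illustrative only; not the published two-stage hGRU).
import torch
import torch.nn as nn


class ConvGRULateral(nn.Module):
    def __init__(self, channels=32, kernel_size=7, timesteps=8):
        super().__init__()
        pad = kernel_size // 2
        self.timesteps = timesteps
        # Gates see both the feedforward drive x and the lateral state h.
        self.gate = nn.Conv2d(2 * channels, 2 * channels, kernel_size, padding=pad)
        self.cand = nn.Conv2d(2 * channels, channels, kernel_size, padding=pad)

    def forward(self, x):
        h = torch.zeros_like(x)
        for _ in range(self.timesteps):
            zr = torch.sigmoid(self.gate(torch.cat([x, h], dim=1)))
            z, r = zr.chunk(2, dim=1)                        # update / reset gates
            h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
            h = (1 - z) * h + z * h_tilde                    # gated lateral update
        return h
```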
Cross-lingual alignments of ELMo contextual embeddings
Building machine learning prediction models for a specific NLP task requires
sufficient training data, which can be difficult to obtain for less-resourced
languages. Cross-lingual embeddings map word embeddings from a less-resourced
language to a resource-rich language so that a prediction model trained on data
from the resource-rich language can also be used in the less-resourced
language. To produce cross-lingual mappings of recent contextual embeddings,
anchor points between the embedding spaces have to be words in the same
context. We address this issue with a novel method for creating cross-lingual
contextual alignment datasets. Based on that, we propose several cross-lingual
mapping methods for ELMo embeddings. The proposed linear mapping methods use
existing Vecmap and MUSE alignments on contextual ELMo embeddings. Novel
nonlinear ELMoGAN mapping methods are based on GANs and do not assume
isomorphic embedding spaces. We evaluate the proposed mapping methods on nine
languages, using four downstream tasks: named entity recognition (NER),
dependency parsing (DP), terminology alignment, and sentiment analysis. The
ELMoGAN methods perform very well on the NER and terminology alignment tasks,
with a lower cross-lingual loss for NER than direct training on some
languages. In DP and sentiment analysis, linear contextual alignment variants
are more successful.
Comment: 30 pages, 5 figures
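As an illustration of the linear-alignment family mentioned above, the sketch below solves an orthogonal Procrustes problem on matched anchor embeddings and applies the resulting map to source-language ELMo vectors. The array names and the anchor-extraction step are assumptions; the paper's Vecmap/MUSE and ELMoGAN variants go beyond this basic recipe.

```python
# Minimal orthogonal (Procrustes) alignment sketch; not the paper's exact method.
import numpy as np


def fit_orthogonal_map(src_anchors: np.ndarray, tgt_anchors: np.ndarray) -> np.ndarray:
    """src_anchors, tgt_anchors: (n_pairs, dim) matched contextual embeddings."""
    # Procrustes solution: W = U V^T where U S V^T = SVD(tgt^T src).
    u, _, vt = np.linalg.svd(tgt_anchors.T @ src_anchors)
    return u @ vt


def map_embeddings(src_vectors: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Project source-language vectors into the target-language space."""
    return src_vectors @ w.T


# Usage (hypothetical arrays):
#   w = fit_orthogonal_map(src_anchor_embs, tgt_anchor_embs)
#   aligned = map_embeddings(src_elmo_vectors, w)
```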