Improving Person Re-identification by Attribute and Identity Learning
Person re-identification (re-ID) and attribute recognition share a common
target: learning pedestrian descriptions. They differ mainly in granularity.
Most existing re-ID methods consider only pedestrian identity labels. However,
we find that attributes, which provide detailed local descriptions, help the
re-ID model learn more discriminative feature representations. In this paper,
exploiting the complementarity of attribute labels and ID labels, we propose
the attribute-person recognition (APR) network, a multi-task network that
learns a re-ID embedding and simultaneously predicts pedestrian attributes. We
manually annotate attribute labels for two large-scale re-ID datasets, and
systematically investigate how person re-ID and attribute recognition benefit
from each other. In addition, we re-weight the attribute predictions
considering the dependencies and correlations among the attributes. The
experimental results on two large-scale re-ID benchmarks demonstrate that by
learning a more discriminative representation, APR achieves competitive re-ID
performance compared with the state-of-the-art methods. We use APR to speed up
the retrieval process by ten times with a minor accuracy drop of 2.92% on
Market-1501. We also apply APR to the attribute recognition task and
demonstrate improvements over the baselines.
Comment: Accepted to Pattern Recognition (PR)
Dual Long Short-Term Memory Networks for Sub-Character Representation Learning
Characters have commonly been regarded as the minimal processing unit in
Natural Language Processing (NLP). However, many non-Latin languages use
logographic writing systems whose inventories run to thousands of characters,
and each character is composed of even smaller parts that previous work has
often ignored. In this paper, we propose a novel architecture that employs two
stacked Long Short-Term Memory networks (LSTMs) to learn sub-character
representations and capture deeper levels of semantic meaning. As a concrete
case study, we take Chinese Word Segmentation: Chinese is a typical case in
which every character contains several components called radicals. Our
networks employ a
shared radical-level embedding to solve both Simplified and Traditional
Chinese Word Segmentation without an extra Traditional-to-Simplified
conversion step; this end-to-end design significantly simplifies segmentation
compared with previous work. Radical-level embeddings also capture semantic
meaning below the character level and improve learning performance. Tying the
radical and character embeddings together reduces the parameter count while
sharing and transferring semantic knowledge between the two levels,
substantially boosting performance. On 3 out of 4 Bakeoff 2005 datasets, our
method surpasses state-of-the-art results by up to 0.4%. Our results are
reproducible; source code and corpora are available on GitHub.
Comment: Accepted & forthcoming at ITNG-201
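
Below is a minimal PyTorch sketch of the dual-LSTM idea, assuming each character arrives as a fixed-length sequence of radical indices; the class name, dimensions, and the BMES tagging head are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class DualLSTMSegmenter(nn.Module):
    """Two stacked LSTMs: one composes radicals into a character vector,
    the other contextualises characters across the sentence for CWS."""
    def __init__(self, num_radicals: int, dim: int = 64, num_tags: int = 4):
        super().__init__()
        # One radical embedding table serves both Simplified and Traditional
        # characters; character vectors are composed from these embeddings,
        # so parameters are shared (tied) across the two levels.
        self.radical_emb = nn.Embedding(num_radicals, dim)
        self.radical_lstm = nn.LSTM(dim, dim, batch_first=True)  # within a character
        self.char_lstm = nn.LSTM(dim, dim, batch_first=True)     # across the sentence
        self.tagger = nn.Linear(dim, num_tags)  # BMES tags for segmentation

    def forward(self, radical_ids):
        # radical_ids: (batch, num_chars, radicals_per_char) integer tensor
        b, n, r = radical_ids.shape
        rads = self.radical_emb(radical_ids.view(b * n, r))
        _, (h, _) = self.radical_lstm(rads)   # sub-character composition
        char_vecs = h[-1].view(b, n, -1)      # one vector per character
        out, _ = self.char_lstm(char_vecs)    # sentence-level context
        return self.tagger(out)               # (batch, num_chars, num_tags)
```

Because every character vector is built from the same radical table, adding Traditional characters costs no new character-level parameters, which is one way to read the abstract's claim that tied embeddings reduce the parameter count while sharing knowledge across levels.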