An Attention-based Collaboration Framework for Multi-View Network Representation Learning
Learning distributed node representations in networks has been attracting
increasing attention recently due to its effectiveness in a variety of
applications. Existing approaches usually study networks with a single type of
proximity between nodes, which defines a single view of a network. However, in
reality there usually exists multiple types of proximities between nodes,
yielding networks with multiple views. This paper studies learning node
representations for networks with multiple views, which aims to infer robust
node representations across different views. We propose a multi-view
representation learning approach, which promotes the collaboration of different
views and lets them vote for the robust representations. During the voting
process, an attention mechanism is introduced, which enables each node to focus
on the most informative views. Experimental results on real-world networks show
that the proposed approach outperforms existing state-of-the-art approaches for
network representation learning with a single view and other competitive
approaches with multiple views.

Comment: CIKM 201
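The attention-based voting described in this abstract could be sketched as follows. This is an illustrative toy, not the paper's architecture: the function names are made up, and the per-view attention scores are taken as inputs here, whereas in the paper they would be learned per node during training.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def combine_views(view_embeddings, view_scores):
    """Fuse one node's per-view embeddings into a single representation.

    view_embeddings: (n_views, dim) array, one embedding per view.
    view_scores:     (n_views,) unnormalized attention scores for this node
                     (learned in the paper; plain inputs in this sketch).
    Returns the attention-weighted sum over views, shape (dim,).
    """
    weights = softmax(view_scores)      # node-specific focus over views
    return weights @ view_embeddings    # weighted "vote" of the views

# toy usage: 3 views, 4-dimensional embeddings
emb = np.array([[1., 0., 0., 0.],
                [0., 1., 0., 0.],
                [0., 0., 1., 0.]])
scores = np.array([2.0, 0.5, 0.5])      # this node attends mostly to view 0
fused = combine_views(emb, scores)
```

The point of the softmax weighting is that each node gets its own distribution over views, so a node whose most informative proximity lives in one view can down-weight the others.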
Deep Attributes Driven Multi-Camera Person Re-identification
The visual appearance of a person is easily affected by many factors like
pose variations, viewpoint changes and camera parameter differences. This makes
person Re-Identification (ReID) among multiple cameras a very challenging task.
This work is therefore motivated to learn mid-level human attributes that are
robust to such visual appearance variations. We propose a semi-supervised
attribute learning framework that progressively boosts attribute accuracy
using only a limited amount of labeled data. Specifically, the framework
involves three training stages. A deep Convolutional Neural Network (dCNN) is first
trained on an independent dataset labeled with attributes. Then it is
fine-tuned on another dataset only labeled with person IDs using our defined
triplet loss. Finally, the updated dCNN predicts attribute labels for the
target dataset, which is combined with the independent dataset for the final
round of fine-tuning. The predicted attributes, namely \emph{deep attributes}
exhibit superior generalization ability across different datasets. By directly
using the deep attributes with simple Cosine distance, we have obtained
surprisingly good accuracy on four person ReID datasets. Experiments also show
that a simple metric learning module further boosts our method, making it
significantly outperform many recent works.

Comment: Person Re-identification; 17 pages; 5 figures; In IEEE ECCV 201