Mind the Gap: Subspace based Hierarchical Domain Adaptation
Domain adaptation techniques aim at adapting a classifier learnt on a source
domain to work on the target domain. Exploiting the subspaces spanned by
features of the source and target domains respectively is one approach that has
been investigated towards solving this problem. These techniques normally
assume the existence of a single subspace for the entire source / target
domain. In this work, we consider the hierarchical organization of the data and
consider multiple subspaces for the source and target domain based on the
hierarchy. We evaluate different subspace based domain adaptation techniques
under this setting and observe that using different subspaces based on the
hierarchy yields consistent improvement over a non-hierarchical baseline.
Comment: 4 pages in Second Workshop on Transfer and Multi-Task Learning:
Theory meets Practice in NIPS 201
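One way the idea above can be realized is subspace alignment computed independently per hierarchy node. The sketch below is an illustration under assumptions, not the authors' code: the node names, feature dimensions, and helper functions (`pca_basis`, `align_subspaces`, `hierarchical_align`) are all invented for this example.

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions (columns) of the centered data X."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are directions
    return Vt[:d].T                                    # shape (features, d)

def align_subspaces(Xs, Xt, d):
    """Subspace alignment: map source features into the target subspace."""
    Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
    M = Ps.T @ Pt                    # alignment matrix between the two bases
    return Xs @ Ps @ M, Xt @ Pt      # aligned source, projected target

def hierarchical_align(source_by_node, target_by_node, d):
    """Hierarchical variant: one aligned subspace per node of the hierarchy."""
    return {node: align_subspaces(source_by_node[node], target_by_node[node], d)
            for node in source_by_node}

rng = np.random.default_rng(0)
src = {"animal": rng.normal(size=(40, 6)), "vehicle": rng.normal(size=(40, 6))}
tgt = {"animal": rng.normal(1.0, 1.0, size=(40, 6)),
       "vehicle": rng.normal(size=(40, 6))}
aligned = hierarchical_align(src, tgt, d=3)
print(aligned["animal"][0].shape)  # (40, 3)
```

Each node of the hierarchy gets its own source/target subspace pair, rather than one global pair for the whole domain.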
A Proximity-Aware Hierarchical Clustering of Faces
In this paper, we propose an unsupervised face clustering algorithm called
"Proximity-Aware Hierarchical Clustering" (PAHC) that exploits the local
structure of deep representations. In the proposed method, a similarity measure
between deep features is computed by evaluating linear SVM margins. SVMs are
trained using nearest neighbors of sample data, and thus do not require any
external training data. Clusters are then formed by thresholding the similarity
scores. We evaluate the clustering performance using three challenging
unconstrained face datasets, including Celebrity in Frontal-Profile (CFP),
IARPA JANUS Benchmark A (IJB-A), and JANUS Challenge Set 3 (JANUS CS3)
datasets. Experimental results demonstrate that the proposed approach can
achieve significant improvements over state-of-the-art methods. Moreover, we
also show that the proposed clustering algorithm can be applied to curate a
large-scale, noisy training dataset while maintaining a sufficient number of
images and their variations due to nuisance factors. The face verification
performance on JANUS CS3 improves significantly by finetuning a DCNN model with
the curated MS-Celeb-1M dataset, which contains over three million face images.
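A rough, dependency-free sketch of the PAHC idea, not the authors' implementation: the linear SVM is replaced by a tiny hinge-loss subgradient solver, and the hierarchical clustering of thresholded similarities is approximated by connected components of the thresholded similarity graph. The toy 2-D "faces" and all parameter values are invented.

```python
import numpy as np

def linear_svm(X, y, lr=0.1, epochs=300, lam=1e-2):
    """Tiny linear SVM via hinge-loss subgradient descent (a stand-in for a
    real SVM solver, kept dependency-free for illustration)."""
    w, b, n = np.zeros(X.shape[1]), 0.0, len(X)
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                       # margin violations
        w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n)
        b -= lr * (-y[viol].sum() / n)
    return w, b

def pahc_sketch(X, k, thresh=0.0):
    """For each sample, train an SVM separating its k nearest neighbors from
    the next k; similarity = symmetrized SVM decision value; clusters =
    connected components of the thresholded similarity graph."""
    n = len(X)
    order = np.argsort(np.linalg.norm(X[:, None] - X[None], axis=2), axis=1)
    S = np.zeros((n, n))
    for i in range(n):
        pos, neg = order[i, :k + 1], order[i, k + 1:2 * k + 1]
        Xi = np.vstack([X[pos], X[neg]])
        yi = np.concatenate([np.ones(len(pos)), -np.ones(len(neg))])
        w, b = linear_svm(Xi, yi)
        S[i] = X @ w + b              # decision value of every sample
    S = (S + S.T) / 2                 # symmetrize the similarity
    labels, c = -np.ones(n, dtype=int), 0
    for i in range(n):                # connected components above thresh
        if labels[i] >= 0:
            continue
        stack, labels[i] = [i], c
        while stack:
            u = stack.pop()
            for v in np.where((S[u] > thresh) & (labels < 0))[0]:
                labels[v] = c
                stack.append(v)
        c += 1
    return labels

rng = np.random.default_rng(1)
faces = np.vstack([rng.normal(0, 0.1, (6, 2)), rng.normal(5, 0.1, (6, 2))])
labels = pahc_sketch(faces, k=5)
```

Because the SVMs are trained only on each sample's neighbors, no external training data is required, which is the property the abstract highlights.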
Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification
Person re-identification (re-id) aims to match pedestrians observed by
disjoint camera views. It attracts increasing attention in computer vision due
to its importance to surveillance systems. To combat the major challenge of
cross-view visual variations, deep embedding approaches are proposed by
learning a compact feature space from images such that the Euclidean distances
correspond to their cross-view similarity metric. However, the global Euclidean
distance cannot faithfully characterize the ideal similarity in a complex
visual feature space because features of pedestrian images exhibit unknown
distributions due to large variations in poses, illumination and occlusion.
Moreover, intra-personal training samples within a local range provide robust
guidance for deep embedding against uncontrolled variations, yet this local
structure cannot be captured by a global Euclidean distance. In this paper, we
study the problem of
person re-id by proposing a novel sampling to mine suitable \textit{positives}
(i.e. intra-class) within a local range to improve the deep embedding in the
context of large intra-class variations. Our method is capable of learning a
deep similarity metric adaptive to local sample structure by minimizing each
sample's local distances while propagating through the relationship between
samples to attain the whole intra-class minimization. To this end, a novel
objective function is proposed to jointly optimize similarity metric learning,
local positive mining and robust deep embedding. This yields local
discriminations by selecting local-ranged positive samples, and the learned
features are robust to dramatic intra-class variations. Experiments on
benchmarks show state-of-the-art results achieved by our method.
Comment: Published in Pattern Recognitio
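A toy numpy version of the local positive mining idea, a sketch rather than the paper's actual objective: positives are restricted to same-identity samples inside each anchor's k-nearest neighborhood, and the nearest different-identity neighbor supplies the negative for a triplet-style loss. The embeddings, identities, and the `margin` value are invented.

```python
import numpy as np

def local_positive_triplet_loss(E, y, k=5, margin=0.5):
    """Toy local positive mining: for each anchor, positives come only from
    same-identity samples among its k nearest neighbors; the hardest local
    negative completes the triplet."""
    D = np.linalg.norm(E[:, None] - E[None], axis=2)
    losses = []
    for a in range(len(E)):
        nbrs = np.argsort(D[a])[1:k + 1]      # the anchor's local range
        pos = nbrs[y[nbrs] == y[a]]           # local positives only
        neg = nbrs[y[nbrs] != y[a]]
        if len(pos) and len(neg):
            losses.append(max(0.0, D[a, pos].max() - D[a, neg].min() + margin))
    return float(np.mean(losses)) if losses else 0.0

rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0, 0.1, (4, 2)), rng.normal(10, 0.1, (4, 2))])
ids = np.array([0, 0, 0, 0, 1, 1, 1, 1])
loss = local_positive_triplet_loss(emb, ids, k=7)  # 0.0: classes well separated
```

When embeddings of the same identity stay within a local range and other identities lie outside it, the loss vanishes, which is the behavior the deep embedding is trained toward.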
Graph Few-shot Learning via Knowledge Transfer
Semi-supervised node classification is a challenging problem that has been
studied extensively. As a recent frontier, Graph Neural Networks (GNNs), which
update the representation of each node by aggregating information from its
neighbors, have aroused great interest. However, most GNNs have shallow
layers with a limited receptive field and may not achieve satisfactory
performance especially when the number of labeled nodes is quite small. To
address this challenge, we innovatively propose a graph few-shot learning (GFL)
algorithm that incorporates prior knowledge learned from auxiliary graphs to
improve classification accuracy on the target graph. Specifically, a
transferable metric space characterized by a node embedding and a
graph-specific prototype embedding function is shared between auxiliary graphs
and the target, facilitating the transfer of structural knowledge. Extensive
experiments and ablation studies on four real-world graph datasets demonstrate
the effectiveness of our proposed model.
Comment: Full paper (with Appendix) of AAAI 202
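An illustrative prototype-based sketch of the transferable metric-space idea: a plain neighbor-averaging step stands in for a trained GNN embedding (an assumption, not the paper's architecture), class prototypes are built from the few labeled nodes, and every node is assigned to its nearest prototype. The graph and all names are invented.

```python
import numpy as np

def aggregate(A, X):
    """One GNN-style propagation step: average each node with its neighbors."""
    deg = A.sum(axis=1, keepdims=True) + 1.0
    return (X + A @ X) / deg

def prototype_classify(A, X, labeled, labels, n_hops=2):
    """Few-shot node classification: embed nodes by neighbor aggregation,
    build one prototype per class from the few labeled nodes, and assign
    every node to its nearest prototype."""
    H = X.copy()
    for _ in range(n_hops):
        H = aggregate(A, H)
    classes = sorted(set(labels))
    protos = np.stack([H[[i for i, c in zip(labeled, labels) if c == k]].mean(axis=0)
                       for k in classes])
    d = np.linalg.norm(H[:, None] - protos[None], axis=2)
    return np.array(classes)[d.argmin(axis=1)]

# Two triangles joined by a single edge, one labeled node per community.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
X = np.eye(6)
pred = prototype_classify(A, X, labeled=[0, 3], labels=[0, 1])
```

With only one labeled node per class, structural propagation is what makes the prototypes informative, mirroring the few-shot setting the abstract targets.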
On the use of clustering and the MeSH controlled vocabulary to improve MEDLINE abstract search
Databases of genomic documents contain substantial amounts of structured
information in addition to the texts of titles and abstracts. Unstructured
information retrieval techniques fail to take advantage of this structured
information. This paper describes a technique to improve upon traditional
retrieval methods by clustering the retrieval result set into two distinct
clusters using the additional structural information. Our hypothesis is that
the relevant documents are to be found in the tighter of the two clusters, as
suggested by van Rijsbergen's cluster hypothesis. We present an experimental
evaluation of these ideas based on the relevance judgments of the 2004 TREC
Genomics track and the CLUTO software clustering package.
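The two-cluster/tightest-cluster idea can be sketched with plain 2-means (the paper uses the CLUTO package; this stand-in and the toy document vectors are invented for illustration):

```python
import numpy as np

def two_means(X, iters=50):
    """Plain 2-means (stand-in for CLUTO), deterministically seeded with the
    first point and the point farthest from it."""
    C = np.stack([X[0], X[np.linalg.norm(X - X[0], axis=1).argmax()]]).astype(float)
    for _ in range(iters):
        assign = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(axis=1)
        for k in (0, 1):
            if (assign == k).any():
                C[k] = X[assign == k].mean(axis=0)
    return assign, C

def tightest_cluster(X):
    """Cluster hypothesis, operationalized: split the result set in two and
    keep the tighter cluster (smaller mean distance to its centroid)."""
    assign, C = two_means(X)
    spread = [np.linalg.norm(X[assign == k] - C[k], axis=1).mean() for k in (0, 1)]
    return np.where(assign == int(np.argmin(spread)))[0]

rng = np.random.default_rng(0)
docs = np.vstack([rng.normal(0, 0.1, (10, 2)),   # tight cluster (hypothetically relevant)
                  rng.normal(8, 1.0, (10, 2))])  # looser cluster
keep = tightest_cluster(docs)
```

In the paper's setting the feature vectors would also encode the structured metadata (e.g. MeSH terms), which is what drives the relevant documents into the tighter cluster.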
Shift Aggregate Extract Networks
We introduce an architecture based on deep hierarchical decompositions to
learn effective representations of large graphs. Our framework extends classic
R-decompositions used in kernel methods, enabling nested "part-of-part"
relations. Unlike recursive neural networks, which unroll a template on input
graphs directly, we unroll a neural network template over the decomposition
hierarchy, allowing us to deal with the high degree variability that typically
characterizes social network graphs. Deep hierarchical decompositions are also
amenable to domain compression, a technique that reduces both space and time
complexity by exploiting symmetries. We show empirically that our approach is
competitive with current state-of-the-art graph classification methods,
particularly when dealing with social network datasets.
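A toy recursion over a part-of hierarchy to illustrate the control flow only: the label-dependent "shift" step is collapsed into a plain sum for brevity, and the extract layer uses random weights, so this is an invented sketch, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 0.1   # hypothetical weights of the "extract" layer

def represent(node, leaf_vecs):
    """Aggregate-Extract over a part-of hierarchy: leaves carry feature
    vectors; an internal node sums its parts' representations (aggregate)
    and applies a small nonlinear layer (extract)."""
    if isinstance(node, str):                     # leaf: look up its features
        return leaf_vecs[node]
    agg = np.sum([represent(child, leaf_vecs) for child in node], axis=0)
    return np.tanh(W @ agg)

leaf_vecs = {"a": np.array([1., 0, 0, 0]), "b": np.array([0, 1., 0, 0])}
graph = [["a", "b"], ["b"]]          # nested "part-of-part" decomposition
h = represent(graph, leaf_vecs)
```

Unrolling the template over the decomposition hierarchy, rather than over the raw graph, is what lets the approach cope with highly variable node degrees.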
Bayesian modeling of networks in complex business intelligence problems
Complex network data problems are increasingly common in many fields of
application. Our motivation is drawn from strategic marketing studies
monitoring customer choices of specific products, along with co-subscription
networks encoding multiple purchasing behavior. Data are available for several
agencies within the same insurance company, and our goal is to efficiently
exploit co-subscription networks to inform targeted advertising of cross-sell
strategies to currently mono-product customers. We address this goal by
developing a Bayesian hierarchical model, which clusters agencies according to
common mono-product customer choices and co-subscription networks. Within each
cluster, we efficiently model customer behavior via a cluster-dependent mixture
of latent eigenmodels. This formulation provides key information on
mono-product customer choices and multiple purchasing behavior within each
cluster, informing targeted cross-sell strategies. We develop simple algorithms
for tractable inference, and assess performance in simulations and an
application to business intelligence.
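The latent eigenmodel at the core of each cluster-specific mixture component can be sketched as follows; every parameter value below is invented for illustration, and this shows only the likelihood, not the full Bayesian hierarchy or its inference.

```python
import numpy as np

def eigenmodel_edge_probs(U, lam, mu):
    """Latent eigenmodel: P(edge i~j) = sigmoid(mu + u_i' diag(lam) u_j)."""
    logits = mu + U @ np.diag(lam) @ U.T
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
V, d = 10, 2                          # 10 products, 2 latent dimensions
U = rng.normal(size=(V, d))           # latent positions of the products
lam = np.array([1.0, -0.5])           # eigenvalues weighting each dimension
P = eigenmodel_edge_probs(U, lam, mu=-1.0)
A = rng.random((V, V)) < P            # sample a co-subscription network
A = np.triu(A, 1)
A = A | A.T                           # symmetric, empty diagonal
```

In the paper's model, the eigenmodel parameters vary with the agency's cluster, so each cluster induces its own co-subscription structure.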