Axiomatic Construction of Hierarchical Clustering in Asymmetric Networks
This paper considers networks where relationships between nodes are
represented by directed dissimilarities. The goal is to study methods for the
determination of hierarchical clusters, i.e., a family of nested partitions
indexed by a connectivity parameter, induced by the given dissimilarity
structures. Our construction of hierarchical clustering methods is based on
defining admissible methods as those that abide by two axioms: value (the
nodes of a two-node network are clustered together at the maximum of the two
dissimilarities between them) and transformation (when dissimilarities are
reduced, the network may become more clustered but not less). Several
admissible methods are constructed, and two particular methods,
termed reciprocal and nonreciprocal clustering, are shown to provide upper and
lower bounds in the space of admissible methods. Alternative clustering
methodologies and axioms are further considered. Allowing the outcome of
hierarchical clustering to be asymmetric, so that it matches the asymmetry of
the original data, leads to the inception of quasi-clustering methods. The
existence of a unique quasi-clustering method is shown. Allowing clustering in
a two-node network to proceed at the minimum of the two dissimilarities
generates an alternative axiomatic construction. There is a unique clustering
method in this case too. The paper also develops algorithms for the computation
of hierarchical clusters using matrix powers on a min-max dioid algebra and
studies the stability of the methods proposed. We prove that most of the
methods introduced in this paper are stable in the sense that similar networks
yield similar hierarchical clustering results. The algorithms are exemplified
through their
application to networks describing internal migration within states of the
United States (U.S.) and the interrelation between sectors of the U.S. economy.

Comment: This is a largely extended version of the previous conference
submission under the same title. The current version contains the material in
the previous version (published in ICASSP 2013) as well as material presented
at the Asilomar Conference on Signals, Systems, and Computers 2013, GlobalSIP
2013, and ICML 2014. Unpublished material is also included in the current
version.
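The matrix-power computation described above can be sketched briefly: in the min-max dioid, the "product" of two dissimilarity matrices replaces sum-of-products with min-of-maxes, and iterating it to a fixed point yields the output ultrametric. Below is a minimal sketch of the reciprocal method under this reading (function names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def minmax_power(A):
    """One dioid 'matrix product': result[i, j] = min over k of max(A[i, k], A[k, j])."""
    return np.min(np.maximum(A[:, :, None], A[None, :, :]), axis=1)

def reciprocal_ultrametric(D):
    """Reciprocal clustering sketch: symmetrize the directed dissimilarities
    with an elementwise maximum, then iterate min-max matrix powers until
    the matrix stabilizes; the fixed point is the output ultrametric."""
    A = np.maximum(D, D.T)
    np.fill_diagonal(A, 0.0)
    while True:
        B = minmax_power(A)
        if np.array_equal(B, A):
            return B
        A = B
```

The nonreciprocal method would instead take min-max powers of the directed matrix itself and symmetrize only at the end with an elementwise maximum, which is why it lower-bounds the reciprocal output.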
A network approach to topic models
One of the main computational and scientific challenges in the modern age is
to extract useful information from unstructured texts. Topic models are one
popular machine-learning approach which infers the latent topical structure of
a collection of documents. Despite their success, in particular that of the
most widely used variant, Latent Dirichlet Allocation (LDA), and their numerous
applications in sociology, history, and linguistics, topic models are known to
suffer from severe conceptual and practical problems, e.g. a lack of
justification for the Bayesian priors, discrepancies with statistical
properties of real texts, and the inability to properly choose the number of
topics. Here we obtain a fresh view on the problem of identifying topical
structures by relating it to the problem of finding communities in complex
networks. This is achieved by representing text corpora as bipartite networks
of documents and words. By adapting existing community-detection methods --
using a stochastic block model (SBM) with non-parametric priors -- we obtain a
more versatile and principled framework for topic modeling (e.g., it
automatically detects the number of topics and hierarchically clusters both the
words and documents). The analysis of artificial and real corpora demonstrates
that our SBM approach leads to better topic models than LDA in terms of
statistical model selection. More importantly, our work shows how to formally
relate methods from community detection and topic modeling, opening the
possibility of cross-fertilization between these two fields.

Comment: 22 pages, 10 figures, code available at https://topsbm.github.io
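The bipartite representation at the heart of this approach is simple to construct: documents and words form the two node types, and each edge carries the word's count in that document. A minimal sketch, assuming a whitespace-tokenized corpus (the SBM fit itself, e.g. via graph-tool as in the linked topsbm code, is omitted):

```python
from collections import Counter

def bipartite_edges(corpus):
    """Represent a corpus as a bipartite document-word multigraph:
    the edge (doc index, word) carries the word's count in that document."""
    edges = {}
    for d, text in enumerate(corpus):
        for w, c in Counter(text.lower().split()).items():
            edges[(d, w)] = c
    return edges
```

Fitting a nonparametric hierarchical SBM to this network then recovers topics as communities of word nodes, with the number of topics and the hierarchy over both words and documents inferred rather than fixed in advance.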
A Proximity-Aware Hierarchical Clustering of Faces
In this paper, we propose an unsupervised face clustering algorithm called
"Proximity-Aware Hierarchical Clustering" (PAHC) that exploits the local
structure of deep representations. In the proposed method, a similarity measure
between deep features is computed by evaluating linear SVM margins. SVMs are
trained using nearest neighbors of sample data, and thus do not require any
external training data. Clusters are then formed by thresholding the similarity
scores. We evaluate the clustering performance using three challenging
unconstrained face datasets, including Celebrity in Frontal-Profile (CFP),
IARPA JANUS Benchmark A (IJB-A), and JANUS Challenge Set 3 (JANUS CS3)
datasets. Experimental results demonstrate that the proposed approach can
achieve significant improvements over state-of-the-art methods. Moreover, we
also show that the proposed clustering algorithm can be used to curate a
large-scale, noisy training dataset while retaining a sufficient number of
images and their variations due to nuisance factors. The face verification
performance on JANUS CS3 improves significantly after fine-tuning a DCNN model
on the curated MS-Celeb-1M dataset, which contains over three million face
images.
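The local-SVM similarity idea can be sketched as follows, using scikit-learn's LinearSVC as a stand-in and connected components as the thresholding step. The neighbor scheme and function names here are simplifying assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.svm import LinearSVC

def pahc_sketch(X, k=3, threshold=0.0):
    """For each sample, train a linear SVM whose positives are the sample
    and its k nearest neighbours; use the signed SVM margin as a similarity,
    symmetrize, threshold, and read clusters off the connected components."""
    n = len(X)
    S = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)   # Euclidean distances to sample i
        y = np.zeros(n, dtype=int)
        y[np.argsort(d)[:k + 1]] = 1           # positives: i and its k neighbours
        clf = LinearSVC(max_iter=10000).fit(X, y)
        S[i] = clf.decision_function(X)        # signed margin to i's local SVM
    S = (S + S.T) / 2.0                        # symmetrize the similarity scores
    adj = csr_matrix(S > threshold)
    return connected_components(adj, directed=False)[1]
```

Because each SVM is trained only on the data being clustered, no external training set is needed, which matches the unsupervised setting described above.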
Element-centric clustering comparison unifies overlaps and hierarchy
Clustering is one of the most universal approaches for understanding complex
data. A pivotal aspect of clustering analysis is quantitatively comparing
clusterings; clustering comparison is the basis for many tasks such as
clustering evaluation, consensus clustering, and tracking the temporal
evolution of clusters. In particular, the extrinsic evaluation of clustering
methods requires comparing the uncovered clusterings to planted clusterings or
known metadata. Yet, as we demonstrate, existing clustering comparison measures
have critical biases which undermine their usefulness, and no measure
accommodates both overlapping and hierarchical clusterings. Here we unify the
comparison of disjoint, overlapping, and hierarchically structured clusterings
by proposing a new element-centric framework: elements are compared based on
the relationships induced by the cluster structure, as opposed to the
traditional cluster-centric philosophy. We demonstrate that, in contrast to
standard clustering similarity measures, our framework does not suffer from
critical biases and naturally provides unique insights into how the clusterings
differ. We illustrate the strengths of our framework by revealing new insights
into the organization of clusters in two applications: the improved
classification of schizophrenia based on the overlapping and hierarchical
community structure of fMRI brain networks, and the disentanglement of various
social homophily factors in Facebook social networks. The universality of
clustering suggests far-reaching impact of our framework throughout all areas
of science.
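For disjoint clusterings, the element-centric idea can be sketched by giving each element an affinity vector that is uniform over its co-clustered elements and averaging per-element agreement between the two clusterings. This is a simplified variant for illustration only; the full framework also handles overlapping and hierarchical clusterings:

```python
import numpy as np

def element_sim(labels_a, labels_b):
    """Simplified element-centric similarity for disjoint clusterings:
    each element's affinity vector is uniform over its cluster's members,
    and per-element similarity is 1 minus half the L1 distance between
    its affinity vectors under the two clusterings."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    sims = []
    for i in range(len(a)):
        pa = (a == a[i]) / (a == a[i]).sum()  # uniform over i's cluster in A
        pb = (b == b[i]) / (b == b[i]).sum()  # uniform over i's cluster in B
        sims.append(1.0 - 0.5 * np.abs(pa - pb).sum())
    return float(np.mean(sims))
```

Because the comparison is anchored on elements rather than clusters, the per-element scores also localize where two clusterings disagree, rather than collapsing the difference into a single cluster-matching statistic.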