Clustering by compression
We present a new method for clustering based on compression. The method
doesn't use subject-specific features or background knowledge, and works as
follows: First, we determine a universal similarity distance, the normalized
compression distance or NCD, computed from the lengths of compressed data files
(singly and in pairwise concatenation). Second, we apply a hierarchical
clustering method. The NCD is universal in that it is not restricted to a
specific application area, and works across application area boundaries. A
theoretical precursor, the normalized information distance, co-developed by one
of the authors, is provably optimal but uses the non-computable notion of
Kolmogorov complexity. We propose precise notions of similarity metric and
normal compressor, and show that the NCD based on a normal compressor is a similarity
metric that approximates universality. To extract a hierarchy of clusters from
the distance matrix, we determine a dendrogram (binary tree) by a new quartet
method and a fast heuristic to implement it. The method is implemented and
available as public software, and is robust under choice of different
compressors. To substantiate our claims of universality and robustness, we
report evidence of successful application in areas as diverse as genomics,
virology, languages, literature, music, handwritten digits, astronomy, and
combinations of objects from completely different domains, using statistical,
dictionary, and block sorting compressors. In genomics we presented new
evidence for major questions in Mammalian evolution, based on
whole-mitochondrial genomic analysis: the Eutherian orders and the Marsupionta
hypothesis against the Theria hypothesis.
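As an illustration of the computation described above, here is a minimal Python
sketch of the NCD, using bzip2 (a block-sorting compressor) for the compressed
lengths; the example strings are invented purely to exercise the formula.

    import bz2
    import os

    def c(data: bytes) -> int:
        """Compressed length of data under bzip2 (a block-sorting compressor)."""
        return len(bz2.compress(data))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance:
        NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Toy check: similar strings should come out closer than unrelated ones.
    a = b"the quick brown fox jumps over the lazy dog " * 50
    b = b"the quick brown fox leaps over the lazy dog " * 50
    r = os.urandom(2000)
    print(ncd(a, b))   # relatively small
    print(ncd(a, r))   # close to 1

The pairwise NCD values over a set of objects form the distance matrix from
which the dendrogram is then extracted.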
The Google Similarity Distance
Words and phrases acquire meaning from the way they are used in society, from
their relative semantics to other words and phrases. For computers the
equivalent of `society' is `database,' and the equivalent of `use' is `way to
search the database.' We present a new theory of similarity between words and
phrases based on information distance and Kolmogorov complexity. To fix
thoughts we use the world-wide-web as database, and Google as search engine.
The method is also applicable to other search engines and databases. This
theory is then applied to construct a method to automatically extract
similarity, the Google similarity distance, of words and phrases from the
world-wide-web using Google page counts. The world-wide-web is the largest
database on earth, and the context information entered by millions of
independent users averages out to provide automatic semantics of useful
quality. We give applications in hierarchical clustering, classification, and
language translation. We give examples of distinguishing between colors and
numbers, clustering names of paintings by 17th century Dutch masters and names
of books by English novelists, understanding emergencies and primes, and
performing a simple automatic English-Spanish translation. Finally, we use the
WordNet database as an objective baseline
against which to judge the performance of our method. We conduct a massive
randomized trial in binary classification using support vector machines to
learn categories based on our Google distance, resulting in a mean agreement
of 87% with the expert-crafted WordNet categories.
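For concreteness, the sketch below evaluates the Google similarity distance
from page counts using the NGD formula; the counts plugged in at the end are
invented, since obtaining real values requires querying a search engine.

    from math import log

    def ngd(fx: float, fy: float, fxy: float, n: float) -> float:
        """Normalized Google distance from page counts:
        NGD(x, y) = (max(log fx, log fy) - log fxy)
                    / (log n - min(log fx, log fy)),
        where fx and fy are the hit counts of the two terms, fxy the count for
        both terms together, and n the total number of indexed pages."""
        lx, ly, lxy = log(fx), log(fy), log(fxy)
        return (max(lx, ly) - lxy) / (log(n) - min(lx, ly))

    # Invented counts, only to exercise the formula.
    print(ngd(fx=8.0e8, fy=6.0e8, fxy=2.0e8, n=5.0e10))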
A New Quartet Tree Heuristic for Hierarchical Clustering
We consider the problem of constructing an optimal-weight tree from the
3*(n choose 4) weighted quartet topologies on n objects, where optimality means
that the summed weight of the embedded quartet topologies is optimal (so it can
be the case that the optimal tree embeds all quartets as non-optimal
topologies). We present a heuristic for reconstructing the optimal-weight tree,
and a canonical manner to derive the quartet-topology weights from a given
distance matrix. The method repeatedly transforms a bifurcating tree, with all
objects involved as leaves, achieving a monotonic approximation to the exact
single globally optimal tree. This contrasts with other heuristic search
methods from biological phylogeny, like DNAML or quartet puzzling, which
repeatedly and incrementally construct a solution from a random order of
objects and subsequently add agreement values.
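One natural way to derive the quartet-topology weights from a distance matrix,
used in the authors' related compression-clustering work, is to cost a topology
uv|wx as d(u,v) + d(w,x). The sketch below enumerates these costs for a small,
made-up distance matrix; the labels and values are illustrative only.

    from itertools import combinations

    def quartet_topology_costs(d, labels):
        """For every 4-set of objects, the costs of its three possible quartet
        topologies, taking the cost of ab|cd to be d[a][b] + d[c][d]."""
        out = {}
        for i, j, k, l in combinations(range(len(labels)), 4):
            a, b, c, e = labels[i], labels[j], labels[k], labels[l]
            out[(a, b, c, e)] = {
                f"{a}{b}|{c}{e}": d[i][j] + d[k][l],
                f"{a}{c}|{b}{e}": d[i][k] + d[j][l],
                f"{a}{e}|{b}{c}": d[i][l] + d[j][k],
            }
        return out

    # Made-up symmetric distance matrix over five objects.
    labels = ["a", "b", "c", "d", "e"]
    d = [[0.0, 0.1, 0.6, 0.7, 0.8],
         [0.1, 0.0, 0.6, 0.7, 0.8],
         [0.6, 0.6, 0.0, 0.2, 0.7],
         [0.7, 0.7, 0.2, 0.0, 0.7],
         [0.8, 0.8, 0.7, 0.7, 0.0]]
    for quartet, costs in list(quartet_topology_costs(d, labels).items())[:2]:
        print(quartet, costs)

Summing the costs of the topologies a candidate tree embeds, over all quartets,
gives the quantity the tree heuristic optimizes.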
A Fast Quartet Tree Heuristic for Hierarchical Clustering
The Minimum Quartet Tree Cost problem is to construct an optimal weight tree
from the 3*(n choose 4) weighted quartet topologies on n objects, where
optimality means that the summed weight of the embedded quartet topologies is
optimal (so it can be the case that the optimal tree embeds all quartets as
nonoptimal topologies). We present a Monte Carlo heuristic, based on randomized
hill climbing, for approximating the optimal weight tree, given the quartet
topology weights. The method repeatedly transforms a dendrogram, with all
objects involved as leaves, achieving a monotonic approximation to the exact
single globally optimal tree. The problem and the solution heuristic have been
extensively used for general hierarchical clustering of nontree-like
(non-phylogeny) data in various domains and across domains with heterogeneous
data. We also present a greatly improved heuristic, reducing the running time
by a factor of order a thousand to ten thousand. All this is implemented and
available, as part of the CompLearn package. We compare performance and running
time of the original and improved versions with those of UPGMA, BioNJ, and NJ,
as implemented in the SplitsTree package on genomic data for which the latter
are optimized.
Keywords: Data and knowledge visualization, Pattern
matching--Clustering--Algorithms/Similarity measures, Hierarchical clustering,
Global optimization, Quartet tree, Randomized hill-climbing
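A compact sketch of the randomized hill-climbing idea, under stated
assumptions: the candidate is an unrooted binary tree over the n objects, its
cost is the summed distance-matrix cost of the quartet topologies it embeds
(costing ab|cd as d(a,b) + d(c,d), as in the earlier sketch), and the only
mutation used here is swapping two random leaves. The CompLearn heuristic uses
a richer set of tree mutations and a normalized tree score, so treat this only
as an illustration of the monotone accept-if-not-worse loop.

    import random
    from collections import deque
    from itertools import combinations

    def leaf_distances(adj, n):
        """Edge-count distances from every leaf (0..n-1) to all nodes, by BFS."""
        dist = {}
        for s in range(n):
            seen = {s: 0}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in seen:
                        seen[v] = seen[u] + 1
                        queue.append(v)
            dist[s] = seen
        return dist

    def tree_cost(adj, d, n):
        """Summed cost of the quartet topologies embedded in the tree; the
        embedded pairing of a 4-set is the one with the smallest tree-distance
        sum, and its cost is read off the distance matrix d."""
        td = leaf_distances(adj, n)
        total = 0.0
        for a, b, c, e in combinations(range(n), 4):
            pairings = [((a, b), (c, e)), ((a, c), (b, e)), ((a, e), (b, c))]
            p, q = min(pairings, key=lambda pr: td[pr[0][0]][pr[0][1]]
                                                + td[pr[1][0]][pr[1][1]])
            total += d[p[0]][p[1]] + d[q[0]][q[1]]
        return total

    def caterpillar(n):
        """Initial unrooted binary tree: leaves 0..n-1, internal nodes n..2n-3."""
        adj = {i: set() for i in range(2 * n - 2)}
        inner = list(range(n, 2 * n - 2))
        edges = ([(0, inner[0]), (1, inner[0]), (n - 1, inner[-1])]
                 + [(k + 1, inner[k]) for k in range(1, n - 2)]
                 + [(inner[k - 1], inner[k]) for k in range(1, n - 2)])
        for x, y in edges:
            adj[x].add(y)
            adj[y].add(x)
        return adj

    def hill_climb(d, n, steps=2000, seed=0):
        """Randomized hill climbing: swap two random leaves and keep the move
        only if the total quartet cost does not increase."""
        rng = random.Random(seed)
        adj = caterpillar(n)
        best = tree_cost(adj, d, n)
        for _ in range(steps):
            a, b = rng.sample(range(n), 2)
            pa, pb = next(iter(adj[a])), next(iter(adj[b]))
            if pa == pb:
                continue
            adj[pa].discard(a); adj[pa].add(b)
            adj[pb].discard(b); adj[pb].add(a)
            adj[a], adj[b] = {pb}, {pa}
            cost = tree_cost(adj, d, n)
            if cost <= best:
                best = cost
            else:  # revert the swap
                adj[pa].discard(b); adj[pa].add(a)
                adj[pb].discard(a); adj[pb].add(b)
                adj[a], adj[b] = {pa}, {pb}
        return adj, best

    # Made-up distance matrix (in practice, e.g., pairwise NCD values).
    d = [[0.0, 0.1, 0.6, 0.7, 0.8],
         [0.1, 0.0, 0.6, 0.7, 0.8],
         [0.6, 0.6, 0.0, 0.2, 0.7],
         [0.7, 0.7, 0.2, 0.0, 0.7],
         [0.8, 0.8, 0.7, 0.7, 0.0]]
    tree, cost = hill_climb(d, n=5)
    print(cost)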
Algorithmic Clustering of Music
We present a fully automatic method for music classification, based only on
compression of strings that represent the music pieces. The method uses no
background knowledge about music whatsoever: it is completely general and can,
without change, be used in different areas like linguistic classification and
genomics. It is based on an ideal theory of the information content in
individual objects (Kolmogorov complexity), information distance, and a
universal similarity metric. Experiments show that the method distinguishes
reasonably well between various musical genres and can even cluster pieces by
composer.
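A minimal end-to-end sketch under stated assumptions: the "music strings" below
are invented stand-ins for the note strings extracted from the pieces, and
SciPy's average-linkage clustering is used as a simple stand-in for the
quartet-tree method the paper actually employs.

    import bz2
    from itertools import combinations

    import numpy as np
    from scipy.cluster.hierarchy import dendrogram, linkage
    from scipy.spatial.distance import squareform

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance with bzip2 compressed lengths."""
        cx, cy = len(bz2.compress(x)), len(bz2.compress(y))
        return (len(bz2.compress(x + y)) - min(cx, cy)) / max(cx, cy)

    # Invented stand-ins for strings derived from music pieces.
    pieces = {
        "piece_a1": b"C4 E4 G4 C5 " * 200,
        "piece_a2": b"C4 E4 G4 B4 " * 200,
        "piece_b1": b"F#2 A2 C#3 F#3 " * 200,
        "piece_b2": b"F#2 A2 D3 F#3 " * 200,
    }
    names = list(pieces)
    n = len(names)
    dist = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        dist[i, j] = dist[j, i] = ncd(pieces[names[i]], pieces[names[j]])

    # Average-linkage tree as a simple stand-in for the quartet-tree step.
    tree = linkage(squareform(dist, checks=False), method="average")
    print(dendrogram(tree, labels=names, no_plot=True)["ivl"])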
Normalized Web Distance and Word Similarity
There is a great deal of work in cognitive psychology, linguistics, and
computer science, about using word (or phrase) frequencies in context in text
corpora to develop measures for word similarity or word association, going back
to at least the 1960s. The goal of this chapter is to introduce the
normalized web distance (NWD) method to determine similarity between words and
phrases. It is a general way to tap the amorphous low-grade knowledge available
for free on the Internet, typed in by local users aiming at personal
gratification of diverse objectives, and yet globally achieving what is
effectively the largest semantic electronic database in the world. Moreover,
this database is available for all by using any search engine that can return
aggregate page-count estimates for a large range of search-queries. In the
paper introducing the NWD it was called `normalized Google distance (NGD),' but
since Google doesn't allow computer searches anymore, we opt for the more
neutral and descriptive NWD.
Automatic Meaning Discovery Using Google
We survey a new area of parameter-free similarity distance measures useful in
data-mining, pattern recognition, learning, and automatic semantics extraction.
Given a family of distances on a set of objects, a distance is universal up to
a certain precision for that family if it minorizes every distance in the
family between every two objects in the set, up to the stated precision (we do
not require the universal distance to be an element of the family). We consider
similarity distances for two types of objects: literal objects that as such
contain all of their meaning, like genomes or books, and names for objects. The
latter may have literal embodiments like the first type, but may also be
abstract, like ``red'' or ``christianity.'' For the first type we consider a
family of computable distance measures corresponding to parameters expressing
similarity according to particular features between pairs of literal objects.
For the second type we consider similarity distances generated by web users
corresponding to particular semantic relations between the (names for the)
designated objects. For both families we give universal similarity distance
measures, incorporating all particular distance measures in the family. In the
first case the universal distance is based on compression and in the second
case it is based on Google page counts related to search terms. In both cases
experiments on a massive scale give evidence of the viability of the
approaches.
On the Complexity of the Single Individual SNP Haplotyping Problem
We present several new results pertaining to haplotyping. These results
concern the combinatorial problem of reconstructing haplotypes from incomplete
and/or imperfectly sequenced haplotype fragments. We consider the complexity of
the problems Minimum Error Correction (MEC) and Longest Haplotype
Reconstruction (LHR) for different restrictions on the input data.
Specifically, we look at the gapless case, where every row of the input
corresponds to a gapless haplotype-fragment, and the 1-gap case, where at most
one gap per fragment is allowed. We prove that MEC is APX-hard in the 1-gap
case and still NP-hard in the gapless case. In addition, we question earlier
claims that MEC is NP-hard even when the input matrix is restricted to being
completely binary. Concerning LHR, we show that this problem is NP-hard and
APX-hard in the 1-gap case (and thus also in the general case), but is
polynomial time solvable in the gapless case.
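To make the MEC objective concrete, here is a tiny brute-force sketch under the
standard formulation of Minimum Error Correction: flip as few sequenced entries
as possible so that the fragments can be partitioned into two classes, each
consistent with a single haplotype ('-' marks an uncovered position). The
fragment matrix is made up, and enumerating all bipartitions is exponential, so
this is only viable for toy instances.

    from itertools import product

    def class_corrections(rows):
        """Minimum entry flips to make a class of fragments column-consistent:
        per column, keep the majority non-gap value and flip the minority
        ('-' entries are uncovered positions and cost nothing)."""
        if not rows:
            return 0
        flips = 0
        for column in zip(*rows):
            zeros = sum(v == "0" for v in column)
            ones = sum(v == "1" for v in column)
            flips += min(zeros, ones)
        return flips

    def mec_bruteforce(fragments):
        """Minimum Error Correction by trying every bipartition of fragments."""
        best = None
        for assignment in product((0, 1), repeat=len(fragments)):
            side0 = [f for f, s in zip(fragments, assignment) if s == 0]
            side1 = [f for f, s in zip(fragments, assignment) if s == 1]
            cost = class_corrections(side0) + class_corrections(side1)
            best = cost if best is None else min(best, cost)
        return best

    # Toy fragment matrix over four SNP sites.
    fragments = ["0011", "0011", "0111", "1100", "1100", "110-"]
    print(mec_bruteforce(fragments))  # -> 1 (flip one entry of "0111")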