Artificial Sequences and Complexity Measures
In this paper we exploit concepts of information theory to address the
fundamental problem of identifying and defining the most suitable tools to
extract, in an automatic and agnostic way, information from a generic string of
characters. In particular, we introduce a class of methods that make crucial
use of data compression techniques to define a measure of remoteness and
distance between pairs of sequences of characters (e.g. texts) based on their
relative information content. We also discuss in detail how specific features
of data compression techniques can be used to introduce the notions of the
dictionary of a given sequence and of an Artificial Text, and we show how
these new tools can be used for information extraction purposes. We point out
the versatility and generality of our method, which applies to any kind of
corpora of character strings independently of the type of coding behind them.
As a case study we consider linguistically motivated problems and present
results for automatic language recognition, authorship attribution and
self-consistent classification.
Comment: Revised version, with major changes, of the previous "Data Compression
approach to Information Extraction and Classification" by A. Baronchelli and
V. Loreto. 15 pages; 5 figures
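The compression-based remoteness this abstract describes can be illustrated with a minimal sketch (an assumption-laden stand-in, not the authors' exact procedure): compress a reference text alone and with a short probe appended, and treat the extra compressed bytes per probe character as the probe's remoteness from the reference.

```python
import zlib


def compressed_size(data: bytes) -> int:
    # Compressed length in bytes via zlib (a stand-in for the paper's
    # generic compressor; level 9 for maximum compression).
    return len(zlib.compress(data, 9))


def remoteness(reference: str, probe: str) -> float:
    # Extra compressed bytes needed to encode the probe when it is
    # appended to the reference, normalized per probe character.
    # Smaller values: the probe is "closer" to the reference's statistics.
    a = reference.encode("utf-8")
    b = probe.encode("utf-8")
    return (compressed_size(a + b) - compressed_size(a)) / len(b)


english = "the quick brown fox jumps over the lazy dog " * 50
probe_en = "the lazy dog jumps over the quick brown fox"
probe_xx = "zxqj vwpk rrty mmnb aauu ccdd eeff gghh iijj"

# A probe sharing the reference's vocabulary costs fewer extra bytes.
print(remoteness(english, probe_en) < remoteness(english, probe_xx))
```

The illustrative texts and the choice of zlib are hypothetical; any lossless compressor that exploits repeated substrings would behave similarly.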
A review of domain adaptation without target labels
Domain adaptation has become a prominent problem setting in machine learning
and related fields. This review asks the question: how can a classifier learn
from a source domain and generalize to a target domain? We present a
categorization of approaches, divided into what we refer to as sample-based,
feature-based and inference-based methods. Sample-based methods focus on
weighting individual observations during training based on their importance to
the target domain. Feature-based methods revolve around mapping, projecting
and representing features such that a source classifier performs well on the
target domain. Inference-based methods incorporate adaptation into the
parameter estimation procedure, for instance through constraints on the
optimization procedure. Additionally, we review a number of conditions that
allow for formulating bounds on the cross-domain generalization error. Our
categorization highlights recurring ideas and raises questions important to
further research.
Comment: 20 pages, 5 figures
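The sample-based category above can be sketched with a toy importance-weighting example (illustrative only, not taken from the review): reweight source observations by the target-to-source density ratio, so that a statistic computed on weighted source data matches the target domain.

```python
import math
import random


def normal_pdf(x: float, mu: float, sigma: float) -> float:
    # Density of the normal distribution N(mu, sigma^2) at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))


random.seed(0)

# Hypothetical covariate shift: source domain N(0, 1), target domain N(1, 1).
source = [random.gauss(0.0, 1.0) for _ in range(20000)]

# Importance weight of each source point: p_target(x) / p_source(x).
weights = [normal_pdf(x, 1.0, 1.0) / normal_pdf(x, 0.0, 1.0) for x in source]

# The weighted source mean approximates the target mean (about 1.0),
# while the unweighted source mean stays near 0.
weighted_mean = sum(w * x for w, x in zip(weights, source)) / sum(weights)
plain_mean = sum(source) / len(source)
print(round(plain_mean, 2), round(weighted_mean, 2))
```

In practice the density ratio is unknown and must itself be estimated; the closed-form Gaussians here are purely for illustration.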
The similarity metric
A new class of distances appropriate for measuring similarity relations
between sequences, say one type of similarity per distance, is studied. We
propose a new ``normalized information distance'', based on the noncomputable
notion of Kolmogorov complexity, and show that it is in this class and that it
minorizes every computable distance in the class (that is, it is universal in
that it discovers all computable similarities). We demonstrate that it is a
metric and call it the {\em similarity metric}. This theory forms the
foundation for a new practical tool. To demonstrate generality and robustness
we give two distinctive applications in widely divergent areas using standard
compression programs like gzip and GenCompress. First, we compare whole
mitochondrial genomes and infer their evolutionary history. This yields the
first completely automatically computed whole mitochondrial phylogeny tree.
Second, we fully automatically compute the language tree of 52 different
languages.
Comment: 13 pages, LaTeX, 5 figures. Part of this work appeared in Proc. 14th
ACM-SIAM Symp. Discrete Algorithms, 2003. This is the final, corrected,
version to appear in IEEE Trans Inform. T
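The practical tool described here approximates the noncomputable Kolmogorov complexity K with the length of a real compressor's output, giving the normalized compression distance. A minimal sketch using gzip (the example strings are hypothetical):

```python
import gzip


def C(data: bytes) -> int:
    # Compressed length in bytes under gzip, a real-world stand-in
    # for the Kolmogorov complexity K in the paper's theory.
    return len(gzip.compress(data, compresslevel=9))


def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: the practical approximation of
    # the normalized information distance from the abstract.
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)


a = b"to be or not to be that is the question " * 30
b_ = b"to be or not to be that is the question " * 30
c = bytes(range(256)) * 5

# Near-identical strings get a distance near 0; unrelated data scores high.
print(ncd(a, b_) < ncd(a, c))
```

Any standard compressor (gzip, bzip2, or a DNA-specialized one like GenCompress, as in the paper's experiments) can play the role of C.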
On the accuracy of language trees
Historical linguistics aims at inferring the most likely language
phylogenetic tree starting from information concerning the evolutionary
relatedness of languages. The available information typically consists of
lists of homologous (lexical, phonological, syntactic) features or characters
for many different languages.
From this perspective the reconstruction of language trees is an example of
an inverse problem: starting from present-day, incomplete and often noisy
information, one aims at inferring the most likely past evolutionary history. A
fundamental issue in inverse problems is the evaluation of the inference made.
A standard way of dealing with this question is to generate data with
artificial models in order to have full access to the evolutionary process one
is going to infer. This procedure presents an intrinsic limitation: when
dealing with real data sets, one typically does not know which model of
evolution is the most suitable for them. A possible way out is to compare
algorithmic inference with expert classifications. This is the point of view we
take here by conducting a thorough survey of the accuracy of reconstruction
methods as compared with the Ethnologue expert classifications. We focus in
particular on state-of-the-art distance-based methods for phylogeny
reconstruction using worldwide linguistic databases.
In order to assess the accuracy of the inferred trees we introduce and
characterize two generalizations of standard definitions of distances between
trees. Based on these scores we quantify the relative performances of the
distance-based algorithms considered. Further, we quantify how the completeness
and the coverage of the available databases affect the accuracy of the
reconstruction. Finally we draw some conclusions about where the accuracy of
reconstruction in historical linguistics stands and about the leading
directions for improving it.
Comment: 36 pages, 14 figures
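One standard tree distance that papers in this area generalize is the Robinson-Foulds distance: count the clades (internal-node leaf sets) present in one tree but not the other. A minimal sketch, with hypothetical language trees and no claim to match the paper's generalized scores:

```python
def clades(tree) -> frozenset:
    # All non-trivial clades of a tree given as nested tuples of leaf
    # names, e.g. (("en", "de"), ("fr", ("it", "es"))).
    out = set()

    def leaves(node) -> frozenset:
        if isinstance(node, str):
            return frozenset([node])
        leaf_set = frozenset().union(*(leaves(child) for child in node))
        out.add(leaf_set)
        return leaf_set

    leaves(tree)
    return frozenset(s for s in out if len(s) > 1)


def robinson_foulds(t1, t2) -> int:
    # Robinson-Foulds distance: size of the symmetric difference of the
    # two trees' clade sets (the baseline that gets generalized).
    return len(clades(t1) ^ clades(t2))


germanic_first = ((("en", "de"), "nl"), ("fr", ("it", "es")))
romance_moved = ((("en", "fr"), "nl"), ("de", ("it", "es")))

print(robinson_foulds(germanic_first, germanic_first))  # 0: identical trees
print(robinson_foulds(germanic_first, romance_moved) > 0)
```

The language groupings shown are invented for illustration; real evaluations compare inferred trees against expert classifications such as Ethnologue's.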