The Dimensions of Individual Strings and Sequences
A constructive version of Hausdorff dimension is developed using constructive
supergales, which are betting strategies that generalize the constructive
supermartingales used in the theory of individual random sequences. This
constructive dimension is used to assign every individual (infinite, binary)
sequence S a dimension, which is a real number dim(S) in the interval [0,1].
Sequences that are random (in the sense of Martin-Löf) have dimension 1, while
sequences that are decidable, \Sigma^0_1, or \Pi^0_1 have dimension 0. It is
shown that for every \Delta^0_2-computable real number \alpha in [0,1] there is
a \Delta^0_2 sequence S such that dim(S) = \alpha.
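A minimal sketch of the central definition, in the standard gale notation of this literature (the symbols below are assumed, not quoted from the abstract): an s-supergale is a function d : {0,1}* → [0,∞) satisfying the averaging inequality below, it succeeds on S if \limsup_{n\to\infty} d(S \upharpoonright n) = \infty, and the constructive dimension of S is the infimum of the s for which a constructive s-supergale succeeds.

```latex
% s-supergale condition, for s >= 0:
d(w) \;\ge\; 2^{-s}\,\bigl[\, d(w0) + d(w1) \,\bigr]
\qquad \text{for all } w \in \{0,1\}^{*},
% and the constructive dimension of a sequence S:
\dim(S) \;=\; \inf \bigl\{\, s \ge 0 \;:\; \text{some constructive $s$-supergale succeeds on } S \,\bigr\}.
```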
A discrete version of constructive dimension is also developed using
termgales, which are supergale-like functions that bet on the terminations of
(finite, binary) strings as well as on their successive bits. This discrete
dimension is used to assign each individual string w a dimension, which is a
nonnegative real number dim(w). The dimension of a sequence is shown to be the
limit inferior of the dimensions of its prefixes.
The Kolmogorov complexity of a string is proven to be the product of its
length and its dimension. This gives a new characterization of algorithmic
information and a new proof of Mayordomo's recent theorem stating that the
dimension of a sequence is the limit inferior of the average Kolmogorov
complexity of its first n bits.
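The two limit-inferior characterizations just stated can be written as a pair of formulas (standard notation assumed: S↾n denotes the first n bits of S, and K denotes Kolmogorov complexity):

```latex
\dim(S) \;=\; \liminf_{n\to\infty} \dim(S \upharpoonright n)
\qquad\text{and}\qquad
\dim(S) \;=\; \liminf_{n\to\infty} \frac{K(S \upharpoonright n)}{n}.
```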
Every sequence that is random relative to any computable sequence of
coin-toss biases that converge to a real number \beta in (0,1) is shown to have
dimension H(\beta), the binary entropy of \beta.
Comment: 31 pages
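The binary entropy in the last result has a simple closed form; a minimal numerical check (the helper name is hypothetical, not from the paper):

```python
from math import log2

def binary_entropy(beta: float) -> float:
    """H(beta) = -beta*log2(beta) - (1-beta)*log2(1-beta), with H(0) = H(1) = 0."""
    if beta in (0.0, 1.0):
        return 0.0
    return -beta * log2(beta) - (1.0 - beta) * log2(1.0 - beta)

print(binary_entropy(0.5))  # 1.0: a fair coin carries one full bit per toss
```

Note that H is symmetric about 1/2 and peaks there, matching the fact that dimension 1 is attained exactly for unbiased (Martin-Löf random) sequences.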
Conformal dimension and random groups
We give a lower and an upper bound for the conformal dimension of the
boundaries of certain small cancellation groups. We apply these bounds to the
few relator and density models for random groups. This gives generic bounds of
the following form, where l is the relator length, going to infinity:
(a) 1 + 1/C < Cdim(∂G) < C l / log(l), for the few relator model, and
(b) 1 + l / (C log(l)) < Cdim(∂G) < C l, for the density model, at
densities d < 1/16.
In particular, for the density model at densities d < 1/16, as the relator
length l goes to infinity, the random groups pass through infinitely many
different quasi-isometry classes.
Comment: 32 pages, 4 figures. v2: Final version. Main result improved to density < 1/16. Many minor improvements. To appear in GAF
On zeros of Martin-Löf random Brownian motion
We investigate the sample path properties of Martin-Löf random Brownian
motion. We show (1) that many classical results which are known to hold almost
surely hold for every Martin-Löf random Brownian path, (2) that the effective
dimension of the zeros of a Martin-Löf random Brownian path must be at least
1/2, and conversely that every real with effective dimension greater than 1/2
must be a zero of some Martin-Löf random Brownian path, and (3) we give a new
proof that the solution to the Dirichlet problem in the plane is computable.
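The abstract's results concern genuinely Martin-Löf random paths, which no pseudorandom simulation produces; still, the classical picture behind result (2) — that the zero set of a Brownian path is large (Hausdorff dimension 1/2) — can be illustrated with a discretized Gaussian random walk and its sign changes. All names and parameters below are hypothetical choices for the sketch:

```python
import random

def brownian_path(n_steps=10_000, dt=1e-4, seed=1):
    """Discrete-time approximation of a Brownian path: a Gaussian random walk
    with increments of standard deviation sqrt(dt), started at 0."""
    rng = random.Random(seed)
    w = [0.0]
    for _ in range(n_steps):
        w.append(w[-1] + rng.gauss(0.0, dt ** 0.5))
    return w

def zero_crossings(w):
    """Indices i where the discretized path is at zero or changes sign
    between steps i and i+1 (a proxy for zeros of the continuous path)."""
    return [i for i in range(len(w) - 1) if w[i] == 0.0 or w[i] * w[i + 1] < 0]
```

On a typical run the crossings cluster near one another, a shadow of the fact that Brownian zeros form a perfect, measure-zero set.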
Rings with Auslander Dualizing Complexes
A ring with an Auslander dualizing complex is a generalization of an
Auslander-Gorenstein ring. We show that many results which hold for
Auslander-Gorenstein rings also hold in this more general setting. On the other
hand, we give criteria for the existence of Auslander dualizing complexes which
show that these occur quite frequently.
The most powerful tool we use is the Local Duality Theorem for connected
graded algebras over a field. Filtrations allow the transfer of results to
non-graded algebras.
We also prove some results of a categorical nature, most notably the
functoriality of rigid dualizing complexes.
Comment: 39 pages, AMSLaTeX. Final version, to appear in J. Algebra. Corrected mistake in proof of Thm. 1.13; minor corrections.
Semantic Sort: A Supervised Approach to Personalized Semantic Relatedness
We propose and study a novel supervised approach to learning statistical
semantic relatedness models from subjectively annotated training examples. The
proposed semantic model consists of parameterized co-occurrence statistics
associated with textual units of a large background knowledge corpus. We
present an efficient algorithm for learning such semantic models from a
training sample of relatedness preferences. Our method is corpus independent
and can essentially rely on any sufficiently large (unstructured) collection of
coherent texts. Moreover, the approach facilitates the fitting of semantic
models for specific users or groups of users. We present the results of an
extensive range of experiments, from small to large scale, indicating that the
proposed method is effective and competitive with the state-of-the-art.
Comment: 37 pages, 8 figures. A short version of this paper was already published at ECML/PKDD 201
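The paper's learned, parameterized model is not reproduced here, but the kind of corpus statistic it parameterizes — co-occurrence counts over textual units of a background corpus — can be sketched as plain pointwise mutual information over sentence-level co-occurrence. All function names are hypothetical, and the paper's supervised fitting step is deliberately omitted:

```python
from collections import Counter
from itertools import combinations
from math import log

def cooccurrence_counts(corpus_sentences):
    """Count per-sentence word occurrences and unordered word-pair
    co-occurrences over a list of sentences (the textual units)."""
    word_counts, pair_counts = Counter(), Counter()
    for sent in corpus_sentences:
        words = set(sent.lower().split())
        word_counts.update(words)
        pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))
    return word_counts, pair_counts

def pmi_relatedness(a, b, word_counts, pair_counts, n_sentences):
    """Pointwise mutual information log2(P(a,b) / (P(a)P(b))) as a simple,
    unsupervised relatedness score; -inf when the pair never co-occurs."""
    pair = pair_counts[frozenset((a, b))]
    if pair == 0:
        return float("-inf")
    p_ab = pair / n_sentences
    p_a = word_counts[a] / n_sentences
    p_b = word_counts[b] / n_sentences
    return log(p_ab / (p_a * p_b), 2)
```

A supervised variant in the spirit of the paper would replace the fixed PMI weighting with parameters fit to annotated relatedness preferences.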