An algorithmic complexity interpretation of Lin's third law of information theory
Instead of static entropy we assert that the Kolmogorov complexity of a
static structure such as a solid is the proper measure of disorder (or
chaoticity). A static structure in a surrounding perfectly random universe acts
as an interfering entity which introduces local disruption in randomness. This
is modeled by a selection rule which selects a subsequence of the random
input sequence that hits the structure. Through the inequality that relates
stochasticity and chaoticity of random binary sequences we maintain that Lin's
notion of stability corresponds to the stability of the frequency of 1s in the
selected subsequence. This explains why more complex static structures are less
stable. Lin's third law is represented as the inevitable change that static
structures undergo towards conforming to the universe's perfect randomness.
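The selection-rule idea above can be illustrated with a toy simulation (a sketch for illustration only; the function names and the particular selector are hypothetical, not from the paper): a Church-style selection rule decides from the prefix alone whether to select the next bit, and on a random input the frequency of 1s in the selected subsequence stays stable near 1/2.

```python
import random

def select_subsequence(bits, selector):
    """Apply a selection rule: keep only the bits at positions the
    selector picks, deciding from the prefix seen so far (Church-style:
    the decision is made before the selected bit is revealed)."""
    selected = []
    for i, b in enumerate(bits):
        if selector(bits[:i]):
            selected.append(b)
    return selected

def frequency_of_ones(bits):
    return sum(bits) / len(bits) if bits else 0.0

# Toy experiment: on a long random input, an admissible selection rule
# cannot bias the frequency of 1s away from 1/2 (stochasticity).
random.seed(0)
seq = [random.randint(0, 1) for _ in range(100_000)]
# Hypothetical selector: pick positions whose prefix ends in a 1.
sub = select_subsequence(seq, lambda prefix: bool(prefix) and prefix[-1] == 1)
print(frequency_of_ones(sub))
```

The printed frequency lands close to 0.5; a selector that could push it away from 1/2 would witness non-stochasticity of the input.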
Van Lambalgen's Theorem for uniformly relative Schnorr and computable randomness
We correct Miyabe's proof of van Lambalgen's Theorem for truth-table Schnorr
randomness (which we will call uniformly relative Schnorr randomness). An
immediate corollary is one direction of van Lambalgen's theorem for Schnorr
randomness. It has been claimed in the literature that this corollary (and the
analogous result for computable randomness) is a "straightforward modification
of the proof of van Lambalgen's Theorem." This is not so, and we point out why.
We also point out an error in Miyabe's proof of van Lambalgen's Theorem for
truth-table reducible randomness (which we will call uniformly relative
computable randomness). While we do not fix the error, we do prove a weaker
version of van Lambalgen's Theorem where each half is computably random
uniformly relative to the other.
Kolmogorov Complexity and Solovay Functions
Solovay proved that there exists a computable upper bound f of the
prefix-free Kolmogorov complexity function K such that f(x) = K(x) for
infinitely many x. In this paper, we consider the class of computable functions
f such that K(x) ≤ f(x) + O(1) for all x and f(x) ≤ K(x) + O(1) for
infinitely many x, which we call Solovay functions. We show that Solovay
functions present interesting connections with randomness notions such as
Martin-Löf randomness and K-triviality.
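As a concrete example of the easy half of this definition, here is a minimal sketch (the helper name is hypothetical) of a standard computable upper bound on K: describe x by a self-delimiting encoding of its length followed by x itself. Such an f is computable and bounds K from above up to a constant, but nothing in the sketch certifies the "infinitely often tight" lower condition that makes f a Solovay function.

```python
import math

def length_prefixed_bound(x: str) -> int:
    """A computable upper bound on prefix-free complexity K(x):
    encode the length n of x in self-delimiting form (about
    2*log2(n) bits), then the n bits of x literally, giving
    K(x) <= n + 2*ceil(log2(n + 1)) + O(1)."""
    n = len(x)
    return n + 2 * math.ceil(math.log2(n + 1)) + 1

print(length_prefixed_bound("0" * 1000))  # 1021
```

For highly compressible strings like "0" * 1000 the true K is far below this bound, which is exactly why being tight infinitely often is a nontrivial extra property.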
Polynomial-clone reducibility
Plan B paper, M.A., Mathematics, University of Hawaii at Manoa, 2010
Polynomial-clone reducibilities are generalizations of the truth-table reducibilities. A polynomial clone is a set of functions over a finite set X that is closed under composition and contains all the constant and projection functions. For a fixed polynomial clone C, a sequence B ∈ X^ω is C-reducible to A ∈ X^ω if there is an algorithm that computes B from A using only effectively selected functions from C. We show that if A is a Kurtz random sequence and C1 and C2 are distinct polynomial clones, then there is a sequence B that is C1-reducible to A but not C2-reducible to A. This implies a generalization of a result first proved by Lachlan for the case |X| = 2. We also show that the same result holds if Kurtz randomness is replaced by Kolmogorov-Loveland stochasticity.
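To make the shape of such a reduction concrete, here is a hedged sketch (all names and the choice of clone function are hypothetical illustrations, not the paper's construction) over X = {0, 1}: each bit of B is obtained by applying an effectively selected function from the clone to finitely many bits of A, in the style of a truth-table reduction.

```python
def majority3(a, b, c):
    """Majority of three bits; majority belongs to many clones over {0, 1}."""
    return 1 if a + b + c >= 2 else 0

def clone_reduce(A, selector):
    """selector(n) effectively returns (function, positions): which clone
    function computes bit n of B, and which bits of A it is applied to."""
    B = []
    for n in range(len(A) - 2):
        f, positions = selector(n)
        B.append(f(*(A[i] for i in positions)))
    return B

A = [1, 0, 1, 1, 0, 0, 1, 0]
# Hypothetical selection: majority of three consecutive bits of A.
B = clone_reduce(A, lambda n: (majority3, (n, n + 1, n + 2)))
print(B)  # [1, 1, 1, 0, 0, 0]
```

Restricting which functions the selector may return (i.e., fixing the clone C) is what distinguishes the C-reducibilities from one another.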
Separations of Non-monotonic Randomness Notions
In the theory of algorithmic randomness, several notions of random sequence are defined via a game-theoretic approach, and the notions that have received the most attention are perhaps Martin-Löf randomness
and computable randomness. The latter notion was introduced by Schnorr and is rather natural: an infinite binary sequence is computably random if no total computable strategy succeeds on it by betting on bits in order. However, computably random sequences can have properties that one may consider incompatible with being random; in particular, there are computably random sequences that are highly compressible. The concept of Martin-Löf randomness is much better behaved in this and other respects; on the other hand, its definition in terms of martingales is considerably less natural.
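The monotone betting model behind computable randomness can be sketched as a toy simulation (the function names and the particular strategy are hypothetical examples): a strategy scans the bits in order, wagering a fraction of its current capital on the value of the next bit, and it succeeds on an infinite sequence if its capital grows unboundedly.

```python
def run_martingale(bits, bet):
    """Play a monotone betting strategy on the bits in order.
    bet(prefix) returns (predicted_bit, stake in [0, 1]).
    Capital is multiplied by (1 + stake) on a correct guess and
    by (1 - stake) on a wrong one (a fair payoff scheme)."""
    capital = 1.0
    history = []
    for b in bits:
        guess, stake = bet(history)
        capital *= (1 + stake) if guess == b else (1 - stake)
        history.append(b)
    return capital

# Hypothetical strategy: always wager half the capital that the next bit is 0.
always_zero = lambda prefix: (0, 0.5)
print(run_martingale([0, 0, 0, 0], always_zero))  # 1.5**4 = 5.0625
print(run_martingale([1, 1, 1, 1], always_zero))  # 0.5**4 = 0.0625
```

A sequence is computably random exactly when no total computable strategy of this monotone kind makes the capital unbounded along it.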
Muchnik, elaborating on ideas of Kolmogorov and Loveland, refined Schnorr's model by also allowing non-monotonic strategies, i.e., strategies that do not bet on bits in order. The resulting "non-monotonic" notion of randomness, now called Kolmogorov-Loveland randomness, has been shown to be quite close to Martin-Löf randomness, but whether these two classes coincide remains a fundamental open question.
In order to get a better understanding of non-monotonic randomness notions, Miller and Nies introduced some interesting intermediate concepts, where one only allows non-adaptive strategies, i.e., strategies that can still bet non-monotonically, but such that the sequence of betting positions is known in advance (and computable). Recently, these notions were shown by Kastermans and Lempp to differ from Martin-Löf randomness. We continue the study of the non-monotonic randomness notions introduced by Miller and Nies and obtain results about the Kolmogorov complexities of initial segments that may and may not occur for such sequences; these results then imply a complete classification of these randomness notions by order of strength.
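The non-adaptive model described above can be sketched minimally as follows (scan order and strategy are hypothetical examples, not from the paper): the list of betting positions is fixed and computable in advance, while the wagers may still depend on the bits already revealed along that order.

```python
def run_nonadaptive(bits, positions, bet):
    """Non-adaptive betting: the scan order `positions` is fixed in
    advance, but bet(revealed) may use the bits revealed so far along
    that order, returning (predicted_bit, stake in [0, 1])."""
    capital = 1.0
    revealed = []
    for pos in positions:
        guess, stake = bet(revealed)
        b = bits[pos]
        capital *= (1 + stake) if guess == b else (1 - stake)
        revealed.append(b)
    return capital

# Hypothetical scan order: the even positions first, then the odd ones.
bits = [0, 1, 0, 1, 0, 1, 0, 1]
order = list(range(0, 8, 2)) + list(range(1, 8, 2))
# Hypothetical strategy: repeat the last revealed bit, staking 1/2.
copy_last = lambda seen: (seen[-1] if seen else 0, 0.5)
print(run_nonadaptive(bits, order, copy_last))  # 8.54296875
```

On this alternating sequence the even-then-odd scan exposes two constant runs, so the copying strategy wins 7 of 8 bets; a fully adaptive (non-monotonic) strategy could instead choose its next position based on the bits seen, which is exactly the extra power the intermediate notions remove.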