Characterizing the strongly jump-traceable sets via randomness
We show that if a set is computable from every superlow 1-random set,
then it is strongly jump-traceable. This theorem shows that the computably
enumerable (c.e.) strongly jump-traceable sets are exactly the c.e.\ sets
computable from every superlow 1-random set.
We also prove the analogous result for superhighness: a c.e.\ set is strongly
jump-traceable if and only if it is computable from every superhigh 1-random
set.
Finally, we show that for each cost function $c$ with the limit condition
there is a 1-random set $Y$ such that every c.e.\ set $A \le_T Y$
obeys $c$. To do so, we connect cost function strength and the strength of
randomness notions. This result gives a full correspondence between obedience
of cost functions and being computable from 1-random sets.
Comment: 41 pages
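For context, the cost-function framework used in this abstract can be stated as follows. This is a sketch of the standard definitions, with notation assumed rather than quoted from the paper:

```latex
% Sketch (standard definitions; notation assumed, not quoted from the paper).
% A cost function is a computable function c(x,s) \ge 0.
% It satisfies the limit condition if
\lim_{x \to \infty} \, \sup_{s} \, c(x,s) = 0 .
% A computable approximation (A_s) of a set A obeys c if the total cost
% of all changes made by the approximation is finite:
\sum \bigl\{\, c(x,s) : x < s \text{ and } x \text{ least with } A_{s-1}(x) \neq A_s(x) \,\bigr\} < \infty .
```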
Lowness notions, measure and domination
We show that positive measure domination implies uniform almost everywhere
domination and that this proof translates into a proof in the subsystem
WWKL$_0$ (but not in RCA$_0$) of the equivalence of various Lebesgue measure
regularity statements introduced by Dobrinen and Simpson. This work also allows
us to prove that lowness for weak 2-randomness is the same as lowness for
Martin-L\"of randomness (a result independently obtained by Nies). Using the
same technique, we show that $\le_{LR}$ implies $\le_{LK}$, generalizing the
fact that lowness for Martin-L\"of randomness implies lowness for $K$.
Calibrating the complexity of $\Delta^0_2$ sets via their changes
The computational complexity of a $\Delta^0_2$ set will be calibrated by the
amount of changes needed for any of its computable approximations. Firstly, we
study Martin-L\"of random sets, where we quantify the changes of initial
segments. Secondly, we look at c.e.\ sets, where we quantify the overall amount
of changes by obedience to cost functions. Finally, we combine the two
settings. The discussions lead to three basic principles on how complexity and
changes relate.
Randomness and lowness notions via open covers
One of the main lines of research in algorithmic randomness is that of
lowness notions. Given a randomness notion R, we ask for which sequences A does
relativization to A leave R unchanged (i.e., R^A = R)? Such sequences are called
low for R. This question extends to a pair of randomness notions R and S, where
S is weaker: for which A is S^A still weaker than R? In the last few years,
many results have characterized the sequences that are low for randomness by
their low computational strength. A few results have also given
measure-theoretic characterizations of low sequences. For example, Kjos-Hanssen
proved that A is low for Martin-L\"of randomness if and only if every A-c.e.
open set of measure less than 1 can be covered by a c.e. open set of measure
less than 1. In this paper, we give a series of results showing that a wide
variety of lowness notions can be expressed in a similar way, i.e., via the
ability to cover open sets of a certain type by open sets of some other type.
This provides a unified framework that clarifies the study of lowness for
randomness notions, and allows us to give simple proofs of a number of known
results. We also use this framework to prove new results, including showing
that the classes Low(MLR;SR) and Low(W2R;SR) coincide, answering a question of
Nies. Other applications include characterizations of highness notions, a
broadly applicable explanation for why low for randomness is the same as low
for tests, and a simple proof that Low(W2R;S)=Low(MLR;S), where S is the class
of Martin-L\"of, computable, or Schnorr random sequences. The final section
gives characterizations of lowness notions using summable functions and
convergent measure machines instead of open covers. We finish with a simple
proof of a result of Nies, that Low(MLR) = Low(MLR; CR).
Comment: This is a revised version of the APAL paper. In particular, a full
proof of Proposition 24 is added.
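The covering characterization of Kjos-Hanssen mentioned in this abstract can be written symbolically as follows. The symbolic rendering is ours, as a sketch, with $\lambda$ denoting Lebesgue measure on Cantor space:

```latex
% lambda denotes Lebesgue measure on Cantor space.
A \in \mathrm{Low}(\mathrm{MLR})
\iff
\forall U \,\bigl[\, U \text{ is an } A\text{-c.e.\ open set with } \lambda(U) < 1
\;\Rightarrow\;
\exists V \,( V \text{ is a c.e.\ open set},\ U \subseteq V,\ \lambda(V) < 1 ) \,\bigr] .
```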