Constructive Dimension and Turing Degrees
This paper examines the constructive Hausdorff and packing dimensions of
Turing degrees. The main result is that every infinite sequence S with
constructive Hausdorff dimension dim_H(S) and constructive packing dimension
dim_P(S) is Turing equivalent to a sequence R with dim_H(R) >= (dim_H(S) /
dim_P(S)) - epsilon, for arbitrary epsilon > 0. Furthermore, if dim_P(S) > 0,
then dim_P(R) >= 1 - epsilon. The reduction thus serves as a *randomness
extractor* that increases the algorithmic randomness of S, as measured by
constructive dimension.
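For orientation, the constructive dimensions used in this abstract admit a well-known characterization in terms of prefix-free Kolmogorov complexity K (a standard result, stated here for the reader's convenience and not part of the original abstract; S restricted to n denotes the first n bits of S):

    \dim_H(S) \;=\; \liminf_{n\to\infty} \frac{K(S \upharpoonright n)}{n},
    \qquad
    \dim_P(S) \;=\; \limsup_{n\to\infty} \frac{K(S \upharpoonright n)}{n}.

So if, say, dim_H(S) = 0.3 and dim_P(S) = 0.6, the main result gives a sequence R Turing equivalent to S with dim_H(R) >= 0.5 - epsilon; and since dim_P(S) <= 1 always forces dim_H(S)/dim_P(S) >= dim_H(S), the reduction never decreases, and here strictly increases, the Hausdorff dimension.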
A number of applications of this result shed new light on the constructive
dimensions of Turing degrees. A lower bound of dim_H(S) / dim_P(S) is shown to
hold for the constructive Hausdorff dimension of the Turing degree of any
sequence S. A new proof is given of a
previously-known zero-one law for the constructive packing dimension of Turing
degrees. It is also shown that, for any regular sequence S (that is, dim_H(S) =
dim_P(S)) such that dim_H(S) > 0, the Turing degree of S has constructive
Hausdorff and packing dimension equal to 1.
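The regular-sequence application follows in one line from the main result above (using only the statements in this abstract):

    \dim_H(S) = \dim_P(S) > 0 \;\Longrightarrow\; \frac{\dim_H(S)}{\dim_P(S)} = 1,

so for every epsilon > 0 the Turing degree of S contains a sequence R with dim_H(R) >= 1 - epsilon and dim_P(R) >= 1 - epsilon, and both dimensions of the degree are therefore 1.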
Finally, it is shown that no single Turing reduction can be a universal
constructive Hausdorff dimension extractor, and that bounded Turing reductions
cannot extract constructive Hausdorff dimension. We also exhibit sequences on
which weak truth-table and bounded Turing reductions differ in their ability to
extract dimension.
Comment: The version of this paper appearing in Theory of Computing Systems,
45(4):740-755, 2009, had an error in the proof of Theorem 2.4, due to
insufficient care with the choice of delta. This version modifies that proof
to fix the error.
Bounded time computation on metric spaces and Banach spaces
We extend the framework by Kawamura and Cook for investigating computational
complexity for operators occurring in analysis. This model is based on
second-order complexity theory for functions on the Baire space, which is
lifted to metric spaces by means of representations. Time is measured in terms
of the length of the input encodings and the required output precision. We
propose the notions of a complete representation and of a regular
representation. We show that complete representations ensure that any
computable function has a time bound. Regular representations generalize
Kawamura and Cook's more restrictive notion of a second-order representation,
while still guaranteeing fast computability of the length of the encodings.
Applying these notions, we investigate the relationship between purely metric
properties of a metric space and the existence of a representation such that
the metric is computable within bounded time. We show that a bound on the
running time of the metric can be straightforwardly translated into size bounds
of compact subsets of the metric space. Conversely, for compact spaces and for
Banach spaces we construct a family of admissible, complete, regular
representations that allow for fast computation of the metric and provide short
encodings. Here it is necessary to trade the time bound off against the length
of encodings
Parameterized Uniform Complexity in Numerics: from Smooth to Analytic, from NP-hard to Polytime
The synthesis of classical Computational Complexity Theory with Recursive
Analysis provides a quantitative foundation for reliable numerics. Here the
operators of maximization, integration, and solving ordinary differential
equations are known to map (even high-order differentiable) polynomial-time
computable functions to instances which are `hard' for classical complexity
classes NP, #P, and CH; but, restricted to analytic functions, map
polynomial-time computable ones to polynomial-time computable ones --
non-uniformly!
We investigate the uniform parameterized complexity of the above operators in
the setting of Weihrauch's TTE and its second-order extension due to
Kawamura & Cook (2010). That is, we explore which (both continuous and
discrete, first- and second-order) information and parameters on some given f
are sufficient to obtain similar data on Max(f) and int(f), and within what
running time, in terms of these parameters and the guaranteed output precision
2^(-n).
It turns out that Gevrey's hierarchy of functions climbing from analytic to
smooth corresponds to the computational complexity of maximization growing from
polytime to NP-hard. Proof techniques involve mainly the Theory of (discrete)
Computation, Hard Analysis, and Information-Based Complexity.
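For context, the Gevrey hierarchy mentioned here is the standard scale interpolating between analytic and smooth functions (the definition below is textbook material, not quoted from the paper): f on a compact domain is Gevrey of level l >= 1 if there are constants C, A > 0 with

    \sup_x \bigl| f^{(k)}(x) \bigr| \;\le\; C \, A^{k} \, (k!)^{\,l}
    \qquad \text{for all } k \in \mathbb{N};

level l = 1 is exactly the analytic case, and larger l admits less rigid functions. The constants C, A and the level l are the kind of quantitative data on f in which a uniform parameterized running-time bound for Max(f) and int(f) can be stated.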
Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality
We survey diverse approaches to the notion of information: from Shannon
entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov
complexity are presented: randomness and classification. The survey is divided
into two parts, published in the same volume. Part II is dedicated to the
relation between logic and information systems, within the scope of Kolmogorov
algorithmic information theory. We present a recent application of Kolmogorov
complexity: classification using compression, an idea with provocative
implementations by authors such as Bennett, Vitanyi and Cilibrasi. This
stresses how Kolmogorov complexity, besides being a foundation for randomness,
is also related to classification. Another approach to classification is also
considered: the so-called "Google classification". It rests on another
original and attractive idea, connected to classification using compression
and to Kolmogorov complexity from a conceptual point of view. We present and
unify these different approaches to classification in terms of Bottom-Up
versus Top-Down operational modes, whose fundamental principles and underlying
duality we point out. We look at the way these two dual modes are used in
different approaches to information systems, particularly the relational model
for databases introduced by Codd in the 1970s. This allows us to point out
diverse forms of a fundamental duality. These operational modes are also
reinterpreted in the context of the comprehension schema of axiomatic set
theory ZF. This leads us to develop how Kolmogorov complexity is linked to
intensionality, abstraction, classification and information systems.
Comment: 43 pages
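The compression-based classification credited above to Bennett, Vitanyi and Cilibrasi can be made concrete with the Normalized Compression Distance; a minimal Python sketch, with zlib standing in for the (uncomputable) ideal compressor:

    import zlib

    def c(data: bytes) -> int:
        # A real compressor only upper-bounds Kolmogorov complexity, but the
        # compressed length is a usable stand-in for K(data).
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        # Normalized Compression Distance: close to 0 when the compressor
        # finds shared structure in x and y, close to 1 when it finds none.
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

Clustering objects by their pairwise ncd values is a Bottom-Up mode in the survey's sense: structure emerges from the data, with no schema fixed in advance, in contrast to the Top-Down relational model.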
Dimension Extractors and Optimal Decompression
A *dimension extractor* is an algorithm designed to increase the effective
dimension -- i.e., the amount of computational randomness -- of an infinite
binary sequence, in order to turn a "partially random" sequence into a "more
random" sequence. Extractors are exhibited for various effective dimensions,
including constructive, computable, space-bounded, time-bounded, and
finite-state dimension. Using similar techniques, the Kučera-Gács theorem is
examined from the perspective of decompression, by showing that every infinite
sequence S is Turing reducible to a Martin-Löf random sequence R such that the
asymptotic number of bits of R needed to compute n bits of S, divided by n, is
precisely the constructive dimension of S, which is shown to be the optimal
ratio of query bits to computed bits achievable with Turing reductions. The
extractors and decompressors that are developed lead directly to new
characterizations of some effective dimensions in terms of optimal
decompression by Turing reductions.
Comment: This report was combined with a different conference paper "Every
Sequence is Decompressible from a Random One" (cs.IT/0511074, at
http://dx.doi.org/10.1007/11780342_17), and both titles were changed, with
the conference paper incorporated as section 5 of this new combined paper.
The combined paper was accepted to the journal Theory of Computing Systems,
as part of a special issue of invited papers from the second conference on
Computability in Europe, 2006.
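Stated symbolically, in notation of our own choosing rather than the paper's: writing q(n) for the number of bits of the oracle R queried while computing the first n bits of S, the decompression result asserts the existence of a Martin-Löf random R with S Turing reducible to R and

    \liminf_{n\to\infty} \frac{q(n)}{n} \;=\; \dim(S),

where liminf is chosen here to match constructive Hausdorff dimension (the packing analogue would use limsup), and no Turing reduction to a Martin-Löf random oracle achieves a smaller asymptotic query ratio, which is the optimality claim in the abstract.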