Complexity in Prefix-Free Regular Languages
We examine deterministic and nondeterministic state complexities of regular
operations on prefix-free languages. We strengthen several results by providing
witness languages over smaller alphabets, usually as small as possible. We next
provide tight bounds on the state complexity of symmetric difference, and on the
deterministic and nondeterministic state complexity of difference and cyclic
shift of prefix-free languages.
Comment: In Proceedings DCFS 2010, arXiv:1008.127
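As background for the operations whose state complexity is bounded above, here is a
minimal Python sketch (my own illustration, not code or notation from the paper) of the
standard product construction realizing the symmetric difference of two complete DFAs;
the dictionary-based DFA representation is an assumption of the sketch.

```python
from itertools import product

def symmetric_difference_dfa(dfa1, dfa2):
    """Standard product construction: a pair state is accepting iff exactly
    one component is accepting (XOR), which yields the symmetric difference
    of the two languages.

    Each DFA is a dict with keys: 'states', 'alphabet', 'delta'
    (a dict mapping (state, symbol) -> state), 'start', 'finals'.
    Both DFAs are assumed complete and over the same alphabet.
    """
    alphabet = dfa1['alphabet']
    delta = {}
    for (p, q), a in product(product(dfa1['states'], dfa2['states']), alphabet):
        delta[((p, q), a)] = (dfa1['delta'][(p, a)], dfa2['delta'][(q, a)])
    finals = {(p, q)
              for p in dfa1['states'] for q in dfa2['states']
              if (p in dfa1['finals']) != (q in dfa2['finals'])}
    return {'states': set(product(dfa1['states'], dfa2['states'])),
            'alphabet': alphabet,
            'delta': delta,
            'start': (dfa1['start'], dfa2['start']),
            'finals': finals}

def accepts(dfa, word):
    """Run a complete DFA on a word and report acceptance."""
    state = dfa['start']
    for a in word:
        state = dfa['delta'][(state, a)]
    return state in dfa['finals']
```

In general the product automaton has at most $mn$ states when the inputs have $m$ and
$n$ states; tight bounds such as the ones above determine how close to such generic
bounds one must stay when both inputs are prefix-free.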
Around Kolmogorov complexity: basic notions and results
Algorithmic information theory studies description complexity and randomness
and is now a well known field of theoretical computer science and mathematical
logic. There are several textbooks and monographs devoted to this theory where
one can find the detailed exposition of many difficult results as well as
historical references. However, it seems that a short survey of its basic
notions and main results, relating these notions to each other, is missing.
This report attempts to fill this gap and covers the basic notions of
algorithmic information theory: Kolmogorov complexity (plain, conditional,
prefix), Solomonoff universal a priori probability, notions of randomness
(Martin-L\"of randomness, Mises--Church randomness), effective Hausdorff
dimension. We prove their basic properties (symmetry of information, connection
between a priori probability and prefix complexity, criterion of randomness in
terms of complexity, complexity characterization for effective dimension) and
show some applications (incompressibility method in computational complexity
theory, incompleteness theorems). It is based on the lecture notes of a course
at Uppsala University given by the author.
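One of the listed notions, plain Kolmogorov complexity, is uncomputable, but any
concrete compressor gives a computable upper bound on it (up to an additive constant
for the fixed decompressor). The toy sketch below is my own illustration, not from the
survey; it uses Python's zlib and os.urandom to contrast a highly regular string with
random bytes.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of a zlib-compressed description of `data`.

    This is only an upper bound on Kolmogorov complexity (up to an additive
    constant for the fixed decompressor): no algorithm computes the true C(x).
    """
    return len(zlib.compress(data, level=9))

regular = b"ab" * 5000          # highly compressible: short description
random_ = os.urandom(10000)     # incompressible with high probability

print(len(regular), compressed_size(regular))   # e.g. 10000 vs a few dozen bytes
print(len(random_), compressed_size(random_))   # roughly 10000 vs about 10000
```

The incompressibility method mentioned in the abstract exploits the counting fact
behind this picture: most strings of a given length have no description shorter than
their length, so an incompressible string can be chosen as a worst-case witness.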
Operations on Automata with All States Final
We study the complexity of basic regular operations on languages represented
by incomplete deterministic or nondeterministic automata, in which all states
are final. Such languages are known to be prefix-closed. We get tight bounds on
both incomplete and nondeterministic state complexity of complement,
intersection, union, concatenation, star, and reversal on prefix-closed
languages.
Comment: In Proceedings AFL 2014, arXiv:1405.527
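The prefix-closure property itself is easy to see operationally: in an incomplete
deterministic automaton where every state is final, a word is accepted exactly when the
partial transition function is defined along it, so every prefix of an accepted word is
accepted as well. A minimal sketch with an invented example automaton (the language
a*b* is my own choice, not one from the paper):

```python
def accepts(delta, start, word):
    """Membership in an incomplete DFA where *all* states are final:
    a word is accepted iff the (partial) transition function is defined
    along the whole word."""
    state = start
    for symbol in word:
        if (state, symbol) not in delta:
            return False        # undefined transition: reject
        state = delta[(state, symbol)]
    return True                 # every state is final

# Invented example: accepts the prefix-closed language a*b* over {a, b},
# i.e. words in which no 'b' is ever followed by an 'a'.
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'b'): 1}

for word in ("", "aab", "abb", "aba", "baa"):
    if accepts(delta, 0, word):
        # Prefix-closure check: every prefix of an accepted word is accepted.
        assert all(accepts(delta, 0, word[:i]) for i in range(len(word) + 1))
print("all accepted words passed the prefix-closure check")
```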
Existential Second-Order Logic Over Graphs: A Complete Complexity-Theoretic Classification
Descriptive complexity theory aims at inferring a problem's computational
complexity from the syntactic complexity of its description. A cornerstone of
this theory is Fagin's Theorem, by which a graph property is expressible in
existential second-order logic (ESO logic) if, and only if, it is in NP. A
natural question, from the theory's point of view, is which syntactic fragments
of ESO logic still characterize NP. Research on this question has
culminated in a dichotomy result by Gottlob, Kolaitis, and Schwentick: for each
possible quantifier prefix of an ESO formula, the resulting prefix class either
contains an NP-complete problem or is contained in P. However, the exact
complexity of the prefix classes inside P remained elusive. In the present
paper, we clear up the picture by showing that for each prefix class of ESO
logic, its reduction closure under first-order reductions is either FO, L, NL,
or NP. For undirected, self-loop-free graphs, two containment results are
especially challenging to prove: containment in L for one of the quantifier
prefixes and containment in FO for another prefix whose second-order variable is
monadic. The complex argument by Gottlob, Kolaitis, and Schwentick concerning
polynomial time needs to be carefully reexamined and either combined with the
logspace version of Courcelle's Theorem or directly improved to first-order
computations. A different challenge is posed by formulas with a third quantifier
prefix: we show that they express special constraint satisfaction problems
that lie in L.
Comment: Technical report version of a STACS 2015 paper
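Fagin's Theorem has a direct brute-force reading on small structures: an ESO sentence
holds iff some choice of the second-order relations makes the first-order part true.
The sketch below is my own illustration rather than anything from the paper; it checks
a sentence of the shape $\exists R\,\forall x\forall y\,\varphi$, here instantiated
with 2-colorability (a textbook ESO-expressible constraint satisfaction problem), by
enumerating all candidate unary relations $R$.

```python
from itertools import chain, combinations

def holds_exists_R_forall_xy(vertices, edges, phi):
    """Check a sentence of the shape  exists R . forall x forall y . phi(R, x, y)
    on a finite graph by enumerating all candidate unary relations R (subsets of
    the vertex set) and verifying the universal first-order part."""
    subsets = chain.from_iterable(combinations(vertices, k)
                                  for k in range(len(vertices) + 1))
    for R in map(frozenset, subsets):
        if all(phi(R, x, y) for x in vertices for y in vertices):
            return True
    return False

# phi for 2-colorability: adjacent vertices get different colors,
# where "x is colored red" is modelled as x in R.
def two_color_phi(edges):
    return lambda R, x, y: ((x, y) not in edges) or ((x in R) != (y in R))

path_vertices = {0, 1, 2, 3}
path = {(0, 1), (1, 2), (2, 3)} | {(b, a) for a, b in {(0, 1), (1, 2), (2, 3)}}
odd_cycle = {(0, 1), (1, 2), (2, 0)} | {(b, a) for a, b in {(0, 1), (1, 2), (2, 0)}}

print(holds_exists_R_forall_xy(path_vertices, path, two_color_phi(path)))           # True
print(holds_exists_R_forall_xy({0, 1, 2}, odd_cycle, two_color_phi(odd_cycle)))     # False
```

The exponential enumeration of $R$ is exactly what the NP upper bound in Fagin's
Theorem reflects; the classification above asks when it can be replaced by NL, L, or
even first-order means.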
Relating and contrasting plain and prefix Kolmogorov complexity
In [3] a short proof is given that some strings have maximal plain Kolmogorov
complexity but not maximal prefix-free complexity. The proof uses Levin's
symmetry of information, Levin's formula relating plain and prefix complexity,
and G\'acs' theorem that the complexity of the complexity, given the string, can be high.
We argue that the proof technique and results mentioned above are useful to
simplify existing proofs and to solve open questions.
We present a short proof of Solovay's result [21] relating plain and prefix
complexity: $K(x) = C(x) + CC(x) + O(CCC(x))$ and $C(x) = K(x) - KK(x) + O(KKK(x))$
(here $CC(x)$ denotes $C(C(x))$, etc.).
We show that there exist $\omega$ such that $\liminf_n \big(C(\omega_1\dots\omega_n) - C(n)\big)$
is infinite and $\liminf_n \big(K(\omega_1\dots\omega_n) - K(n)\big)$ is
finite, i.e. the infinitely often C-trivial reals are not the same as the
infinitely often K-trivial reals (answering [1, Question 1]).
Solovay showed that for infinitely many $x$ we have $|x| - C(x) \le O(1)$
and $|x| + K(|x|) - K(x) \ge \log^{(2)} |x| - O(\log^{(3)} |x|)$ (here $|x|$
denotes the length of $x$ and $\log^{(2)} = \log\log$, etc.). We show that this
result holds for prefixes of some 2-random sequences.
Finally, we generalize our proof technique and show that no monotone relation
exists between expectation-bounded and probability-bounded randomness deficiency
(answering [6, Question 1]).
Comment: 20 pages, 1 figure
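The basic gap between plain and prefix complexity comes from self-delimitation: a
prefix-free machine must recognize where its input ends. A standard trick encodes the
length of a description with every bit doubled followed by the delimiter '01', which
costs about $2\log|p|$ extra bits and gives the classical bound
$K(x) \le C(x) + 2\log C(x) + O(1)$; Solovay's equalities quoted above replace this
crude overhead by the exact second-order term $CC(x)$. The Python sketch below is my
own illustration of that encoding, not code from the paper.

```python
def encode(description: str) -> str:
    """Make a binary description self-delimiting: write its length in binary
    with every bit doubled, then the delimiter '01', then the description
    itself. Overhead: about 2*log2(len) + 2 bits."""
    length_bits = bin(len(description))[2:]
    return ''.join(b + b for b in length_bits) + '01' + description

def decode(stream: str) -> tuple[str, str]:
    """Read one self-delimiting description from the front of `stream` and
    return (description, rest). No end marker for `stream` is needed,
    which is exactly the prefix-free property."""
    i, length_bits = 0, ''
    while stream[i:i + 2] != '01':
        length_bits += stream[i]       # the doubled bits agree pairwise
        i += 2
    n = int(length_bits, 2)
    start = i + 2
    return stream[start:start + n], stream[start + n:]

p = '1011001'                          # stand-in for a shortest plain description
encoded = encode(p)
assert decode(encoded + 'junk that follows') == (p, 'junk that follows')
print(len(p), len(encoded))            # 7 vs 7 + 2*3 + 2 = 15
```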
