Strong isomorphism reductions in complexity theory
We give the first systematic study of strong isomorphism reductions, a notion of reduction more appropriate than polynomial time reduction when, for example, comparing the computational complexity of the isomorphism problem for different classes of structures. We show that the partial ordering of its degrees is quite rich. We analyze its relationship to a further type of reduction between classes of structures based on purely comparing, for every n, the number of nonisomorphic structures of cardinality at most n in both classes. Furthermore, in a more general setting we address the question of the existence of a maximal element in the partial ordering of the degrees.
Complexity of equivalence relations and preorders from computability theory
We study the relative complexity of equivalence relations and preorders from
computability theory and complexity theory. Given binary relations $R, S$, a
componentwise reducibility is defined by $R \le S \iff \exists f \, \forall x, y \,
[x R y \leftrightarrow f(x) S f(y)]$. Here $f$ is taken from a suitable class of effective
functions. For us the relations will be on natural numbers, and $f$ must be
computable. We show that there is a $\Pi^0_1$-complete equivalence relation, but
no $\Pi^0_k$-complete one for $k \ge 2$.
We show that $\Sigma^0_k$ preorders arising naturally in the above-mentioned
areas are $\Sigma^0_k$-complete. This includes polynomial time $m$-reducibility
on exponential time sets, which is $\Sigma^0_2$, almost inclusion on r.e.\ sets,
which is $\Sigma^0_3$, and Turing reducibility on r.e.\ sets, which is $\Sigma^0_4$.
Comment: To appear in J. Symb. Logic
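The componentwise reducibility defined above can be checked by exhaustive search when the relations live on a small finite domain and "effective functions" are simply all maps. A minimal sketch (a toy illustration, not code from the paper; the relations R and S below are made-up examples):

```python
from itertools import product

# Componentwise reducibility R <= S: there must exist a function f
# with  x R y  <=>  f(x) S f(y)  for all x, y in the domain.
# On a finite domain we can simply try every candidate f.

def reduces(R, S, domain, codomain):
    """Return a witnessing f (as a dict) if R <= S via some f, else None."""
    for values in product(codomain, repeat=len(domain)):
        f = dict(zip(domain, values))
        if all(((x, y) in R) == ((f[x], f[y]) in S)
               for x in domain for y in domain):
            return f
    return None

# R: equivalence relation on {0,1,2} with classes {0,1} and {2}
R = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)}
# S: equality on {0,1,2} -- three singleton classes
S = {(0, 0), (1, 1), (2, 2)}

print(reduces(R, S, [0, 1, 2], [0, 1, 2]) is not None)  # True
print(reduces(S, R, [0, 1, 2], [0, 1, 2]) is not None)  # False
```

The asymmetry shows why the number of classes matters: S has three classes and R only two, so no f can keep the three S-classes apart inside R, while merging classes in the other direction is easy.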
Some hard families of parameterised counting problems
We consider parameterised subgraph-counting problems of the following form:
given a graph G, how many k-tuples of its vertices have a given property? A
number of such problems are known to be #W[1]-complete; here we substantially
generalise some of these existing results by proving hardness for two large
families of such problems. We demonstrate that it is #W[1]-hard to count the
number of k-vertex subgraphs having any property where the number of distinct
edge-densities of labelled subgraphs that satisfy the property is o(k^2). In
the special case that the property in question depends only on the number of
edges in the subgraph, we give a strengthening of this result which leads to
our second family of hard problems.Comment: A few more minor changes. This version to appear in the ACM
Transactions on Computation Theor
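The problem template in this abstract — given a graph G, count the k-tuples of vertices whose induced subgraph has a given property — can be pinned down with a brute-force counter. A minimal sketch (hypothetical example code, exponential in k; the point of #W[1]-hardness is that no f(k)·n^O(1) algorithm is expected for such properties):

```python
from itertools import combinations

def count_k_subsets_with_property(edges, n, k, prop):
    """Count k-vertex subsets of a graph on vertices 0..n-1 whose
    induced subgraph satisfies prop(num_induced_edges, k).
    Brute force over all C(n, k) subsets."""
    E = {frozenset(e) for e in edges}
    total = 0
    for S in combinations(range(n), k):
        # number of edges induced by S
        m = sum(1 for u, v in combinations(S, 2) if frozenset((u, v)) in E)
        if prop(m, k):
            total += 1
    return total

# Example: count induced k-cliques -- a property depending only on the
# number of edges -- in the complete graph K_4: C(4,3) = 4 triangles.
K4_edges = [(u, v) for u, v in combinations(range(4), 2)]
print(count_k_subsets_with_property(
    K4_edges, 4, 3, lambda m, k: m == k * (k - 1) // 2))  # 4
```

Here `prop` depends only on the induced edge count, matching the special case singled out at the end of the abstract.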
Complexity of distances: Theory of generalized analytic equivalence relations
We generalize the notion of analytic/Borel equivalence relations, orbit
equivalence relations, and Borel reductions between them to their continuous
and quantitative counterparts: analytic/Borel pseudometrics, orbit
pseudometrics, and Borel reductions between them. We motivate these concepts on
examples and we set some basic general theory. We illustrate the new notion of
reduction by showing that the Gromov-Hausdorff distance maintains the same
complexity if it is defined on the class of all Polish metric spaces, spaces
bounded from below, from above, and from both below and above. Then we show
that $E_1$ is not reducible to equivalences induced by orbit pseudometrics,
generalizing the seminal result of Kechris and Louveau. We answer in the negative a
question of Ben-Yaacov, Doucha, Nies, and Tsankov on whether balls in the
Gromov-Hausdorff and Kadets distances are Borel. In the appendix, we provide new
methods using games showing that the distance-zero classes in certain
pseudometrics are Borel, extending the results of Ben Yaacov, Doucha, Nies, and
Tsankov.
There is a complementary paper of the authors where reductions between the
most common pseudometrics from functional analysis and metric geometry are
provided.
Comment: Based on the feedback we received, we decided to split the original version into two parts. The new version is now the first part of this split
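For finite metric spaces, the Gromov-Hausdorff distance studied above has a concrete combinatorial form: half the minimal distortion over all correspondences between the two point sets. A minimal sketch (illustration only, not from the paper; exponential in the number of point pairs):

```python
from itertools import chain, combinations

def gh_distance(dX, dY):
    """Brute-force Gromov-Hausdorff distance between two finite metric
    spaces given as distance matrices.  d_GH = (1/2) * min over
    correspondences C of max |dX(x,x') - dY(y,y')| over pairs in C,
    where a correspondence must cover both point sets."""
    X, Y = range(len(dX)), range(len(dY))
    pairs = [(x, y) for x in X for y in Y]
    best = float('inf')
    # enumerate all nonempty subsets of X x Y
    for C in chain.from_iterable(combinations(pairs, r)
                                 for r in range(1, len(pairs) + 1)):
        if {x for x, _ in C} != set(X) or {y for _, y in C} != set(Y):
            continue  # not a correspondence: must cover both spaces
        dis = max(abs(dX[x][x2] - dY[y][y2])
                  for (x, y) in C for (x2, y2) in C)
        best = min(best, dis)
    return best / 2

two = [[0, 2], [2, 0]]      # two points at distance 2
three = [[0, 4], [4, 0]]    # two points at distance 4
print(gh_distance(two, three))  # 1.0, i.e. |2 - 4| / 2
```

The complexity results in the paper concern the Borel/analytic complexity of this pseudometric on the space of all Polish metric spaces, where no such finite enumeration exists.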
An Atypical Survey of Typical-Case Heuristic Algorithms
Heuristic approaches often do so well that they seem to pretty much always
give the right answer. How close can heuristic algorithms get to always giving
the right answer, without inducing seismic complexity-theoretic consequences?
This article first discusses how a series of results by Berman, Buhrman,
Hartmanis, Homer, Longpr\'{e}, Ogiwara, Sch\"{o}ning, and Watanabe, from the
early 1970s through the early 1990s, explicitly or implicitly limited how well
heuristic algorithms can do on NP-hard problems. In particular, many desirable
levels of heuristic success cannot be obtained unless severe, highly unlikely
complexity class collapses occur. Second, we survey work initiated by Goldreich
and Wigderson, who showed how under plausible assumptions deterministic
heuristics for randomized computation can achieve a very high frequency of
correctness. Finally, we consider formal ways in which theory can help explain
the effectiveness of heuristics that solve NP-hard problems in practice.
Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News