The Complexity of Surjective Homomorphism Problems -- a Survey
We survey known results about the complexity of surjective homomorphism
problems, studied in the context of related problems in the literature such as
list homomorphism, retraction and compaction. In comparison with these
problems, surjective homomorphism problems seem harder to classify. We examine
in particular three concrete problems that have arisen from the literature, two
of which remain of open complexity.
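To make the central notion concrete, here is a minimal brute-force check for the surjective homomorphism problem (a sketch we added for illustration; the function name, the graph encoding as vertex/edge lists, and the toy instances are ours, and real instances are attacked with far more refined techniques than exhaustive search):

```python
from itertools import product

def is_surjective_hom(g_nodes, g_edges, h_nodes, h_edges):
    """Brute-force check: does G admit a vertex-surjective homomorphism to H?
    A map f is a homomorphism if every edge (u, v) of G is sent to an edge
    (f(u), f(v)) of H; it is surjective if every vertex of H is hit."""
    # Treat H as undirected: close its edge set under reversal.
    h_adj = set(h_edges) | {(v, u) for (u, v) in h_edges}
    for image in product(h_nodes, repeat=len(g_nodes)):
        f = dict(zip(g_nodes, image))
        if set(image) == set(h_nodes) and all(
            (f[u], f[v]) in h_adj for (u, v) in g_edges
        ):
            return True
    return False

# The 4-cycle maps surjectively onto a single edge (alternate endpoints),
# while the triangle does not map to an edge at all (it is not 2-colorable).
C4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
K3 = ([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
K2 = (["a", "b"], [("a", "b")])
print(is_surjective_hom(*C4, *K2))  # True
print(is_surjective_hom(*K3, *K2))  # False
```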
Network-Based Vertex Dissolution
We introduce a graph-theoretic vertex dissolution model that applies to a
number of redistribution scenarios such as gerrymandering in political
districting or work balancing in an online situation. The central aspect of our
model is the deletion of certain vertices and the redistribution of their load
to neighboring vertices in a completely balanced way.
We investigate how the underlying graph structure, the knowledge of which
vertices should be deleted, and the relation between old and new vertex loads
influence the computational complexity of the underlying graph problems. Our
results establish a clear borderline between tractable and intractable cases.
Comment: Version accepted at SIAM Journal on Discrete Mathematics.
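The basic dissolution step can be sketched as follows (our simplified reading of the model, for illustration only: we dissolve one vertex and split its load evenly among its neighbors, whereas the paper's model also constrains the relation between old and new loads and considers which vertices may be deleted):

```python
def dissolve(graph, loads, v):
    """Dissolve vertex v: delete it and redistribute its load in equal
    shares to its neighbors. Returns the new load map, or None if the
    load cannot be split in a completely balanced way."""
    neighbors = graph[v]
    if not neighbors or loads[v] % len(neighbors) != 0:
        return None  # no completely balanced redistribution exists
    share = loads[v] // len(neighbors)
    new_loads = {u: w for u, w in loads.items() if u != v}
    for u in neighbors:
        new_loads[u] += share
    return new_loads

# Dissolving "a" (load 10, two neighbors) gives each neighbor 5 extra units.
g = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
print(dissolve(g, {"a": 10, "b": 3, "c": 5}, "a"))  # {'b': 8, 'c': 10}
```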
Fast counting with tensor networks
We introduce tensor network contraction algorithms for counting satisfying
assignments of constraint satisfaction problems (#CSPs). We represent an
arbitrary #CSP formula as a tensor network whose full contraction yields the
number of satisfying assignments of that formula, and use graph-theoretic
methods to determine favorable orders of contraction. We employ our heuristics
for the solution of #P-hard counting boolean satisfiability (#SAT) problems,
namely monotone #1-in-3SAT and #Cubic-Vertex-Cover, and find that they
outperform state-of-the-art solvers by a significant margin.
Comment: v2: added results for monotone #1-in-3SAT; published version.
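The counting-by-contraction idea can be seen on a tiny monotone 1-in-3 instance (a sketch using NumPy's einsum; the instance and the hand-fixed contraction order are ours, whereas the paper determines contraction orders with graph-theoretic heuristics):

```python
import numpy as np

# Clause tensor for EXACTLY-ONE over three Boolean variables:
# T[a, b, c] = 1 iff exactly one of a, b, c equals 1.
T = np.zeros((2, 2, 2))
T[1, 0, 0] = T[0, 1, 0] = T[0, 0, 1] = 1

# Formula: 1-in-3(x, y, z) AND 1-in-3(z, w, u). Contracting the two clause
# tensors over the shared index z and summing out all remaining indices
# yields the number of satisfying assignments of the whole formula.
count = np.einsum("xyz,zwu->", T, T)
print(int(count))  # 5
```

The 5 assignments split by z: with z = 1 both other variables in each clause are forced to 0 (1 assignment), and with z = 0 each clause independently has 2 satisfying completions (4 assignments).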
Evaluation and Enumeration Problems for Regular Path Queries
Regular path queries (RPQs) are a central component of graph databases. We investigate decision and enumeration problems concerning the evaluation of RPQs under several semantics that have recently been considered: arbitrary paths, shortest paths, and simple paths. Whereas arbitrary and shortest paths can be enumerated with polynomial delay, the situation is much more intricate for simple paths. For instance, even the question of whether a given graph contains a simple path of a certain length has cases with highly non-trivial solutions and cases that are long-standing open problems. We study RPQ evaluation for simple paths from a parameterized complexity perspective and define a class of simple transitive expressions that is prominent in practice and for which we can prove a dichotomy for the evaluation problem. We observe that, even though simple-path semantics is intractable for RPQs in general, it is feasible for the vast majority of RPQs that are used in practice. At the heart of our study of simple paths is a result of independent interest: the two disjoint paths problem in directed graphs is W[1]-hard when parameterized by the length of one of the two paths.
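Under the (tractable) arbitrary-path semantics, RPQ evaluation amounts to reachability in the product of the graph with an automaton for the expression. The sketch below illustrates this standard construction on a toy instance (the function, the hand-built NFA for the expression a*b, and the example graph are ours, added for illustration):

```python
from collections import deque

def rpq_reachable(edges, nfa, start_v, target_v, nfa_start, nfa_final):
    """Evaluate an RPQ under arbitrary-path semantics: BFS over the
    product of an edge-labeled graph and an NFA for the expression.
    edges: list of (u, label, v); nfa: dict (state, label) -> set of states."""
    adj = {}
    for u, a, v in edges:
        adj.setdefault(u, []).append((a, v))
    frontier = deque([(start_v, nfa_start)])
    seen = {(start_v, nfa_start)}
    while frontier:
        v, q = frontier.popleft()
        if v == target_v and q in nfa_final:
            return True  # some path spells a word accepted by the NFA
        for a, w in adj.get(v, []):
            for q2 in nfa.get((q, a), ()):
                if (w, q2) not in seen:
                    seen.add((w, q2))
                    frontier.append((w, q2))
    return False

# NFA for the RPQ a*b: state 0 loops on a, reading b moves to accepting state 1.
nfa = {(0, "a"): {0}, (0, "b"): {1}}
edges = [(1, "a", 2), (2, "a", 2), (2, "b", 3)]
print(rpq_reachable(edges, nfa, 1, 3, 0, {1}))  # True
```

Under simple-path semantics the same question forbids repeated graph vertices, which is exactly where the hardness discussed above sets in.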
Why Philosophers Should Care About Computational Complexity
One might think that, once we know something is computable, how efficiently
it can be computed is a practical question with little further philosophical
importance. In this essay, I offer a detailed case that one would be wrong. In
particular, I argue that computational complexity theory---the field that
studies the resources (such as time, space, and randomness) needed to solve
computational problems---leads to new perspectives on the nature of
mathematical knowledge, the strong AI debate, computationalism, the problem of
logical omniscience, Hume's problem of induction, Goodman's grue riddle, the
foundations of quantum mechanics, economic rationality, closed timelike curves,
and several other topics of philosophical interest. I end by discussing aspects
of complexity theory itself that could benefit from philosophical analysis.
Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and
beyond," MIT Press, 2012. Some minor clarifications and corrections; new
references added.
Bicriteria data compression
The advent of massive datasets (and the consequent design of high-performing
distributed storage systems) has reignited the interest of the scientific and
engineering community towards the design of lossless data compressors which
achieve effective compression ratio and very efficient decompression speed.
Lempel-Ziv's LZ77 algorithm is the de facto choice in this scenario because of
its decompression speed and its flexibility in trading decompression speed
versus compressed-space efficiency. Each of the existing implementations offers
a trade-off between space occupancy and decompression speed, so software
engineers have to content themselves with picking the one which comes closest
to the requirements of the application at hand. Starting from these premises,
and for the first time in the literature, we address the problem of optimally
trading the consumption of these two resources by introducing the Bicriteria
LZ77-Parsing problem, which formalizes in a principled way what data
compressors have traditionally
approached by means of heuristics. The goal is to determine an LZ77 parsing
which minimizes the space occupancy in bits of the compressed file, provided
that the decompression time is bounded by a fixed amount (or vice-versa). This
way, the software engineer can set their space (or time) requirements and then
derive the LZ77 parsing which optimizes the decompression speed (or the space
occupancy, respectively). We solve this problem efficiently in O(n log^2 n)
time and optimal linear space within a small, additive approximation, by
proving and deploying some specific structural properties of the weighted graph
derived from the possible LZ77-parsings of the input file. The preliminary set
of experiments shows that our novel proposal dominates all the highly
engineered competitors, hence offering a win-win situation in theory and
practice.
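The underlying optimization can be pictured on the weighted graph of parsings: nodes are file positions, each edge is a candidate LZ77 phrase weighted by its encoded size and its decoding-time estimate, and a parsing is a path from position 0 to position n. The sketch below solves the bicriteria question by a naive pseudo-polynomial DP on a made-up instance (the edge weights and the DP are ours, purely for illustration; the paper's algorithm instead exploits structural properties of this graph to run in O(n log^2 n) time):

```python
def min_space_within_time(n, edges, time_budget):
    """Nodes 0..n are positions in the file; each edge (i, j, space, time)
    is a candidate LZ77 phrase covering positions [i, j). Returns the
    minimum total space of a parsing 0 -> n whose total decompression-time
    estimate stays within time_budget (inf if no such parsing exists)."""
    INF = float("inf")
    # best[i][t] = min space to reach position i using exactly t time units
    best = [[INF] * (time_budget + 1) for _ in range(n + 1)]
    best[0][0] = 0
    for i in range(n + 1):          # edges only go forward, so this order works
        for t in range(time_budget + 1):
            if best[i][t] == INF:
                continue
            for (src, dst, space, time) in edges:
                if src == i and t + time <= time_budget:
                    cand = best[i][t] + space
                    if cand < best[dst][t + time]:
                        best[dst][t + time] = cand
    return min(best[n])

# Long phrases cost more bits here but cover more positions per time unit.
edges = [(0, 2, 16, 1), (0, 1, 9, 1), (1, 2, 9, 1),
         (2, 4, 12, 2), (2, 3, 9, 1), (3, 4, 9, 1)]
print(min_space_within_time(4, edges, 3))  # 28
```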
Relating the Time Complexity of Optimization Problems in Light of the Exponential-Time Hypothesis
Obtaining lower bounds for NP-hard problems has for a long time been an
active area of research. Recent algebraic techniques introduced by Jonsson et
al. (SODA 2013) show that the time complexity of the parameterized SAT(·)
problem correlates with the lattice of strong partial clones. With this
ordering they isolated a relation R such that SAT(R) can be solved at least as
fast as any other NP-hard SAT(Γ) problem. In this paper we extend this method
and show that such languages also exist for the max ones problem
(MaxOnes(Γ)) and the Boolean valued constraint satisfaction problem over
finite-valued constraint languages (VCSP(Δ)). With the help of these
languages we relate MaxOnes and VCSP to the exponential time hypothesis in
several different ways.
Comment: This is an extended version of "Relating the Time Complexity of
Optimization Problems in Light of the Exponential-Time Hypothesis", appearing
in Proceedings of the 39th International Symposium on Mathematical Foundations
of Computer Science (MFCS 2014), Budapest, August 25-29, 2014.
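To pin down the MaxOnes problem itself, here is a brute-force reference solver (a toy sketch we added to fix the definition; the encoding of relations as sets of allowed tuples is ours, and this has nothing to do with the algebraic lattice techniques the paper builds on):

```python
from itertools import product

def max_ones(n_vars, constraints):
    """Brute-force MaxOnes: over all Boolean assignments satisfying every
    constraint (a relation given as the set of allowed tuples over a scope
    of variable indices), maximize the number of variables set to 1.
    Returns -1 if the instance is unsatisfiable."""
    best = -1
    for assign in product((0, 1), repeat=n_vars):
        if all(tuple(assign[i] for i in scope) in rel
               for scope, rel in constraints):
            best = max(best, sum(assign))
    return best

# Constraints over x0..x2: x0 != x1, and (x1 OR x2).
neq = {(0, 1), (1, 0)}
or2 = {(0, 1), (1, 0), (1, 1)}
print(max_ones(3, [((0, 1), neq), ((1, 2), or2)]))  # 2
```

The point of the paper is not solving such instances but ordering the languages Γ so that a single MaxOnes(Γ) is provably among the easiest NP-hard cases.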