72,931 research outputs found
Computations with polynomial evaluation oracle: ruling out superlinear SETH-based lower bounds
The field of fine-grained complexity aims at proving conditional lower bounds
on the time complexity of computational problems. One of the most popular
assumptions, the Strong Exponential Time Hypothesis (SETH), implies that SAT
cannot be solved in time $O(2^{(1-\varepsilon)n})$ for any $\varepsilon > 0$.
In recent years, it has been proved that known algorithms for many problems
are optimal under SETH. Despite the wide applicability of SETH, for many
problems there are no known SETH-based lower bounds, so the quest for new
reductions continues.
Two barriers for proving SETH-based lower bounds are known. Carmosino et al.
(ITCS 2016) introduced the Nondeterministic Strong Exponential Time Hypothesis
(NSETH), stating that TAUT cannot be solved in time $O(2^{(1-\varepsilon)n})$
even if one allows nondeterminism. They used this hypothesis to show that some
natural fine-grained reductions would be difficult to obtain: proving that,
say, 3-SUM requires time $n^{1.5+\gamma}$ under SETH breaks NSETH, and this,
in turn, implies strong circuit lower bounds. Recently, Belova et al. (SODA
2023) introduced so-called polynomial formulations to show that for many
NP-hard problems, proving any explicit exponential lower bound under SETH also
implies strong circuit lower bounds.
We prove that for a range of problems from P, including $k$-SUM and triangle
detection, proving superlinear lower bounds under SETH is challenging, as it
implies new circuit lower bounds. To this end, we show that these problems can
be solved in nearly linear time with oracle calls to evaluating a polynomial
of constant degree. Then, we introduce a strengthening of SETH stating that
solving SAT in time $O(2^{(1-\varepsilon)n})$ is difficult even if one has
constant-degree polynomial evaluation oracle calls. This hypothesis is
stronger and less believable than SETH, but refuting it is still challenging:
we show that this implies circuit lower bounds.
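To make one of the problems concrete: 3-SUM asks whether some three input numbers sum to zero, and the classic quadratic-time algorithm below is the baseline that a superlinear SETH-based lower bound would complement. This is a minimal illustrative sketch of the standard hashing approach, not code from the paper; the function name is our own.

```python
def has_three_sum(nums):
    """3-SUM: is there a triple of entries summing to zero?

    Classic O(n^2) baseline: fix the first element, then scan for a
    pair completing the sum using a hash set of values seen so far.
    """
    n = len(nums)
    for i in range(n):
        seen = set()
        for j in range(i + 1, n):
            # We need a previously seen value v with nums[i] + v + nums[j] == 0.
            if -nums[i] - nums[j] in seen:
                return True
            seen.add(nums[j])
    return False
```

For example, `has_three_sum([3, -1, -2, 7])` is true via the triple (3, -1, -2), while `has_three_sum([1, 2, 3])` is false.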
Tight lower bounds for the Workflow Satisfiability Problem based on the Strong Exponential Time Hypothesis
The Workflow Satisfiability Problem (WSP) asks whether there exists an
assignment of authorized users to the steps in a workflow specification,
subject to certain constraints on the assignment. The problem is NP-hard even
when restricted to just not-equals ($\neq$) constraints. Since the number $k$
of steps is relatively small in practice, Wang and Li (2010) introduced a
parametrisation of WSP by $k$. Wang and Li (2010) showed that, in general,
the WSP is W[1]-hard, i.e., it is unlikely that there exists a
fixed-parameter tractable (FPT) algorithm for solving the WSP. Crampton et
al. (2013) and Cohen et al. (2014) designed FPT algorithms of running time
$O^*(2^{k})$ and $O^*(2^{k \log k})$ for the WSP with so-called regular and
user-independent constraints, respectively. In this note, we show that there
are no algorithms of running time $O^*(2^{ck})$ and $O^*(2^{ck \log k})$ for
the two restrictions of WSP, respectively, with any $c < 1$, unless the
Strong Exponential Time Hypothesis fails.
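For intuition, a brute-force WSP solver enumerates all $|U|^k$ user-to-step assignments; the FPT algorithms above improve this so that the exponential dependence is on $k$ alone. Below is a minimal sketch with not-equals constraints; all names and the constraint encoding are our own illustrative choices, not from the paper.

```python
from itertools import product

def wsp_satisfiable(steps, users, authorized, constraints):
    """Decide WSP by brute force over all |users|**len(steps) plans.

    authorized maps each step to the set of users allowed to perform it;
    each constraint is a predicate over a complete step -> user plan.
    (Illustrative encoding of our own choosing.)
    """
    for choice in product(users, repeat=len(steps)):
        plan = dict(zip(steps, choice))
        if all(plan[s] in authorized[s] for s in steps) and \
           all(ok(plan) for ok in constraints):
            return True
    return False

def neq(s1, s2):
    """A not-equals constraint: steps s1 and s2 must get different users."""
    return lambda plan: plan[s1] != plan[s2]
```

For instance, with steps `['s1', 's2']`, users `['u1', 'u2']`, authorization `{'s1': {'u1', 'u2'}, 's2': {'u1'}}`, and the single constraint `neq('s1', 's2')`, the plan s1 → u2, s2 → u1 witnesses satisfiability.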
Why is it hard to beat $O(n^2)$ for Longest Common Weakly Increasing Subsequence?
The Longest Common Weakly Increasing Subsequence problem (LCWIS) is a variant
of the classic Longest Common Subsequence problem (LCS). Both problems can be
solved with simple quadratic time algorithms. A recent line of research led to
a number of matching conditional lower bounds for LCS and other related
problems. However, the status of LCWIS remained open.
In this paper we show that LCWIS cannot be solved in strongly subquadratic
time unless the Strong Exponential Time Hypothesis (SETH) is false.
The ideas which we developed can also be used to obtain a lower bound based
on a safer assumption of NC-SETH, i.e., a version of SETH which talks about
NC circuits instead of less expressive CNF formulas.
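The simple quadratic-time algorithm whose optimality under SETH the paper establishes can be sketched as an LCIS-style dynamic program, adapted so that consecutive values may be equal. This is a hedged sketch of the standard technique; the variable names and the exact organization of the recurrence are ours.

```python
def lcwis(a, b):
    """Longest Common Weakly Increasing Subsequence in O(len(a) * len(b)) time.

    dp[j] = length of the best common weakly increasing subsequence of the
    processed prefix of a and b[:j+1] that ends exactly at b[j].
    """
    dp = [0] * len(b)
    for x in a:
        best = 0  # max dp[j'] (from previous rows) with j' < j and b[j'] <= x
        for j, y in enumerate(b):
            old = dp[j]  # value before this row's update, so x is matched once
            if x == y:
                dp[j] = max(dp[j], best + 1)
            if y <= x:
                best = max(best, old)
    return max(dp, default=0)
```

For example, `lcwis([1, 2, 2, 3], [2, 2, 1, 3])` returns 3, realized by the common weakly increasing subsequence [2, 2, 3]; note that the repeated 2 is allowed, which is exactly what distinguishes LCWIS from the strictly increasing variant.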
Modern Lower Bound Techniques in Database Theory and Constraint Satisfaction
Conditional lower bounds based on $P \neq NP$, the Exponential-Time Hypothesis (ETH), or similar complexity assumptions can provide very useful information about what type of algorithms are likely to be possible. Ideally, such lower bounds would be able to demonstrate that the best known algorithms are essentially optimal and cannot be improved further. In this tutorial, we overview different types of lower bounds and see how they can be applied to problems in database theory and constraint satisfaction.
Refining complexity analyses in planning by exploiting the exponential time hypothesis
The use of computational complexity in planning, and in AI in general, has always been a disputed topic. A major problem with ordinary worst-case analyses is that they do not provide any quantitative information: they do not tell us much about the running time of concrete algorithms, nor do they tell us much about the running time of optimal algorithms. We address problems like this by presenting results based on the exponential time hypothesis (ETH), which is a widely accepted hypothesis concerning the time complexity of 3-SAT. By using this approach, we provide, for instance, almost matching upper and lower bounds on the time complexity of propositional planning. Funding Agencies: National Graduate School in Computer Science (CUGS), Sweden; Swedish Research Council (VR) [621-2014-4086]
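For reference, the hypothesis underlying these bounds can be stated formally as follows (this is the standard formulation of ETH, not a quotation from the paper):

```latex
\textbf{Exponential Time Hypothesis (ETH).}
There exists a constant $\delta > 0$ such that no algorithm decides
$3\text{-}\mathrm{SAT}$ on formulas with $n$ variables in time $O(2^{\delta n})$.
% Equivalently: 3-SAT cannot be solved in time $2^{o(n)}$.
```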
Lower Bounds for QBFs of Bounded Treewidth
The problem of deciding the validity (QSAT) of quantified Boolean formulas
(QBF) is a vivid research area in both theory and practice. In the field of
parameterized algorithmics, the well-studied graph measure treewidth turned out
to be a successful parameter. A well-known result by Chen in parameterized
complexity is that QSAT when parameterized by the treewidth of the primal graph
of the input formula together with the quantifier depth of the formula is
fixed-parameter tractable. More precisely, the runtime of such an algorithm is
polynomial in the formula size and exponential in the treewidth, where the
exponential function in the treewidth is a tower, whose height is the
quantifier depth. A natural question is whether one can significantly improve
these results and decrease the tower while assuming the Exponential Time
Hypothesis (ETH). In recent years, there has been growing interest in
establishing lower bounds under ETH, with mostly problem-specific lower
bounds shown up to the third level of the polynomial hierarchy. Still, an
important question is to settle this as generally as possible and to cover
the whole polynomial hierarchy. In this work, we show lower bounds based on the ETH
for arbitrary QBFs parameterized by treewidth (and quantifier depth). More
formally, we establish lower bounds for QSAT and treewidth, namely, that under
ETH there cannot be an algorithm that solves QSAT of quantifier depth i in
runtime significantly better than i-fold exponential in the treewidth and
polynomial in the input size. In doing so, we provide a versatile reduction
technique to compress treewidth that encodes the essence of dynamic programming
on arbitrary tree decompositions. Further, we describe a general methodology
for a more fine-grained analysis of problems parameterized by treewidth that
are at higher levels of the polynomial hierarchy.
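The tower function in these bounds can be made explicit: the claimed lower bound says that QSAT of quantifier depth $i$ admits no algorithm running in time tower$(i, o(\mathrm{tw}))\cdot\mathrm{poly}(|\varphi|)$, matching Chen's tower$(i, O(\mathrm{tw}))\cdot\mathrm{poly}(|\varphi|)$ upper bound. A small helper of our own, matching the standard definition of an $i$-fold exponential:

```python
def tower(height, k):
    """i-fold exponential: tower(1, k) = 2**k, tower(i+1, k) = 2**tower(i, k)."""
    value = k
    for _ in range(height):
        value = 2 ** value
    return value
```

So tower(1, 3) = 8 and tower(2, 3) = 256; already at quantifier depth 3 the dependence on treewidth is triply exponential.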