
    Super Strong ETH Is True for PPSZ with Small Resolution Width

    We construct k-CNFs with m variables on which the strong version of the PPSZ k-SAT algorithm, which uses resolution of width bounded by O(√(log log m)), has success probability at most 2^{-(1-(1+ε)2/k)m} for every ε > 0. Previously such a bound was known only for the weak PPSZ algorithm, which exhaustively searches through small subformulas of the CNF to see if any of them forces the value of a given variable; for strong PPSZ the best previously known upper bound was 2^{-(1-O(log(k)/k))m} (Pudlák et al., ICALP 2017).
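    The inference rule used by weak PPSZ is simple enough to sketch directly. The following Python fragment is a minimal illustration under my own encoding assumptions (a CNF is a list of clauses, each clause a tuple of signed integers, literal +x meaning "variable x is true"); the function name and the brute-force enumeration are mine, not the paper's. It checks whether some subformula of at most D clauses forces the value of variable v:

        # Illustrative sketch, not the paper's code: brute-force check of the
        # weak-PPSZ rule "v is implied by some subformula of at most D clauses".
        from itertools import combinations, product

        def implied_value(cnf, v, D):
            """Return 0 or 1 if some <=D-clause subformula forces variable v,
            else None. Clauses are tuples of signed ints (+x = x true)."""
            for size in range(1, D + 1):
                for sub in combinations(cnf, size):
                    # All variables mentioned by the subformula, plus v itself.
                    vars_ = sorted({abs(l) for cl in sub for l in cl} | {v})
                    seen = set()
                    for bits in product((0, 1), repeat=len(vars_)):
                        a = dict(zip(vars_, bits))
                        if all(any((l > 0) == bool(a[abs(l)]) for l in cl) for cl in sub):
                            seen.add(a[v])
                    if len(seen) == 1:  # every satisfying assignment agrees on v
                        return seen.pop()
            return None

    The strong variant discussed in the abstract derives forced values via resolution of bounded width instead of this exhaustive search.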

    The (Coarse) Fine-Grained Structure of NP-Hard SAT and CSP Problems

    We study the fine-grained complexity of NP-complete satisfiability (SAT) problems and constraint satisfaction problems (CSPs) in the context of the strong exponential-time hypothesis (SETH), showing non-trivial lower and upper bounds on the running time. Here, by a non-trivial lower bound for a problem SAT(Gamma) (respectively CSP(Gamma)) with constraint language Gamma, we mean a value c_0 > 1 such that the problem cannot be solved in time O(c^n) for any c < c_0 unless SETH is false, while a non-trivial upper bound is simply an algorithm for the problem running in time O(c^n) for some c < 2. Such lower bounds have proven extremely elusive, and except for cases where c_0 = 2, effectively no such previous bound was known. We achieve this by employing an algebraic framework, studying constraint languages Gamma in terms of their algebraic properties. We uncover a powerful algebraic framework in which a mild restriction on the allowed constraints offers a concise algebraic characterization. On the relational side we restrict ourselves to Boolean languages closed under variable negation and partial assignment, called sign-symmetric languages. On the algebraic side this results in a description via partial operations arising from systems of identities, with a close connection to operations resulting in tractable CSPs, such as near-unanimity operations and edge operations. Using this connection we construct improved algorithms for several interesting classes of sign-symmetric languages and prove explicit lower bounds under SETH. Thus, we find the first example of an NP-complete SAT problem with a non-trivial algorithm which also admits a non-trivial lower bound under SETH. This suggests a dichotomy conjecture with a close connection to the CSP dichotomy theorem: an NP-complete SAT problem admits an improved algorithm if and only if it admits a non-trivial partial invariant of the above form.

    Funding: Swedish Research Council (VR) [2019-03690]
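    The two closure conditions that define sign-symmetric languages are concrete enough to state in code. The sketch below is my own illustration, not part of the paper: all names are assumptions, and it checks whether a finite set of Boolean relations, each given as a frozenset of 0/1 tuples, is literally closed under the two operations.

        # Illustrative sketch of the sign-symmetry condition from the abstract:
        # closure under variable negation and partial assignment.
        def negate_var(R, i):
            """Negate the i-th argument of relation R."""
            return frozenset(t[:i] + (1 - t[i],) + t[i + 1:] for t in R)

        def assign_var(R, i, b):
            """Fix the i-th argument of R to the constant b, then project it away."""
            return frozenset(t[:i] + t[i + 1:] for t in R if t[i] == b)

        def is_sign_symmetric(language):
            """Check that a finite set of relations contains every relation
            obtainable from its members by the two operations above."""
            language = set(language)
            for R in list(language):
                arity = len(next(iter(R), ()))
                for i in range(arity):
                    derived = [negate_var(R, i), assign_var(R, i, 0), assign_var(R, i, 1)]
                    if any(S and S not in language for S in derived):
                        return False
            return True

    For example, the language containing only the binary XOR relation {(0,1),(1,0)} fails this check: negating one argument yields the equality relation {(0,0),(1,1)}, which is not in the set.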

    PPSZ is better than you think

    PPSZ, for a long time the fastest known algorithm for k-SAT, works by going through the variables of the input formula in random order; each variable is then set randomly to 0 or 1, unless the correct value can be inferred by an efficiently implementable rule (like small-width resolution, or being implied by a small set of clauses). We show that PPSZ performs exponentially better than previously known, for all k ≥ 3. For Unique-3-SAT we bound its running time by O(1.306973^n), which is somewhat better than the algorithm of Hansen, Kaplan, Zamir, and Zwick, which runs in time O(1.306995^n). Before that, the best known upper bound for Unique-3-SAT was O(1.3070319^n). All improvements are achieved without changing the original PPSZ. The core idea is to pretend that PPSZ does not process the variables in uniformly random order, but according to a carefully designed distribution. We write "pretend" since this can be done without any actual change to the algorithm.
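    The loop described in the first sentence is short enough to write out. The sketch below is a plain one-trial PPSZ in Python under my own interface assumptions (it is not the authors' implementation); the inference rule is left pluggable, e.g. the small-subformula check sketched earlier.

        # Illustrative one-trial PPSZ, using the clause encoding from the
        # earlier sketch (clauses as tuples of signed ints, +x = x true).
        import random

        def simplify(cnf, assignment):
            """Drop satisfied clauses and falsified literals under a partial assignment."""
            out = []
            for cl in cnf:
                if any(abs(l) in assignment
                       and (l > 0) == bool(assignment[abs(l)]) for l in cl):
                    continue  # clause already satisfied
                out.append(tuple(l for l in cl if abs(l) not in assignment))
            return out

        def ppsz_trial(cnf, n, infer):
            """One PPSZ trial over variables 1..n; infer(f, v) returns 0/1 if it
            can soundly deduce v from the simplified formula f, else None."""
            assignment = {}
            for v in random.sample(range(1, n + 1), n):  # uniformly random order
                val = infer(simplify(cnf, assignment), v)
                if val is None:
                    val = random.randint(0, 1)  # guess when the rule is silent
                assignment[v] = val
            # Success iff no clause is left unsatisfied.
            return assignment if not simplify(cnf, assignment) else None

    A full run would repeat ppsz_trial exponentially many times, e.g. with infer = lambda f, v: implied_value(f, v, 3), returning the first satisfying assignment found. Consistent with the abstract, the improved analysis changes nothing in this loop; only the per-trial success probability is bounded more carefully.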

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    Combinatorics

    Combinatorics is a fundamental mathematical discipline which focuses on the study of discrete objects and their properties. The current workshop brought together researchers from diverse fields such as Extremal and Probabilistic Combinatorics, Discrete Geometry, Graph Theory, Combinatorial Optimization, and Algebraic Combinatorics for a fruitful interaction. New results, methods, developments, and future challenges were discussed. This is a report on the meeting containing abstracts of the presentations and a summary of the problem session.

    Classical Computation in the Quantum World

    Quantum computation is by far the most powerful computational model allowed by the laws of physics. By carefully manipulating microscopic systems governed by quantum mechanics, one can efficiently solve computational problems that may be classically intractable; at the same time, such speed-ups are rarely possible without the help of classical computation, since most quantum algorithms heavily rely on subroutines that are purely classical. A better understanding of the relationship between classical and quantum computation is indispensable, in particular in an era where the first quantum device exceeding classical computational power is within reach.

    In the first part of the thesis, we study some differences between classical and quantum computation. We first show that quantum cryptographic hashing is maximally resilient against classical leakage, a property beyond reach for any classical hash function. Next, we consider the limitations of strong (amplitude-wise) simulation of quantum computation. We prove an unconditional and explicit complexity lower bound for a category of simulations called monotone strong simulation, and further prove conditional complexity lower bounds for general strong simulation techniques. Both results indicate that strong simulation is fundamentally unscalable.

    In the second part of the thesis, we propose classical algorithms that facilitate quantum computing. We propose a new classical algorithm for the synthesis of a quantum algorithm paradigm called quantum signal processing. Empirically, our algorithm demonstrates numerical stability and an acceleration of more than one order of magnitude compared to state-of-the-art algorithms. Finally, we propose a randomized algorithm for transversally switching between arbitrary stabilizer quantum error-correcting codes. It preserves the code distance and thus might prove useful for designing fault-tolerant code-switching schemes.

    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/149943/1/cupjinh_1.pd