    Inapproximability of Combinatorial Optimization Problems

    We survey results on the hardness of approximating combinatorial optimization problems.

    The 2CNF Boolean Formula Satisfiability Problem and the Linear Space Hypothesis

    We aim at investigating the solvability/insolvability of nondeterministic logarithmic-space (NL) decision, search, and optimization problems parameterized by size parameters, using simultaneously polynomial time and sub-linear space on multi-tape deterministic Turing machines. We are particularly focused on a special NL-complete problem, 2SAT---the 2CNF Boolean formula satisfiability problem---parameterized by the number of Boolean variables. It is shown that 2SAT with $n$ variables and $m$ clauses can be solved simultaneously in polynomial time and $(n/2^{c\sqrt{\log n}})\,\mathrm{polylog}(m+n)$ space for an absolute constant $c>0$. This fact inspires us to propose a new, practical working hypothesis, called the linear space hypothesis (LSH), which states that $\mathrm{2SAT}_3$---a restricted variant of 2SAT in which each variable of a given 2CNF formula appears at most 3 times in the form of literals---cannot be solved simultaneously in polynomial time using strictly "sub-linear" space (i.e., $m(x)^{\varepsilon}\,\mathrm{polylog}(|x|)$ space for a certain constant $\varepsilon\in(0,1)$) on all instances $x$. An immediate consequence of this working hypothesis is $\mathrm{L}\neq\mathrm{NL}$. Moreover, we use our hypothesis as a plausible basis to derive the insolvability of various NL search problems as well as the nonapproximability of NL optimization problems. For our investigation, since standard logarithmic-space reductions may no longer preserve polynomial-time sub-linear-space complexity, we need to introduce a new, practical notion of "short reduction." It turns out that, parameterized with the number of variables, $\overline{\mathrm{2SAT}_3}$ is complete for a syntactically restricted version of NL, called Syntactic $\mathrm{NL}_{\omega}$, under such short reductions. This fact supports the legitimacy of our working hypothesis.
    Comment: (A4, 10pt, 25 pages) This article extends and corrects its preliminary report in the Proc. of the 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017), August 21-25, 2017, Aalborg, Denmark, Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik 2017, vol. 83, pp. 62:1-62:14, 2017.
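    For orientation, the sketch below is the classical linear-time 2SAT decision procedure based on the implication graph and strongly connected components (Kosaraju's two-pass DFS). It is not the paper's algorithm: it uses $\Theta(n+m)$ work space, i.e. exactly the kind of linear space usage that the linear space hypothesis concerns, and it only illustrates the problem being parameterized. The input encoding and all function names are illustrative choices, not taken from the paper.

```python
# A minimal sketch of the textbook 2SAT decision procedure via the implication
# graph and SCCs (Kosaraju's algorithm). Runs in linear time but uses
# Theta(n + m) space; it does NOT achieve the paper's sub-linear-space bound.

def solve_2sat(n, clauses):
    """n Boolean variables x_1..x_n; clauses is a list of pairs (a, b),
    where the integer i stands for x_i and -i stands for (not x_i).
    Returns a satisfying assignment as a dict, or None if unsatisfiable."""
    def node(lit):                      # map a literal to a vertex in [0, 2n)
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    graph = [[] for _ in range(2 * n)]
    rgraph = [[] for _ in range(2 * n)]
    for a, b in clauses:                # (a or b) gives (!a -> b) and (!b -> a)
        graph[node(-a)].append(node(b))
        graph[node(-b)].append(node(a))
        rgraph[node(b)].append(node(-a))
        rgraph[node(a)].append(node(-b))

    order, seen = [], [False] * (2 * n)

    def dfs1(u):                        # iterative DFS recording finish order
        stack = [(u, iter(graph[u]))]
        seen[u] = True
        while stack:
            v, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(v)
                stack.pop()
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(graph[nxt])))

    for u in range(2 * n):
        if not seen[u]:
            dfs1(u)

    comp, label = [-1] * (2 * n), 0
    for u in reversed(order):           # second pass on the reversed graph
        if comp[u] != -1:
            continue
        comp[u], stack = label, [u]
        while stack:
            v = stack.pop()
            for w in rgraph[v]:
                if comp[w] == -1:
                    comp[w] = label
                    stack.append(w)
        label += 1

    assignment = {}
    for i in range(n):
        if comp[2 * i] == comp[2 * i + 1]:
            return None                 # x_i and (not x_i) in one SCC: unsatisfiable
        # x_i is true iff its SCC comes later in the topological order
        assignment[i + 1] = comp[2 * i] > comp[2 * i + 1]
    return assignment


if __name__ == "__main__":
    # (x1 or x2) and (not x1 or x2) and (not x2 or x3)
    print(solve_2sat(3, [(1, 2), (-1, 2), (-2, 3)]))
```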

    The Sigma-Semantics: A Comprehensive Semantics for Functional Programs

    A comprehensive semantics for functional programs is presented, which generalizes the well-known call-by-value and call-by-name semantics. By permitting a separate choice between call-by-value and call-by-name for every argument position of every function, and parameterizing the semantics by this choice, we abstract from the parameter-passing mechanism. Thus common and distinguishing features of all instances of the sigma-semantics, especially the call-by-value and call-by-name semantics, are highlighted. Furthermore, a property can be validated for all instances of the sigma-semantics by a single proof. This is employed for proving the equivalence of the given denotational (fixed-point based) and two operational (reduction based) definitions of the sigma-semantics. We present and apply means for very simple proofs of equivalence with the denotational sigma-semantics for a large class of reduction-based sigma-semantics. Our basis is simple first-order constructor-based functional programs with patterns.
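    As a rough illustration (not code from the paper), the sketch below mimics the central idea of parameterizing evaluation by a per-argument-position choice sigma between call-by-value and call-by-name, using Python thunks. The helper name with_sigma and the encoding of sigma as a tuple of flags are assumptions made purely for this example.

```python
# Illustrative sketch: sigma is one flag per parameter position,
# True = call-by-value (force the argument before the call),
# False = call-by-name (pass a thunk, force only when the body uses it).

def with_sigma(sigma):
    """Decorator: the wrapped function receives zero-argument thunks and
    calls them to obtain values; sigma controls when each one is forced."""
    def wrap(f):
        def call(*thunks):
            args = []
            for eager, thunk in zip(sigma, thunks):
                if eager:                      # call-by-value position: evaluate now
                    value = thunk()
                    args.append(lambda v=value: v)
                else:                          # call-by-name position: delay evaluation
                    args.append(thunk)
            return f(*args)
        return call
    return wrap

# first argument by value, second by name
@with_sigma((True, False))
def const(x, y):
    return x()                                 # y is never forced

def undefined():
    raise RuntimeError("this argument would fail under call-by-value")

if __name__ == "__main__":
    # Under this sigma-choice the call terminates although the second argument
    # is undefined; with sigma = (True, True) the same call would raise.
    print(const(lambda: 42, undefined))        # -> 42
```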

    A New View on Worst-Case to Average-Case Reductions for NP Problems

    We study the result by Bogdanov and Trevisan (FOCS, 2003), who show that, under reasonable assumptions, there is no non-adaptive worst-case to average-case reduction that bases the average-case hardness of an NP problem on the worst-case complexity of an NP-complete problem. We replace the hiding and the heavy-samples protocols in [BT03] by employing the histogram verification protocol of Haitner, Mahmoody and Xiao (CCC, 2010), which proves to be very useful in this context. Once the histogram is verified, our hiding protocol is directly public-coin, whereas the intuition behind the original protocol inherently relies on private coins.