
    Many-one reductions and the category of multivalued functions

    Multi-valued functions are common in computable analysis (built upon the Type 2 Theory of Effectivity), and have made an appearance in complexity theory under the moniker of search problems, leading to complexity classes such as PPAD and PLS being studied. However, a systematic investigation of the resulting degree structures has so far only been initiated in the former situation (the Weihrauch degrees). A more general understanding is possible if the category-theoretic properties of multi-valued functions are taken into account. In the present paper, the category-theoretic framework is established, and it is demonstrated that many-one degrees of multi-valued functions form a distributive lattice under very general conditions, regardless of the actual reducibility notions used (e.g. Cook, Karp, Weihrauch). Beyond this, an abundance of open questions arises. Some classic results for reductions between functions carry over to multi-valued functions, but others do not. The basic theme here again depends on category-theoretic differences between functions and multi-valued functions.
    Comment: An earlier version was titled "Many-one reductions between search problems". In Mathematical Structures in Computer Science, 201
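    For orientation, the reduction notion at issue can be stated in one line. The following is a minimal sketch in a common style for multi-valued functions (search problems); the maps $H$ and $K$ are illustrative notation, and the class the two maps are drawn from is what varies between Cook-, Karp-, and Weihrauch-style reducibility:

    % A multi-valued function f : X \rightrightarrows Y assigns to each
    % instance x a set f(x) of acceptable answers (a search problem).
    % f many-one reduces to g via an instance translation H and an
    % answer translation K:
    \[
      f \le_m g \iff \exists H, K\ \forall x \in \mathrm{dom}(f)\
      \forall z \in g(H(x)) : K(x, z) \in f(x).
    \]
    % Taking H, K polynomial-time computable gives complexity-theoretic
    % many-one reductions; taking them computable on representations
    % gives Weihrauch-style reductions.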

    Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater Than One

    We show the first dimension-preserving search-to-decision reductions for approximate SVP and CVP. In particular, for any $\gamma \leq 1 + O(\log n/n)$, we obtain an efficient dimension-preserving reduction from $\gamma^{O(n/\log n)}$-SVP to $\gamma$-GapSVP and an efficient dimension-preserving reduction from $\gamma^{O(n)}$-CVP to $\gamma$-GapCVP. These results generalize the known equivalences of the search and decision versions of these problems in the exact case when $\gamma = 1$. For SVP, we actually obtain something slightly stronger than a search-to-decision reduction: we reduce $\gamma^{O(n/\log n)}$-SVP to $\gamma$-unique SVP, a potentially easier problem than $\gamma$-GapSVP.
    Comment: Updated to acknowledge additional prior work
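    The search-to-decision theme is easiest to see away from lattices, in the textbook self-reducibility argument for SAT. The sketch below is that classic reduction, not the paper's dimension-preserving lattice construction; sat_decide is an assumed black-box decision oracle (it must report the empty formula as satisfiable):

    # Textbook search-to-decision reduction via SAT self-reducibility
    # (illustrative only; the paper reduces lattice problems, not SAT).
    # sat_decide: assumed oracle taking a CNF (a list of clauses, each a
    # list of nonzero ints) and returning True iff it is satisfiable.

    def substitute(cnf, lit):
        """Fix literal lit to True: drop satisfied clauses, shrink the rest."""
        out = []
        for clause in cnf:
            if lit in clause:
                continue                      # clause already satisfied
            reduced = [l for l in clause if l != -lit]
            if not reduced:
                return None                   # empty clause: contradiction
            out.append(reduced)
        return out

    def sat_search(cnf, num_vars, sat_decide):
        """Recover a satisfying assignment using only the decision oracle."""
        if not sat_decide(cnf):
            return None
        assignment = {}
        for v in range(1, num_vars + 1):
            fixed = substitute(cnf, v)        # try setting variable v True
            if fixed is not None and sat_decide(fixed):
                assignment[v], cnf = True, fixed
            else:                             # safe: cnf is still satisfiable
                assignment[v], cnf = False, substitute(cnf, -v)
        return assignment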

    The 2CNF Boolean Formula Satisfiability Problem and the Linear Space Hypothesis

    We aim at investigating the solvability/insolvability of nondeterministic logarithmic-space (NL) decision, search, and optimization problems parameterized by size parameters, using simultaneously polynomial time and sub-linear space on multi-tape deterministic Turing machines. We are particularly focused on a special NL-complete problem, 2SAT, the 2CNF Boolean formula satisfiability problem, parameterized by the number of Boolean variables. It is shown that 2SAT with $n$ variables and $m$ clauses can be solved simultaneously in polynomial time and $(n/2^{c\sqrt{\log n}})\,\mathrm{polylog}(m+n)$ space for an absolute constant $c>0$. This fact inspires us to propose a new, practical working hypothesis, called the linear space hypothesis (LSH), which states that 2SAT$_3$, a restricted variant of 2SAT in which each variable of a given 2CNF formula appears at most 3 times in the form of literals, cannot be solved simultaneously in polynomial time using strictly "sub-linear" (i.e., $m(x)^{\varepsilon}\,\mathrm{polylog}(|x|)$ for a certain constant $\varepsilon\in(0,1)$) space on all instances $x$. An immediate consequence of this working hypothesis is $\mathrm{L}\neq\mathrm{NL}$. Moreover, we use our hypothesis as a plausible basis to lead to the insolvability of various NL search problems as well as the nonapproximability of NL optimization problems. For our investigation, since standard logarithmic-space reductions may no longer preserve polynomial-time sub-linear-space complexity, we need to introduce a new, practical notion of "short reduction." It turns out that, parameterized with the number of variables, $\overline{\mathrm{2SAT}_3}$ is complete for a syntactically restricted version of NL, called Syntactic NL$_{\omega}$, under such short reductions. This fact supports the legitimacy of our working hypothesis.
    Comment: (A4, 10pt, 25 pages) This article extends and corrects its preliminary report in the Proc. of the 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017), August 21-25, 2017, Aalborg, Denmark, Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik 2017, vol. 83, pp. 62:1-62:14, 201
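    For contrast with the space-bounded regime the hypothesis concerns, here is the standard Aspvall-Plass-Tarjan algorithm for 2SAT: it runs in linear time but stores the whole implication graph, i.e., it uses linear space, which is exactly the resource LSH conjectures cannot be substantially improved. A sketch (variables are 1..n, a literal is +v or -v, a clause is a pair of literals):

    # Standard implication-graph algorithm for 2SAT; linear time, but
    # also linear SPACE, so it illustrates the problem rather than the
    # paper's sub-linear-space procedure.

    def two_sat(n, clauses):
        # Node encoding: literal +v -> 2v, literal -v -> 2v + 1.
        node = lambda lit: 2 * abs(lit) + (lit < 0)
        neg = lambda x: x ^ 1
        adj = [[] for _ in range(2 * n + 2)]
        radj = [[] for _ in range(2 * n + 2)]
        for a, b in clauses:              # (a or b) == (!a -> b) and (!b -> a)
            for u, v in ((neg(node(a)), node(b)), (neg(node(b)), node(a))):
                adj[u].append(v)
                radj[v].append(u)

        order, seen = [], [False] * len(adj)
        for s in range(2, len(adj)):      # pass 1: record DFS finish order
            if seen[s]:
                continue
            seen[s] = True
            stack = [(s, iter(adj[s]))]
            while stack:
                u, it = stack[-1]
                nxt = next(it, None)
                if nxt is None:
                    order.append(u)
                    stack.pop()
                elif not seen[nxt]:
                    seen[nxt] = True
                    stack.append((nxt, iter(adj[nxt])))

        comp = [-1] * len(adj)            # pass 2: SCCs on the reversed graph
        for c, s in enumerate(reversed(order)):
            if comp[s] != -1:
                continue
            comp[s] = c
            stack = [s]
            while stack:
                u = stack.pop()
                for v in radj[u]:
                    if comp[v] == -1:
                        comp[v] = c
                        stack.append(v)

        # Satisfiable iff no variable shares an SCC with its own negation.
        return all(comp[2 * v] != comp[2 * v + 1] for v in range(1, n + 1))

    # two_sat(2, [(1, 2), (-1, 2), (1, -2)]) -> True
    # two_sat(1, [(1, 1), (-1, -1)])         -> False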

    Scalable Kernelization for Maximum Independent Sets

    The most efficient algorithms for finding maximum independent sets in both theory and practice use reduction rules to obtain a much smaller problem instance called a kernel. The kernel can then be solved quickly using exact or heuristic algorithms, or by repeatedly kernelizing recursively in the branch-and-reduce paradigm. It is of critical importance for these algorithms that kernelization is fast and returns a small kernel. Current algorithms are either slow but produce a small kernel, or fast and give a large kernel. We attempt to accomplish both of these goals simultaneously by giving an efficient parallel kernelization algorithm based on graph partitioning and parallel bipartite maximum matching. We combine our parallelization techniques with two techniques to accelerate kernelization further: dependency checking that prunes reductions that cannot be applied, and reduction tracking that allows us to stop kernelization when reductions become less fruitful. Our algorithm produces kernels that are orders of magnitude smaller than the fastest kernelization methods, while having a similar execution time. Furthermore, our algorithm is able to compute kernels with size comparable to the smallest known kernels, but up to two orders of magnitude faster than previously possible. Finally, we show that our kernelization algorithm can be used to accelerate existing state-of-the-art heuristic algorithms, allowing us to find larger independent sets faster on large real-world networks and synthetic instances.
    Comment: Extended version
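    As a concrete illustration of what a kernel is, here is a toy sequential kernelizer using only the two simplest folklore rules for maximum independent set (isolated vertices and degree-1 vertices); the paper's contribution is applying a much richer rule set in parallel, so treat this as illustrative only:

    # Toy kernelization for Maximum Independent Set with the degree-0 and
    # degree-1 rules. Graph: dict mapping vertex -> set of neighbours.

    def kernelize_mis(graph):
        """Returns (kernel, forced): `forced` are vertices some maximum
        independent set is guaranteed to contain; `kernel` is the residue."""
        g = {v: set(nb) for v, nb in graph.items()}
        forced = []
        changed = True
        while changed:
            changed = False
            for v in list(g):
                if v not in g:
                    continue
                if len(g[v]) == 0:        # degree-0: always take v
                    forced.append(v)
                    del g[v]
                    changed = True
                elif len(g[v]) == 1:      # degree-1: taking v is never worse
                    (u,) = g[v]
                    forced.append(v)
                    for w in (v, u):      # remove v and its neighbour u
                        for x in g.pop(w):
                            g[x].discard(w)
                    changed = True
        return g, forced

    # Path a-b-c: kernelize_mis({'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}})
    # returns an empty kernel with forced = ['a', 'c'].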

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the $\mathrm{P}\neq\mathrm{NP}$ assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
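    The notion of "easy-on-average" has a standard formalization due to Levin; the following is a sketch of that textbook definition, stated here for orientation rather than as a claim about which variant the survey adopts:

    % A distributional problem is a pair (L, D): a language L with an
    % ensemble of distributions D = {D_n} on instances. An algorithm with
    % running time t(x) is polynomial on average over D if, for some
    % constant eps > 0,
    \[
      \mathbb{E}_{x \sim D_n}\!\left[\frac{t(x)^{\varepsilon}}{|x|}\right]
      = O(1),
    \]
    % a formulation that, unlike the naive requirement E[t(x)] = poly(n),
    % behaves well under polynomial-time reductions.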

    Making Queries Tractable on Big Data with Preprocessing

    A query class is traditionally considered tractable if there exists a polynomial-time (PTIME) algorithm to answer its queries. When it comes to big data, however, PTIME algorithms often become infeasible in practice. A traditional and effective approach to coping with this is to preprocess data off-line, so that queries in the class can be subsequently evaluated on the data efficiently. This paper aims to provide a formal foundation for this approach in terms of computational complexity. (1) We propose a set of Π-tractable queries, denoted by $\Pi\mathrm{T}^0_Q$, to characterize classes of queries that can be answered in parallel poly-logarithmic time (NC) after PTIME preprocessing. (2) We show that several natural query classes are Π-tractable and are feasible on big data. (3) We also study a set $\Pi\mathrm{T}_Q$ of query classes that can be effectively converted to Π-tractable queries by re-factorizing their data and queries for preprocessing. We introduce a form of NC reductions to characterize such conversions. (4) We show that a natural query class is complete for $\Pi\mathrm{T}_Q$. (5) We also show that $\Pi\mathrm{T}^0_Q \subset \mathrm{P}$ unless $\mathrm{P} = \mathrm{NC}$, i.e., the set $\Pi\mathrm{T}^0_Q$ of all Π-tractable queries is properly contained in the set P of all PTIME queries. Nonetheless, $\Pi\mathrm{T}_Q = \mathrm{P}$, i.e., all PTIME query classes can be made Π-tractable via proper re-factorizations. This work is a step towards understanding the tractability of queries in the context of big data.
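    The pattern being formalized (PTIME preprocessing, then poly-logarithmic per-query work) is the familiar index-then-query idea. A toy, hypothetical stand-in, not one of the paper's constructions:

    # Toy preprocess-then-query example: O(n log n) offline work buys
    # O(log n) per-query work afterwards. The index and the query are
    # illustrative stand-ins for the paper's Pi-tractable classes.
    import bisect

    class RangeCountIndex:
        def __init__(self, values):
            # Offline preprocessing: sort once per dataset (PTIME).
            self.sorted_values = sorted(values)

        def count_in_range(self, lo, hi):
            # Online query: two binary searches, O(log n) each.
            left = bisect.bisect_left(self.sorted_values, lo)
            right = bisect.bisect_right(self.sorted_values, hi)
            return right - left

    # RangeCountIndex([7, 1, 5, 3, 9]).count_in_range(2, 7) -> 3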

    Breaking Instance-Independent Symmetries In Exact Graph Coloring

    Code optimization and high level synthesis can be posed as constraint satisfaction and optimization problems, such as graph coloring used in register allocation. Graph coloring is also used to model more traditional CSPs relevant to AI, such as planning, time-tabling and scheduling. Provably optimal solutions may be desirable for commercial and defense applications. Additionally, for applications such as register allocation and code optimization, naturally occurring instances of graph coloring are often small and can be solved optimally. A recent wave of improvements in algorithms for Boolean satisfiability (SAT) and 0-1 Integer Linear Programming (ILP) suggests generic problem-reduction methods, rather than problem-specific heuristics, because (1) heuristics may be upset by new constraints, (2) heuristics tend to ignore structure, and (3) many relevant problems are provably inapproximable. Problem reductions often lead to highly symmetric SAT instances, and symmetries are known to slow down SAT solvers. In this work, we compare several avenues for symmetry breaking, in particular when certain kinds of symmetry are present in all generated instances. Our focus on reducing CSPs to SAT allows us to leverage recent dramatic improvements in SAT solvers and automatically benefit from future progress. We can use a variety of black-box SAT solvers without modifying their source code because our symmetry-breaking techniques are static, i.e., we detect symmetries and add symmetry breaking predicates (SBPs) during pre-processing. An important result of our work is that among the types of instance-independent SBPs we studied and their combinations, the simplest and least complete constructions are the most effective. Our experiments also clearly indicate that instance-independent symmetries should mostly be processed together with instance-specific symmetries rather than at the specification level, contrary to what has been suggested in the literature.
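    Concretely, a static SBP is just an extra clause added during preprocessing, which is why unmodified black-box SAT solvers can be used afterwards. The sketch below handles the simplest case, a symmetry that swaps two variables, where the lex-leader constraint of Crawford et al. collapses to one binary clause; the general construction is more involved, and this is an illustration, not the paper's full machinery:

    # Static symmetry breaking for the simplest symmetry type: a
    # permutation that swaps two variables. Clauses use the DIMACS
    # convention (lists of nonzero ints; negative = negated literal).

    def sbp_for_swap(i, j):
        """For a symmetry exchanging variables i and j (i < j), the
        lex-leader constraint x <=_lex sigma(x) reduces to the single
        clause (!x_i or x_j), i.e. [-i, j] in DIMACS form."""
        return [-i, j]

    def add_static_sbps(clauses, swap_symmetries):
        """Append one SBP clause per swap symmetry; returns augmented CNF."""
        return clauses + [sbp_for_swap(i, j) for i, j in swap_symmetries]

    # Example: if swapping x1 and x2 maps every clause of an instance to
    # another clause of the same instance, adding [-1, 2] prunes the
    # symmetric half of the search space without losing all solutions.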