    Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in SAT-Based Planning

    In Verification and in (optimal) AI Planning, a successful method is to formulate the application as Boolean satisfiability (SAT), and solve it with state-of-the-art DPLL-based procedures. There is a lack of understanding of why this works so well. Focussing on the Planning context, we identify a form of problem structure concerned with the symmetrical or asymmetrical nature of the cost of achieving the individual planning goals. We quantify this sort of structure with a simple numeric parameter called AsymRatio, ranging between 0 and 1. We run experiments in 10 benchmark domains from the International Planning Competitions since 2000; we show that AsymRatio is a good indicator of SAT solver performance in 8 of these domains. We then examine carefully crafted synthetic planning domains that allow control of the amount of structure, and that are clean enough for a rigorous analysis of the combinatorial search space. The domains are parameterized by size, and by the amount of structure. The CNFs we examine are unsatisfiable, encoding one planning step less than the length of the optimal plan. We prove upper and lower bounds on the size of the best possible DPLL refutations, under different settings of the amount of structure, as a function of size. We also identify the best possible sets of branching variables (backdoors). With minimum AsymRatio, we prove exponential lower bounds, and identify minimal backdoors of size linear in the number of variables. With maximum AsymRatio, we identify logarithmic DPLL refutations (and backdoors), showing a doubly exponential gap between the two structural extreme cases. The reasons for this behavior -- the proof arguments -- illuminate the prototypical patterns of structure causing the empirical behavior observed in the competition benchmarks.
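
    A minimal sketch of how a goal-asymmetry measure like AsymRatio could be computed, assuming (hypothetically, since the abstract does not spell out the definition) that it is the cost of the hardest individual goal divided by the cost of achieving the whole goal set; consult the paper for the precise definition.

```python
# Hypothetical sketch of an AsymRatio-style measure: ratio of the
# hardest individual goal's cost to the joint cost of all goals.
# The paper's exact definition may differ; this is for intuition only.

def asym_ratio(individual_goal_costs, joint_goal_cost):
    """Return a value in (0, 1]: close to 1 when a single goal dominates
    the total cost (asymmetric), small when costs are spread evenly."""
    if joint_goal_cost <= 0:
        raise ValueError("joint goal cost must be positive")
    return max(individual_goal_costs) / joint_goal_cost

# Four goals costing 2 steps each, 8 steps jointly -> 0.25 (symmetric).
print(asym_ratio([2, 2, 2, 2], 8))   # 0.25
# One goal dominating 7 of the 8 joint steps -> 0.875 (asymmetric).
print(asym_ratio([7, 2, 2, 2], 8))   # 0.875
```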

    On the proof complexity of Paris-Harrington and off-diagonal Ramsey tautologies

    We study the proof complexity of Paris-Harrington's Large Ramsey Theorem for bi-colorings of graphs and of off-diagonal Ramsey's Theorem. For Paris-Harrington, we prove a non-trivial conditional lower bound in Resolution and a non-trivial upper bound in bounded-depth Frege. The lower bound is conditional on a (very reasonable) hardness assumption for a weak (quasi-polynomial) Pigeonhole principle in RES(2). We show that under such an assumption, there is no refutation of the Paris-Harrington formulas of size quasipolynomial in the number of propositional variables. The proof technique for the lower bound extends the idea of using a combinatorial principle to blow up a counterexample for another combinatorial principle beyond the threshold of inconsistency. A strong link with the proof complexity of an unbalanced off-diagonal Ramsey principle is established. This is obtained by adapting some constructions due to Erdős and Mills. We prove a non-trivial Resolution lower bound for a family of such off-diagonal Ramsey principles.
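
    Since the lower bound is conditional on the hardness of a weak pigeonhole principle, it may help to recall the standard CNF encoding of PHP^m_n; the sketch below uses our own encoding choice (not taken from the paper), and the formula is unsatisfiable whenever m > n.

```python
# Standard CNF encoding of the pigeonhole principle PHP^m_n.
# Variable v(i, j) asserts "pigeon i sits in hole j"; the weak,
# quasi-polynomial variant used in the paper takes m much larger than n.

def php_clauses(m, n):
    """Return the clauses of PHP^m_n as lists of DIMACS-style literals."""
    v = lambda i, j: i * n + j + 1      # 1-based variable index
    clauses = []
    # Totality: every pigeon occupies some hole.
    for i in range(m):
        clauses.append([v(i, j) for j in range(n)])
    # Injectivity: no two pigeons share a hole.
    for j in range(n):
        for i1 in range(m):
            for i2 in range(i1 + 1, m):
                clauses.append([-v(i1, j), -v(i2, j)])
    return clauses

# PHP^3_2: 3 pigeons, 2 holes -> 3 totality + 6 injectivity clauses.
for clause in php_clauses(3, 2):
    print(clause)
```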

    A Quantitative Study of Pure Parallel Processes

    In this paper, we study the interleaving -- or pure merge -- operator that most often characterizes parallelism in concurrency theory. This operator is a principal cause of the so-called combinatorial explosion that makes the analysis of process behaviours, e.g. by model-checking, very hard, at least from the point of view of computational complexity. The originality of our approach is to study this combinatorial explosion phenomenon on average, relying on advanced analytic combinatorics techniques. We study various measures that contribute to a better understanding of process behaviours represented as plane rooted trees: the number of runs (corresponding to the width of the trees), the expected total size of the trees, as well as their overall shape. Two practical outcomes of our quantitative study are also presented: (1) a linear-time algorithm to compute the probability of a concurrent run prefix, and (2) an efficient algorithm for uniform random sampling of concurrent runs. These provide interesting responses to the combinatorial explosion problem.
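
    For intuition about the "number of runs" measure, here is a short sketch (under our own tree encoding, not necessarily the paper's) that counts the interleavings of a pure-merge process whose causal order forms a rooted tree, using the classical hook length formula for forests: the number of runs is n! divided by the product of all subtree sizes.

```python
# Count the runs (interleavings) of a process whose causal order is a
# rooted tree, via the hook length formula for forests:
#   #runs = n! / prod over nodes v of |subtree(v)|.
# Trees are encoded here as (label, [children]) pairs -- our own choice.

from math import factorial, prod

def subtree_sizes(tree, sizes):
    """Append the size of every subtree of `tree` to `sizes`."""
    _, children = tree
    size = 1 + sum(subtree_sizes(c, sizes) for c in children)
    sizes.append(size)
    return size

def number_of_runs(tree):
    sizes = []
    n = subtree_sizes(tree, sizes)
    return factorial(n) // prod(sizes)

# a.(b || c.d): after a, action b interleaves with the sequence c.d.
t = ("a", [("b", []), ("c", [("d", [])])])
print(number_of_runs(t))   # 3, i.e. abcd, acbd, acdb
```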

    On paths-based criteria for polynomial time complexity in proof-nets

    Girard's Light Linear Logic (LLL) characterized polynomial time in the proofs-as-programs paradigm with a bound on cut elimination. This logic relied on a stratification principle and a "one-door" principle, which were later generalized, respectively, in the systems L^4 and L^3a. Each system came with its own complex proof of Ptime soundness. In this paper we propose a broad sufficient criterion for Ptime soundness of linear logic subsystems, based on the study of paths inside the proof-nets, which factorizes the soundness proofs of existing systems and may be used for future systems. As an additional gain, our bound holds for any reduction strategy, whereas most bounds in the literature hold only for a particular strategy.

    Exponential Time Complexity of the Permanent and the Tutte Polynomial

    We show conditional lower bounds for well-studied #P-hard problems: (a) The number of satisfying assignments of a 2-CNF formula with n variables cannot be counted in time exp(o(n)), and the same holds for counting the independent sets of an n-vertex graph. (b) The permanent of an n x n matrix with entries 0 and 1 cannot be computed in time exp(o(n)). (c) The Tutte polynomial of an n-vertex graph cannot be computed in time exp(o(n)) at most evaluation points (x,y) in the case of multigraphs, and it cannot be computed in time exp(o(n/polylog n)) in the case of simple graphs. Our lower bounds are relative to (variants of) the Exponential Time Hypothesis (ETH), which says that the satisfiability of n-variable 3-CNF formulas cannot be decided in time exp(o(n)). We relax this hypothesis by introducing its counting version #ETH, namely that satisfying assignments cannot be counted in time exp(o(n)). In order to use #ETH for our lower bounds, we transfer the sparsification lemma for d-CNF formulas to the counting setting.
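
    For context on bound (b), the classical exponential-time upper bound for the permanent is Ryser's inclusion-exclusion formula, which runs in exp(O(n)) time and so matches the conditional exp(o(n)) lower bound up to polynomial factors. The sketch below is the textbook algorithm, not code from the paper.

```python
# Ryser's formula for the permanent, an O(2^n * n^2) algorithm:
#   perm(A) = (-1)^n * sum over nonempty S subset of columns of
#             (-1)^|S| * prod_i sum_{j in S} a[i][j].
# (The empty set contributes 0 and is skipped.)

from itertools import combinations

def permanent(a):
    n = len(a)
    total = 0
    for k in range(1, n + 1):
        for s in combinations(range(n), k):
            prod_rows = 1
            for i in range(n):
                prod_rows *= sum(a[i][j] for j in s)
            total += (-1) ** k * prod_rows
    return (-1) ** n * total

# Permanent of the all-ones 3x3 matrix is 3! = 6.
print(permanent([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))   # 6
```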