
    Improved Lower Bounds for Reachability in Vector Addition Systems

    We investigate the computational complexity of the reachability problem for vector addition systems (or, equivalently, Petri nets), the central algorithmic problem in the verification of concurrent systems. Concerning its complexity, after 40 years of stagnation, a non-elementary lower bound was shown recently: the problem needs a tower of exponentials of time or space, where the height of the tower is linear in the input size. We improve on this lower bound by increasing the height of the tower from linear to exponential. As a side effect, we obtain better lower bounds for vector addition systems of fixed dimension.
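    As a point of reference for the model (not for the paper's techniques), here is a minimal sketch of VAS reachability semantics: a configuration is a vector of non-negative integers, and a transition adds an integer vector to it, enabled only if every coordinate stays non-negative. The bounded breadth-first search below merely illustrates the semantics; the abstract's point is precisely that any complete decision procedure needs non-elementary resources. All names and the step bound are our own.

    ```python
    from collections import deque

    def vas_reachable(transitions, source, target, max_steps):
        """Bounded-depth search for reachability in a vector addition system.

        transitions: list of integer tuples, added to the configuration;
        a transition is enabled only if every coordinate stays >= 0.
        An illustration of the semantics on small instances, not a
        decision procedure for the general problem.
        """
        source, target = tuple(source), tuple(target)
        seen = {source}
        frontier = deque([(source, 0)])
        while frontier:
            config, steps = frontier.popleft()
            if config == target:
                return True
            if steps == max_steps:
                continue
            for t in transitions:
                nxt = tuple(c + d for c, d in zip(config, t))
                if all(v >= 0 for v in nxt) and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + 1))
        return False

    # Example: move two tokens from the first place to the second.
    assert vas_reachable([(-1, 1)], source=(2, 0), target=(0, 2), max_steps=4)
    ```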

    Minimizing the stabbing number of matchings, trees, and triangulations

    The (axis-parallel) stabbing number of a given set of line segments is the maximum number of segments that can be intersected by any one (axis-parallel) line. This paper deals with finding perfect matchings, spanning trees, or triangulations of minimum stabbing number for a given set of points. The complexity of these problems has been a long-standing open question; in fact, it is one of the original 30 outstanding open problems in computational geometry on the list by Demaine, Mitchell, and O'Rourke. We answer this question in the negative for a number of minimum stabbing problems, showing them NP-hard by means of a general proof technique, which also implies non-trivial lower bounds on their approximability. On the positive side, we propose a cut-based integer programming formulation for minimizing the stabbing number of matchings and spanning trees. We obtain lower bounds (in polynomial time) from the corresponding linear programming relaxations, and show that an optimal fractional solution always contains an edge of at least constant weight. This result constitutes a crucial step towards a constant-factor approximation via an iterated rounding scheme. In computational experiments we demonstrate that our approach allows for actually solving problems with up to several hundred points optimally or near-optimally. Comment: 25 pages, 12 figures, LaTeX. To appear in "Discrete and Computational Geometry". Previous version (extended abstract) appears in SODA 2004, pp. 430-43
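    To make the central definition concrete, the sketch below computes the axis-parallel stabbing number of a fixed set of segments directly from the definition: a vertical line x = c stabs a segment exactly when c lies in the segment's x-projection, so the maximum over vertical lines is the maximum overlap among the x-projection intervals, and symmetrically for horizontal lines. This is a straightforward illustration, not the paper's integer-programming approach; all names are ours.

    ```python
    def axis_parallel_stabbing_number(segments):
        """Maximum number of segments hit by any one vertical or horizontal
        line.  Segments are ((x1, y1), (x2, y2)) pairs with closed endpoints.
        """
        def max_interval_overlap(intervals):
            # Sweep over endpoints; at equal coordinates, openings (0) sort
            # before closings (1), so touching intervals count as overlapping.
            events = []
            for a, b in intervals:
                lo, hi = min(a, b), max(a, b)
                events.append((lo, 0))
                events.append((hi, 1))
            events.sort()
            best = depth = 0
            for _, kind in events:
                depth += 1 if kind == 0 else -1
                best = max(best, depth)
            return best

        x_projs = [(p[0], q[0]) for p, q in segments]
        y_projs = [(p[1], q[1]) for p, q in segments]
        return max(max_interval_overlap(x_projs), max_interval_overlap(y_projs))

    # Two crossing diagonals: a vertical line through the middle stabs both.
    assert axis_parallel_stabbing_number([((0, 0), (2, 2)), ((0, 2), (2, 0))]) == 2
    ```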

    Hardness magnification for natural problems

    We show that for several natural problems of interest, complexity lower bounds that are barely non-trivial imply super-polynomial or even exponential lower bounds in strong computational models. We term this phenomenon "hardness magnification". Our examples of hardness magnification include:

    1. Let MCSP[s] be the decision problem whose YES instances are truth tables of functions with circuit complexity at most s(n). We show that if MCSP[2^√n] cannot be solved on average with zero error by formulas of linear (or even sub-linear) size, then NP does not have polynomial-size formulas. In contrast, Hirahara and Santhanam (2017) recently showed that MCSP[2^√n] cannot be solved in the worst case by formulas of nearly quadratic size.

    2. If there is a c > 0 such that for each positive integer d there is an ε > 0 such that the problem of checking whether an n-vertex graph in the adjacency-matrix representation has a vertex cover of size (log n)^c cannot be solved by depth-d AC^0 circuits of size m^{1+ε}, where m = Θ(n^2), then NP does not have polynomial-size formulas.

    3. Let (α, β)-MCSP[s] be the promise problem whose YES instances are truth tables of functions that are α-approximable by a circuit of size s(n), and whose NO instances are truth tables of functions that are not β-approximable by a circuit of size s(n). We show that for arbitrary 1/2 < β < α ≤ 1, if (α, β)-MCSP[2^√n] cannot be solved by randomized algorithms with random access to the input running in sublinear time, then NP is not contained in BPP.

    4. If for each probabilistic quasi-linear time machine M using poly-logarithmically many random bits that is claimed to solve Satisfiability, there is a deterministic polynomial-time machine that on infinitely many input lengths n either identifies a satisfiable instance of bit-length n on which M does not accept with high probability or an unsatisfiable instance of bit-length n on which M does not reject with high probability, then NEXP is not contained in BPP.

    5. Given functions s, c : N → N where s > c, let MKtP[c, s] be the promise problem whose YES instances are strings of Kt complexity at most c(N) and whose NO instances are strings of Kt complexity greater than s(N). We show that if there is a δ > 0 such that for each ε > 0, MKtP[N^ε, N^ε + 5 log(N)] requires Boolean circuits of size N^{1+δ}, then EXP is not contained in SIZE(poly).

    For each of the cases of magnification above, we observe that standard hardness assumptions imply much stronger lower bounds for these problems than we require for magnification. We further explore magnification as an avenue to proving strong lower bounds, and argue that magnification circumvents the "natural proofs" barrier of Razborov and Rudich (1997). Examining some standard proof techniques, we find that they fall just short of proving lower bounds via magnification. As one of our main open problems, we ask whether there are other meta-mathematical barriers to proving lower bounds that rule out approaches of this kind.
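    For intuition about the objects in items 1 and 3, the sketch below decides MCSP[s] by brute force: it enumerates all straight-line circuits of at most s AND/OR/NOT gates over the inputs, representing every wire's truth table as a 2^n-bit integer. This is only a toy illustration of the decision problem (usable for tiny n and s, with exponential running time); it has nothing to do with the magnification theorems, and all names and the gate basis are our own choices.

    ```python
    def mcsp_brute_force(truth_table, n, s):
        """Does some circuit with at most s gates (binary AND/OR, unary NOT,
        each counted as one gate) over n inputs compute the function whose
        truth table is the 2^n-bit integer truth_table (bit j = f(j))?
        """
        rows = 1 << n
        full = (1 << rows) - 1
        # Truth table of input x_i: bit j is the i-th bit of row index j.
        inputs = frozenset(
            sum(((j >> i) & 1) << j for j in range(rows)) for i in range(n)
        )

        def extend(wires, gates_left):
            if truth_table in wires:
                return True
            if gates_left == 0:
                return False
            for a in wires:
                if extend(wires | {(~a) & full}, gates_left - 1):   # NOT
                    return True
                for b in wires:
                    if extend(wires | {a & b}, gates_left - 1):     # AND
                        return True
                    if extend(wires | {a | b}, gates_left - 1):     # OR
                        return True
            return False

        return extend(inputs, s)

    # XOR of two inputs (truth table 0110) needs more than one gate, but
    # four suffice: (x OR y) AND NOT (x AND y).
    assert not mcsp_brute_force(0b0110, n=2, s=1)
    assert mcsp_brute_force(0b0110, n=2, s=4)
    ```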

    Quantum vs. Classical Read-once Branching Programs

    The paper presents the first nontrivial upper and lower bounds for (non-oblivious) quantum read-once branching programs. It is shown that the computational power of quantum and classical read-once branching programs is incomparable in the following sense: (i) A simple, explicit Boolean function on 2n input bits is presented that is computable by error-free quantum read-once branching programs of size O(n^3), while each classical randomized read-once branching program and each quantum OBDD for this function with bounded two-sided error requires size 2^{\Omega(n)}. (ii) Quantum branching programs reading each input variable exactly once are shown to require size 2^{\Omega(n)} for computing the set-disjointness function DISJ_n from communication complexity theory with two-sided error bounded by a constant smaller than 1/2 - 2\sqrt{3}/7. This function is trivially computable even by deterministic OBDDs of linear size. The technically most involved part is the proof of the lower bound in (ii). For this, a new model of quantum multi-partition communication protocols is introduced and a suitable extension of the information cost technique of Jain, Radhakrishnan, and Sen (2003) to this model is presented. Comment: 35 pages. Lower bound for disjointness: an error in the application of information theory has been corrected, and regularity of quantum read-once BPs (each variable read at least once) has been added as an additional assumption of the theorem. Some more informal explanations added.
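    To illustrate why the set-disjointness function from point (ii) is easy for classical OBDDs, the sketch below evaluates DISJ_n while reading the 2n input bits exactly once, in the fixed interleaved order x_1, y_1, x_2, y_2, ...: one bit of state (the last x-bit seen) suffices, which is what keeps the deterministic OBDD linear in size. The interleaved input encoding and the function names are our own conventions.

    ```python
    def disjointness(x, y):
        """DISJ_n: 1 iff the sets encoded by 0/1 vectors x and y are
        disjoint, i.e. there is no index i with x[i] = y[i] = 1."""
        return int(all(not (a and b) for a, b in zip(x, y)))

    def disj_read_once(bits):
        """Evaluate DISJ_n reading the 2n bits once each, in the oblivious
        order x_1, y_1, x_2, y_2, ...  Only the last x-bit is remembered,
        mirroring a linear-size deterministic OBDD."""
        last_x = 0
        for i, b in enumerate(bits):
            if i % 2 == 0:          # an x-bit: remember it
                last_x = b
            elif last_x and b:      # matching y-bit: the sets intersect
                return 0
        return 1

    x, y = [1, 0, 1, 0], [0, 1, 0, 0]
    interleaved = [b for pair in zip(x, y) for b in pair]
    assert disjointness(x, y) == disj_read_once(interleaved) == 1
    ```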

    On Characterizing the Data Movement Complexity of Computational DAGs for Parallel Execution

    Technology trends are making the cost of data movement increasingly dominant, both in terms of energy and time, over the cost of performing arithmetic operations in computer systems. The fundamental ratio of aggregate data movement bandwidth to total computational power (also referred to as the machine balance parameter) in parallel computer systems is decreasing. It is therefore of considerable importance to characterize the inherent data movement requirements of parallel algorithms, so that the minimal architectural balance parameters required to support them on future systems can be well understood. In this paper, we develop an extension of the well-known red-blue pebble game to derive lower bounds on the data movement complexity for the parallel execution of computational directed acyclic graphs (CDAGs) on parallel systems. We model multi-node multi-core parallel systems, with the total physical memory distributed across the nodes (which are connected through some interconnection network) and a multi-level shared cache hierarchy for processors within a node. We also develop new techniques for lower bound characterization of non-homogeneous CDAGs. We demonstrate the use of the methodology by analyzing the CDAGs of several numerical algorithms, to develop lower bounds on data movement for their parallel execution.
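    For intuition about what the classical (sequential, single-level) red-blue pebble game counts, the sketch below simulates one schedule of a CDAG with a fast memory of bounded capacity and tallies the load/store moves, evicting least-recently-used values. It is a loose, simplified rendering: an operand may be evicted as soon as it has been read, and every eviction is counted as a store, so the tally approximates the game's I/O cost for this schedule. The paper's contribution, an extension of the game to multi-node, multi-level parallel systems, is not modeled here; the names and the LRU policy are our own choices.

    ```python
    from collections import OrderedDict

    def io_cost_of_schedule(predecessors, schedule, cache_size):
        """Count data movements (red<->blue pebble transitions) when the DAG
        is executed in the given topological order with at most cache_size
        values in fast memory.  predecessors maps vertex -> operand vertices.
        Simplified: every eviction is counted as a store, so the result is a
        conservative estimate (the game may discard values never needed again).
        """
        cache = OrderedDict()  # vertices holding a red pebble, in LRU order
        io = 0

        def touch(v, load):
            nonlocal io
            if v in cache:
                cache.move_to_end(v)       # mark as most recently used
                return
            if load:
                io += 1                    # blue -> red: load from slow memory
            while len(cache) >= cache_size:
                cache.popitem(last=False)  # evict the LRU value ...
                io += 1                    # ... red -> blue: store it
            cache[v] = True

        for v in schedule:
            for p in predecessors[v]:
                touch(p, load=True)        # operands must be in fast memory
            touch(v, load=False)           # result is computed in fast memory
        return io

    # Diamond DAG a -> {b, c} -> d with room for only two values at a time.
    dag = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
    print(io_cost_of_schedule(dag, ["a", "b", "c", "d"], cache_size=2))  # 4
    ```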