9 research outputs found

    DNF Sparsification and a Faster Deterministic Counting Algorithm

    Given a DNF formula $f$ on $n$ variables, the two natural size measures are the number of terms or size $s(f)$, and the maximum width of a term $w(f)$. It is folklore that short DNF formulas can be made narrow. We prove a converse, showing that narrow formulas can be sparsified. More precisely, any width-$w$ DNF, irrespective of its size, can be $\epsilon$-approximated by a width-$w$ DNF with at most $(w\log(1/\epsilon))^{O(w)}$ terms. We combine our sparsification result with the work of Luby and Velickovic to give a faster deterministic algorithm for approximately counting the number of satisfying solutions to a DNF. Given a formula on $n$ variables with $\mathrm{poly}(n)$ terms, we give a deterministic $n^{\tilde{O}(\log\log n)}$-time algorithm that computes an additive $\epsilon$-approximation to the fraction of satisfying assignments of $f$ for $\epsilon = 1/\mathrm{poly}(\log n)$. The previous best result, due to Luby and Velickovic from nearly two decades ago, had a run-time of $n^{\exp(O(\sqrt{\log\log n}))}$. Comment: To appear in the IEEE Conference on Computational Complexity, 201
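
    The folklore direction mentioned in the abstract (short DNFs can be made narrow) is simple enough to sketch: any term of width greater than log2(s/ε) is satisfied by fewer than an ε/s fraction of assignments, so dropping all such terms changes the accepted fraction by at most ε. The sketch below illustrates only that folklore step, not the paper's sparsification procedure; the list-of-literal-sets encoding of a DNF is a hypothetical choice of ours.

```python
import math

# A DNF is modeled as a list of terms; each term is a set of literals, and a
# literal is a pair (variable_index, polarity): (3, True) means x3, while
# (3, False) means NOT x3.  This encoding is illustrative, not prescribed.

def narrow(dnf, num_terms_bound, eps):
    """Folklore width reduction: drop every term wider than log2(s/eps).

    Each dropped term of width w > log2(s/eps) is satisfied by a 2**(-w) < eps/s
    fraction of assignments, so removing at most s such terms changes the
    accepted fraction by less than eps; the result eps-approximates the input.
    """
    width_cutoff = math.ceil(math.log2(num_terms_bound / eps))
    return [term for term in dnf if len(term) <= width_cutoff]

# Toy usage on a 3-term DNF over x0..x4.
f = [
    {(0, True), (1, False)},                                   # x0 AND NOT x1
    {(2, True)},                                               # x2
    {(0, True), (1, True), (2, False), (3, True), (4, True)},  # a width-5 term
]
g = narrow(f, num_terms_bound=len(f), eps=0.25)
print(len(f), "->", len(g), "terms")   # the width-5 term is dropped
```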

    Verifying proofs in constant depth

    In this paper we initiate the study of proof systems where verification of proofs proceeds by NC0 circuits. We investigate the question of which languages admit proof systems in this very restricted model. Formulated alternatively, we ask which languages can be enumerated by NC0 functions. Our results show that the answer to this problem is not determined by the complexity of the language. On the one hand, we construct NC0 proof systems for a variety of languages ranging from regular to NP-complete. On the other hand, we show by combinatorial methods that even easy regular languages such as Exact-OR do not admit NC0 proof systems. We also present a general construction of proof systems for regular languages with strongly connected NFAs.
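
    To make the model concrete: a proof system for a language L is a surjective function mapping arbitrary "proof" strings onto L, and in this restricted model it must be computable by circuits in which every output bit depends on O(1) input bits. The sketch below is a toy illustration of ours under that reading of the definition; the language (bit strings ending in 1) and its one-bit-locality enumerator are a hypothetical example, not a construction from the paper.

```python
# Toy proof system with constant-locality verification: every output bit of
# the enumerator depends on at most one input bit (a projection), so it is
# computable in NC0.  The target language "bit strings ending in 1" is a
# hypothetical example chosen only to illustrate the definition.

def enumerate_ends_in_one(proof: str) -> str:
    """Map an arbitrary nonempty bit string (the 'proof') onto the language."""
    return proof[:-1] + "1"        # last output bit is the constant 1

def in_language(x: str) -> bool:
    return x.endswith("1")

# Soundness: every output of the enumerator lies in the language.
assert all(in_language(enumerate_ends_in_one(p)) for p in ["0", "10", "0110"])

# Completeness (surjectivity): every word of the language is some output;
# in particular, each word is the image of itself.
assert all(enumerate_ends_in_one(w) == w for w in ["1", "01", "1011"])

print("toy constant-locality proof system checks passed")
```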

    Better Pseudorandom Generators from Milder Pseudorandom Restrictions

    We present an iterative approach to constructing pseudorandom generators, based on the repeated application of mild pseudorandom restrictions. We use this template to construct pseudorandom generators for combinatorial rectangles and read-once CNFs and a hitting set generator for width-3 branching programs, all of which achieve near-optimal seed-length even in the low-error regime: we get seed-length $O(\log(n/\epsilon))$ for error $\epsilon$. Previously, only constructions with seed-length $O(\log^{3/2} n)$ or $O(\log^{2} n)$ were known for these classes with polynomially small error. The (pseudo)random restrictions we use are milder than those typically used for proving circuit lower bounds in that we only set a constant fraction of the bits at a time. While such restrictions do not simplify the functions drastically, we show that they can be derandomized using small-bias spaces. Comment: To appear in FOCS 201
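
    To make the low-error claim concrete, here is the seed-length comparison spelled out for polynomially small error; instantiating the error as ε = n^{-c} is our own illustrative choice, and the bounds themselves are the ones quoted above.

```latex
% Seed length at polynomially small error \epsilon = n^{-c} (illustrative instantiation):
\begin{align*}
  \text{this construction: } & O(\log(n/\epsilon)) = O\bigl(\log(n \cdot n^{c})\bigr)
                               = O\bigl((c+1)\log n\bigr) = O(\log n),\\
  \text{previously known: }  & O(\log^{3/2} n) \ \text{or} \ O(\log^{2} n).
\end{align*}
```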

    Deterministic search for CNF satisfying assignments in almost polynomial time

    We consider the fundamental derandomization problem of deterministically finding a satisfying assignment to a CNF formula that has many satisfying assignments. We give a deterministic algorithm which, given an $n$-variable $\mathrm{poly}(n)$-clause CNF formula $F$ that has at least $\varepsilon 2^n$ satisfying assignments, runs in time $n^{\tilde{O}(\log\log n)^2}$ for $\varepsilon \ge 1/\mathrm{polylog}(n)$ and outputs a satisfying assignment of $F$. Prior to our work the fastest known algorithm for this problem was simply to enumerate over all seeds of a pseudorandom generator for CNFs; using the best known PRGs for CNFs [DETT10], this takes time $n^{\tilde{\Omega}(\log n)}$ even for constant $\varepsilon$. Our approach is based on a new general framework relating deterministic search and deterministic approximate counting, which we believe may find further applications.
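
    The gap to seed enumeration can be spelled out: enumerating all seeds of a PRG with seed length s costs 2^s · poly(n) deterministic time, so the n^{Ω̃(log n)} figure quoted above corresponds to a seed length of Ω̃(log² n), while the new running time is n raised to only a poly(log log n) power, i.e. almost polynomial. The rewriting below is our own arithmetic; the two running-time bounds are the ones from the abstract.

```latex
% A PRG with seed length s can be enumerated over in deterministic time 2^{s} \cdot \mathrm{poly}(n).
\begin{align*}
  \text{seed enumeration with the PRGs of [DETT10]: } &
      n^{\tilde{\Omega}(\log n)} = 2^{\tilde{\Omega}(\log^{2} n)}, \\
  \text{this work: } &
      n^{\tilde{O}(\log\log n)^{2}} = 2^{O(\log n \,(\log\log n)^{O(1)})}.
\end{align*}
```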

    P vs NP: P is Equal to NP: Desired Proof

    Computations and computational complexity are fundamental to mathematics and to all of computer science, including web load time, cryptography (cryptocurrency mining), cybersecurity, artificial intelligence, game theory, multimedia processing, computational physics, biology (for instance, protein structure prediction), and chemistry. The P vs. NP problem has been singled out as one of the most challenging open problems in computer science, and it has great importance, since solving it would essentially solve all of the algorithmic problems we have today. However, the existing complexity is deprecated and does not solve the complex computational tasks that appear in the new digital age as efficiently as needed; therefore, we need to realize a new complexity to solve these tasks more rapidly and easily. This paper presents a proof of the equality of the complexity classes P and NP: the NP problem is not harder to compute than to verify in polynomial time if we forget recursion, which takes exponential running time, and go to regression only (every problem in NP can be solved in exponential time and is therefore recursive; this is a key existing concept, but recursion does not solve the NP problems efficiently). The paper's goal is to prove the existence of an algorithm solving the NP task in polynomial running time. We get the desired reduction of the exponential problem to the polynomial problem, which takes O(log n) complexity.

    The isomorphism conjecture for constant depth reductions

    For any class C closed under TC0 reductions, and for any measure u of uniformity containing Dlogtime, it is shown that all sets complete for C under u-uniform AC0 reductions are isomorphic under u-uniform AC0-computable isomorphisms.

    NP Complete Problems-A Minimalist Mutatis Mutandis Model- Testament Of The Panoply

    A concatenation model for the NP-complete problems is given. Stability analysis and solutional behavior are conducted. Due to space constraints, we do not go into detailed specification, expatiation, and enucleation of the diverse subjects and fields to which the constituents belong, in the sense of the widest commonality term.

    Reducing the complexity of reductions

    We build on the recent progress regarding isomorphisms of complete sets that was reported in Agrawal et al. (1998). In that paper, it was shown that all sets that are complete under (non-uniform) AC0 reductions are isomorphic under isomorphisms computable and invertible via (non-uniform) depth-three AC0 circuits. One of the main tools in proving the isomorphism theorem in Agrawal et al. (1998) is a "Gap Theorem", showing that all sets complete under AC0 reductions are in fact already complete under NC0 reductions. The following questions were left open in that paper:
    1. Does the "gap" between NC0 and AC0 extend further? In particular, is every set complete under polynomial-time reducibility already complete under NC0 reductions?
    2. Does a uniform version of the isomorphism theorem hold?
    3. Is depth three optimal, or are the complete sets isomorphic under isomorphisms computable by depth-two circuits?
    We answer all of these questions. In particular, we prove that the Berman-Hartmanis isomorphism conjecture is true for P-uniform AC0 reductions. More precisely, we show that for any class C closed under uniform TC0-computable many-one reductions, the following three theorems hold:
    1. If C contains sets that are complete under a notion of reduction at least as strong as Dlogtime-uniform AC0[mod 2] reductions, then there are such sets that are not complete under (even non-uniform) AC0 reductions.
    2. The sets complete for C under P-uniform AC0 reductions are all isomorphic under isomorphisms computable and invertible by P-uniform AC0 circuits of depth three.
    3. There are sets complete for C under Dlogtime-uniform AC0 reductions that are not isomorphic under any isomorphism computed by (even non-uniform) AC0 circuits of depth two.
    To prove the second theorem, we show how to derandomize a version of the switching lemma, which may be of independent interest. (We have recently learned that this result is originally due to Ajtai and Wigderson, but it has not been published.)
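
    For reference, an NC0 reduction computes each output bit from a constant number of input bits; the "Gap Theorem" says completeness under AC0 reductions already yields completeness under reductions of this very local form. The sketch below is a toy projection reduction between two hypothetical languages of our own choosing, included only to make that notion of reduction concrete; it is not a construction from the paper.

```python
from itertools import product

# Toy NC0-computable many-one reduction: every output bit is a single input
# bit (a projection), hence computable by a constant-depth, bounded-locality
# circuit family.  The languages A and B are hypothetical examples.

def in_A(x: str) -> bool:
    """A = { nonempty bit strings whose first bit is 1 }."""
    return len(x) > 0 and x[0] == "1"

def in_B(y: str) -> bool:
    """B = { nonempty bit strings whose last bit is 1 }."""
    return len(y) > 0 and y[-1] == "1"

def reduce_A_to_B(x: str) -> str:
    """Cyclic shift: output bit i is input bit i+1, and the last output bit
    is input bit 0, so each output bit depends on exactly one input bit."""
    return x[1:] + x[:1]

# Brute-force check of the many-one condition: x in A  iff  f(x) in B.
for n in range(1, 8):
    for bits in product("01", repeat=n):
        x = "".join(bits)
        assert in_A(x) == in_B(reduce_A_to_B(x))

print("projection reduction verified on all strings of length <= 7")
```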