
    Negation-Limited Formulas

    We give an efficient structural decomposition theorem for formulas that depends on their negation complexity and demonstrate its power with the following applications. We prove that every formula that contains t negation gates can be shrunk using a random restriction to a formula of size O(t) with the shrinkage exponent of monotone formulas. As a result, the shrinkage exponent of formulas that contain a constant number of negation gates is equal to the shrinkage exponent of monotone formulas. We give an efficient transformation of formulas with t negation gates to circuits with log(t) negation gates. This transformation provides a generic way to cast results for negation-limited circuits to the setting of negation-limited formulas. For example, using a result of Rossman (CCC'15), we obtain an average-case lower bound for polynomial-size formulas on n variables with n^{1/2-epsilon} negations. In addition, we prove a lower bound on the number of negations required to compute one-way permutations by polynomial-size formulas.
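
    A minimal sketch (not from the paper) of the p-random restriction operation that underlies these shrinkage statements: each variable is fixed to 0 or 1 with probability (1-p)/2 and left free with probability p, and the restricted formula is then simplified. The tuple encoding of formulas, the simplifier, and the parameters below are illustrative assumptions, not the paper's decomposition theorem.

        import random

        def restrict(n, p, rng):
            """Fix each of n variables to 0/1 with prob (1-p)/2 each; leave it free with prob p."""
            rho = {}
            for i in range(n):
                r = rng.random()
                rho[i] = None if r < p else int(r < p + (1 - p) / 2)
            return rho

        def simplify(f, rho):
            """f is ('var', i), ('not', g), ('and', g, h) or ('or', g, h); constants are 0/1."""
            if f[0] == 'var':
                v = rho[f[1]]
                return f if v is None else v
            if f[0] == 'not':
                g = simplify(f[1], rho)
                return (1 - g) if g in (0, 1) else ('not', g)
            g, h = simplify(f[1], rho), simplify(f[2], rho)
            if f[0] == 'and':
                if 0 in (g, h): return 0
                if g == 1: return h
                if h == 1: return g
                return ('and', g, h)
            if 1 in (g, h): return 1                  # remaining case: 'or'
            if g == 0: return h
            if h == 0: return g
            return ('or', g, h)

        def leaves(f):
            """Formula size measured as the number of variable leaves."""
            if f in (0, 1): return 0
            if f[0] == 'var': return 1
            return sum(leaves(g) for g in f[1:])

        if __name__ == '__main__':
            rng = random.Random(0)
            # a small 8-leaf formula with a single negation
            f = ('or', ('or', ('and', ('var', 0), ('not', ('var', 1))),
                              ('and', ('var', 2), ('var', 3))),
                       ('and', ('or', ('var', 4), ('var', 5)),
                               ('or', ('var', 6), ('var', 7))))
            sizes = [leaves(simplify(f, restrict(8, 0.3, rng))) for _ in range(1000)]
            print('original size:', leaves(f), ' average restricted size:', sum(sizes) / len(sizes))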

    Cubic Formula Size Lower Bounds Based on Compositions with Majority

    We define new functions based on the Andreev function and prove that computing them requires formulas of size n^{3}/polylog(n). The functions we consider are generalizations of the Andreev function using compositions with the majority function. Our arguments apply to composing a hard function with any function that agrees with the majority function (or its negation) on the middle slices of the Boolean cube, as well as iterated compositions of such functions. As a consequence, we obtain n^{3}/polylog(n) lower bounds on the (non-monotone) formula size of an explicit monotone function by combining the monotone address function with the majority function.
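
    A toy sketch of what "composition with majority" means here: each address bit of an Andreev-style address function is computed as the majority of a block of inputs, and the address selects a bit of a truth table that is itself part of the input. The function maj_address and all parameters are hypothetical simplifications; the paper's actual functions and middle-slice conditions are more delicate.

        def majority(bits):
            """1 iff strictly more than half of the bits are 1."""
            return int(sum(bits) * 2 > len(bits))

        def maj_address(truth_table, blocks):
            """truth_table: list of 2^k bits; blocks: k blocks of input bits (k address bits)."""
            assert len(truth_table) == 2 ** len(blocks)
            index = 0
            for block in blocks:               # address bit j = MAJ(block j)
                index = (index << 1) | majority(block)
            return truth_table[index]

        if __name__ == '__main__':
            table = [0, 1, 1, 0, 1, 0, 0, 1]             # truth table on k = 3 address bits
            blocks = [[1, 1, 0], [0, 0, 1], [1, 0, 1]]   # majorities give address bits 1, 0, 1
            print(maj_address(table, blocks))            # table[0b101] = table[5] = 0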

    Shrinkage of Decision Lists and DNF Formulas

    We establish nearly tight bounds on the expected shrinkage of decision lists and DNF formulas under the p-random restriction R_p for all values of p in [0,1]. For a function f with domain {0,1}^n, let DL(f) denote the minimum size of a decision list that computes f. We show that E[DL(f|R_p)] <= DL(f)^{log_{2/(1-p)}((1+p)/(1-p))}. For example, this bound is sqrt(DL(f)) when p = sqrt(5) - 2 ≈ 0.24. For Boolean functions f, we obtain the same shrinkage bound with respect to DNF formula size plus 1 (i.e., replacing DL(.) with DNF(.)+1 on both sides of the inequality).
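
    A quick numeric check of the stated exponent log_{2/(1-p)}((1+p)/(1-p)): it evaluates to exactly 1/2 at p = sqrt(5) - 2, matching the sqrt(DL(f)) example, and tends to 1 as p approaches 1. The helper name shrinkage_exponent and the sample values of p are ours.

        import math

        def shrinkage_exponent(p):
            """Exponent in the bound E[DL(f|R_p)] <= DL(f)^exponent."""
            return math.log((1 + p) / (1 - p), 2 / (1 - p))

        if __name__ == '__main__':
            p0 = math.sqrt(5) - 2
            print(p0, shrinkage_exponent(p0))       # ~0.236, exponent ~0.5
            for p in (0.1, 0.24, 0.5, 0.9):
                print(p, shrinkage_exponent(p))     # exponent increases toward 1 as p -> 1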

    Small Bias Requires Large Formulas

    A small-biased function is a randomized function whose distribution of truth-tables is small-biased. We demonstrate that known explicit lower bounds on (1) the size of general Boolean formulas, (2) the size of De Morgan formulas, and (3) correlation against small De Morgan formulas apply to small-biased functions. As a consequence, any strongly explicit small-biased generator is subject to the best-known explicit formula lower bounds in all these models. On the other hand, we give a construction of a small-biased function that is tight with respect to lower bound (1) for the relevant range of parameters. We interpret this construction as a natural-type barrier against substantially stronger lower bounds for general formulas.
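
    For concreteness, a brute-force illustration of the standard eps-bias notion referred to above: a multiset of n-bit strings is eps-biased if every nonzero parity test has advantage at most eps over a fair coin. The function bias, the exhaustive enumeration over parity tests, and the tiny examples are illustrative assumptions only; actual small-biased generators are constructed explicitly, not checked by enumeration.

        from itertools import product

        def bias(strings):
            """Maximum advantage of any nonzero parity test over the given multiset of 0/1 tuples."""
            n = len(strings[0])
            worst = 0.0
            for a in product((0, 1), repeat=n):
                if not any(a):
                    continue                     # skip the trivial all-zero test
                s = sum((-1) ** sum(ai & xi for ai, xi in zip(a, x)) for x in strings)
                worst = max(worst, abs(s) / len(strings))
            return worst

        if __name__ == '__main__':
            uniform = list(product((0, 1), repeat=3))   # the uniform distribution: bias 0
            skewed = [(0, 0, 0), (1, 1, 1)]             # some parity is constant on these: bias 1
            print(bias(uniform), bias(skewed))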

    Algorithms and lower bounds for de Morgan formulas of low-communication leaf gates

    The class FORMULA[s] ∘ G consists of Boolean functions computable by size-s de Morgan formulas whose leaves are any Boolean functions from a class G. We give lower bounds and (SAT, Learning, and PRG) algorithms for FORMULA[n^{1.99}] ∘ G, for classes G of functions with low communication complexity. Let R^{(k)}(G) be the maximum k-party NOF randomized communication complexity of G. We show: (1) The Generalized Inner Product function GIP^k_n cannot be computed in FORMULA[s] ∘ G on more than a 1/2+epsilon fraction of inputs for s = o(n^2 / (k * 4^k * R^{(k)}(G) * log(n/epsilon) * log(1/epsilon))^2). As a corollary, we get an average-case lower bound for GIP^k_n against FORMULA[n^{1.99}] ∘ PTF^{k-1}. (2) There is a PRG of seed length n/2 + O(sqrt(s) * R^{(2)}(G) * log(s/epsilon) * log(1/epsilon)) that epsilon-fools FORMULA[s] ∘ G. For FORMULA[s] ∘ LTF, we get the better seed length O(n^{1/2} * s^{1/4} * log(n) * log(n/epsilon)). This gives the first non-trivial PRG (with seed length o(n)) for intersections of n half-spaces in the regime where epsilon <= 1/n. (3) There is a randomized 2^{n-t}-time #SAT algorithm for FORMULA[s] ∘ G, where t = Omega(n / (sqrt(s) * log^2(s) * R^{(2)}(G)))^{1/2}. In particular, this implies a nontrivial #SAT algorithm for FORMULA[n^{1.99}] ∘ LTF. (4) The Minimum Circuit Size Problem is not in FORMULA[n^{1.99}] ∘ XOR. On the algorithmic side, we show that FORMULA[n^{1.99}] ∘ XOR can be PAC-learned in time 2^{O(n/log n)}.
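
    A small bookkeeping sketch of why the FORMULA[s] ∘ LTF seed length above is sublinear at s = n^{1.99}: dropping the log factors, n^{1/2} * s^{1/4} = n^{1/2 + 1.99/4} = n^{0.9975}. The helper below only does this exponent arithmetic (a deliberate simplification that ignores the log factors); it is not an implementation of the PRG.

        def ltf_seed_exponent(size_exponent):
            """If s = n^size_exponent, the n^{1/2} * s^{1/4} term is n^(1/2 + size_exponent/4)."""
            return 0.5 + size_exponent / 4.0

        if __name__ == '__main__':
            for c in (1.0, 1.5, 1.99, 2.0):
                e = ltf_seed_exponent(c)
                print(f"s = n^{c}: seed length ~ n^{e} ({'sublinear' if e < 1 else 'not sublinear'})")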

    Improved Exact Algorithms for Mildly Sparse Instances of Max SAT

    We present improved exponential-time exact algorithms for Max SAT. Our algorithms run in time of the form O(2^{(1-mu(c))n}) for instances with n variables and m = cn clauses. In this setting, there are three incomparable currently best algorithms: a deterministic exponential space algorithm with mu(c) = 1/O(c * log(c)) due to Dantsin and Wolpert [SAT 2006], a randomized polynomial space algorithm with mu(c) = 1/O(c * log^3(c)) and a deterministic polynomial space algorithm with mu(c) = 1/O(c^2 * log^2(c)) due to Sakai, Seto and Tamaki [Theory Comput. Syst., 2015]. Our first result is a deterministic polynomial space algorithm with mu(c) = 1/O(c * log(c)) that achieves the previous best time complexity without exponential space or randomization. Furthermore, this algorithm can handle instances with exponentially large weights and hard constraints. The previous algorithms and our deterministic polynomial space algorithm run super-polynomially faster than 2^n only if m = O(n^2). Our second set of results consists of deterministic exponential space algorithms for Max SAT with mu(c) = 1/O((c * log(c))^{2/3}) and for Max 3-SAT with mu(c) = 1/O(c^{1/2}) that run super-polynomially faster than 2^n when m = o(n^{5/2}/log^{5/2}(n)) and m = o(n^3/log^2(n)), respectively.
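
    A rough comparison of the quoted savings functions mu(c), ignoring the constants hidden inside the O(.) notation (so only growth rates, not absolute values, are meaningful). Larger mu(c) means a faster 2^{(1-mu(c))n} running time at clause density c = m/n; the labels follow the attributions in the abstract.

        import math

        def savings(c):
            """The four mu(c) shapes quoted above, with the O(.) constants set to 1."""
            lg = math.log2(c)
            return {
                'c * log(c) (exp. space, and the new poly space alg.)': 1 / (c * lg),
                'c * log^3(c) (randomized poly space)': 1 / (c * lg ** 3),
                'c^2 * log^2(c) (previous det. poly space)': 1 / (c ** 2 * lg ** 2),
                '(c * log(c))^{2/3} (new exp. space alg.)': 1 / (c * lg) ** (2 / 3),
            }

        if __name__ == '__main__':
            for c in (4, 16, 256):
                print('c =', c)
                for name, mu in savings(c).items():
                    print(f'  mu(c) ~ 1/({name}): {mu:.5f}')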

    Pseudorandomness from Shrinkage

    One powerful theme in complexity theory and pseudorandomness in the past few decades has been the use of lower bounds to give pseudorandom generators (PRGs). However, the general results using this hardness vs. randomness paradigm suffer a quantitative loss in parameters, and hence do not give nontrivial implications for models where we don't know super-polynomial lower bounds but do know lower bounds of a fixed polynomial. We show that when such lower bounds are proved using random restrictions, we can construct PRGs which are essentially best possible without in turn improving the lower bounds. More specifically, say that a circuit family has shrinkage exponent Γ if a random restriction leaving a p fraction of variables unset shrinks the size of any circuit in the family by a factor of p^{Γ+o(1)}. Our PRG uses a seed of length s^{1/(Γ+1)+o(1)} to fool circuits in the family of size s. By using this generic construction, we get PRGs with polynomially small error for the following classes of circuits of size s and with the following seed lengths: 1. For de Morgan formulas, seed length s^{1/3+o(1)}; 2. For formulas over an arbitrary basis, seed length s^{1/2+o(1)}; 3. For read-once de Morgan formulas, seed length s^{0.234...}; 4. For branching programs of size s, seed length s^{1/2+o(1)}. The previous best PRGs known for these classes used seeds of length bigger than n/2 to output n bits, and worked only when the size s = O(n) [BPW11].
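
    All of the listed seed lengths are instances of the single bound s^{1/(Γ+1)+o(1)}; the snippet below just evaluates 1/(Γ+1) for the shrinkage exponent behind each item. Taking Γ = 2 for de Morgan formulas, Γ = 1 for formulas over an arbitrary basis and for branching programs, and the commonly cited Γ = 1/log2(sqrt(5)-1) ≈ 3.27 for read-once de Morgan formulas (an assumption consistent with the stated s^{0.234...}) reproduces the exponents 1/3, 1/2, 1/2 and 0.234.

        import math

        def seed_exponent(gamma):
            """Exponent in the seed length s^{1/(gamma+1)+o(1)}."""
            return 1 / (gamma + 1)

        if __name__ == '__main__':
            classes = {
                'de Morgan formulas (Gamma = 2)': 2.0,
                'formulas over an arbitrary basis (Gamma = 1)': 1.0,
                'read-once de Morgan formulas (Gamma = 1/log2(sqrt(5)-1))': 1 / math.log2(math.sqrt(5) - 1),
                'branching programs (Gamma = 1)': 1.0,
            }
            for name, gamma in classes.items():
                print(f'{name}: seed length ~ s^{seed_exponent(gamma):.3f}')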