
    Algorithms and lower bounds for de Morgan formulas of low-communication leaf gates

    The class $FORMULA[s] \circ \mathcal{G}$ consists of Boolean functions computable by size-$s$ de Morgan formulas whose leaves are any Boolean functions from a class $\mathcal{G}$. We give lower bounds and (SAT, Learning, and PRG) algorithms for $FORMULA[n^{1.99}] \circ \mathcal{G}$, for classes $\mathcal{G}$ of functions with low communication complexity. Let $R^{(k)}(\mathcal{G})$ be the maximum $k$-party number-on-forehead (NOF) randomized communication complexity of $\mathcal{G}$. We show:

    (1) The Generalized Inner Product function $GIP^k_n$ cannot be computed in $FORMULA[s] \circ \mathcal{G}$ on more than a $1/2 + \varepsilon$ fraction of inputs for
$$s = o\!\left( \frac{n^2}{\left(k \cdot 4^k \cdot R^{(k)}(\mathcal{G}) \cdot \log(n/\varepsilon) \cdot \log(1/\varepsilon)\right)^{2}} \right).$$
    As a corollary, we get an average-case lower bound for $GIP^k_n$ against $FORMULA[n^{1.99}] \circ PTF^{k-1}$.

    (2) There is a PRG of seed length $n/2 + O\left(\sqrt{s} \cdot R^{(2)}(\mathcal{G}) \cdot \log(s/\varepsilon) \cdot \log(1/\varepsilon)\right)$ that $\varepsilon$-fools $FORMULA[s] \circ \mathcal{G}$. For $FORMULA[s] \circ LTF$, we get the better seed length $O\left(n^{1/2} \cdot s^{1/4} \cdot \log(n) \cdot \log(n/\varepsilon)\right)$. This gives the first non-trivial PRG (with seed length $o(n)$) for intersections of $n$ half-spaces in the regime where $\varepsilon \leq 1/n$.

    (3) There is a randomized $2^{n-t}$-time #SAT algorithm for $FORMULA[s] \circ \mathcal{G}$, where $t = \Omega\left(\frac{n}{\sqrt{s} \cdot \log^2(s) \cdot R^{(2)}(\mathcal{G})}\right)^{1/2}$. In particular, this implies a nontrivial #SAT algorithm for $FORMULA[n^{1.99}] \circ LTF$.

    (4) The Minimum Circuit Size Problem is not in $FORMULA[n^{1.99}] \circ XOR$. On the algorithmic side, we show that $FORMULA[n^{1.99}] \circ XOR$ can be PAC-learned in time $2^{O(n/\log n)}$.
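    For item (1), the hard function $GIP^k_n$ is, in its standard definition, the XOR over $n$ coordinates of the AND of the $k$ players' bits: $GIP^k_n(x_1, \dots, x_k) = \bigoplus_{i=1}^{n} \bigwedge_{j=1}^{k} x_j[i]$. Below is a minimal Python sketch of that standard definition; the function and variable names are ours, for illustration only.

```python
# Minimal sketch of the Generalized Inner Product GIP^k_n, assuming the
# standard definition: XOR over the n coordinates of the AND of the k
# parties' bits. Names here are illustrative, not from the paper.

def gip(rows):
    """rows: list of k bit-strings (lists of 0/1), each of length n."""
    n = len(rows[0])
    acc = 0
    for i in range(n):
        conj = all(row[i] == 1 for row in rows)  # AND across the k parties
        acc ^= int(conj)                         # XOR into the running parity
    return acc

# Example: k = 3 parties, n = 4 coordinates.
x1 = [1, 0, 1, 1]
x2 = [1, 1, 1, 0]
x3 = [1, 0, 1, 1]
print(gip([x1, x2, x3]))  # columns 0 and 2 are all-ones -> 1 ^ 1 = 0
```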

    Efficient Algorithms for Privately Releasing Marginals via Convex Relaxations

    Consider a database of $n$ people, each represented by a bit-string of length $d$ corresponding to the settings of $d$ binary attributes. A $k$-way marginal query is specified by a subset $S$ of $k$ attributes and an $|S|$-dimensional binary vector $\beta$ specifying their values. The result of this query is a count of the number of people in the database whose attribute vector, restricted to $S$, agrees with $\beta$. Privately releasing approximate answers to a set of $k$-way marginal queries is one of the most important and well-motivated problems in differential privacy. Information-theoretically, the error complexity of marginal queries is well understood: the per-query additive error is known to be at least $\Omega(\min\{\sqrt{n}, d^{k/2}\})$ and at most $\tilde{O}(\min\{\sqrt{n}\, d^{1/4}, d^{k/2}\})$. However, no polynomial-time algorithm with error complexity as low as the information-theoretic upper bound is known for small $n$. In this work we present a polynomial-time algorithm that, for any distribution on marginal queries, achieves average error at most $\tilde{O}(\sqrt{n}\, d^{\lceil k/2 \rceil / 4})$. This error bound matches the best known information-theoretic upper bounds for $k = 2$, and it improves over previous work on efficiently releasing marginals when $k$ is small and error $o(n)$ is desirable. Using private boosting, we are also able to give nearly matching worst-case error bounds. Our algorithms are based on the geometric techniques of Nikolov, Talwar, and Zhang. The main new ingredients are convex relaxations and careful use of the Frank-Wolfe algorithm for constrained convex minimization. To design our relaxations, we rely on the Grothendieck inequality from functional analysis.
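    To make the query model concrete, here is a minimal Python sketch of exact (non-private) evaluation of a single $k$-way marginal query; the database layout and all names are illustrative assumptions, not from the paper.

```python
# Minimal sketch: exact (non-private) evaluation of a k-way marginal query.
# A database is a list of n rows, each a length-d tuple of 0/1 attributes.
# The query (S, beta) counts rows agreeing with beta on the attributes in S.
# Names and layout are illustrative assumptions, not from the paper.

def marginal_query(db, S, beta):
    """db: list of length-d 0/1 tuples; S: list of attribute indices;
    beta: list of target values, with len(beta) == len(S)."""
    return sum(
        all(row[j] == b for j, b in zip(S, beta))
        for row in db
    )

# Example: d = 4 attributes, a 2-way marginal on attributes {0, 2}, beta = (1, 0).
db = [
    (1, 0, 0, 1),
    (1, 1, 0, 0),
    (0, 0, 1, 1),
]
print(marginal_query(db, S=[0, 2], beta=[1, 0]))  # rows 0 and 1 match -> 2
```

    The abstract also names the Frank-Wolfe (conditional gradient) method as the optimization workhorse. The following is a textbook sketch of generic Frank-Wolfe over a polytope accessed through a linear minimization oracle, not the paper's specific relaxation; all names are ours.

```python
import numpy as np

# Textbook Frank-Wolfe (conditional gradient): minimize a smooth convex f over
# a polytope P given only a linear minimization oracle (LMO). This sketches
# the generic technique the abstract names, not the paper's relaxation.

def frank_wolfe(grad_f, lmo, x0, num_iters=100):
    """grad_f: gradient of the objective; lmo: g -> argmin_{v in P} <g, v>;
    x0: feasible starting point."""
    x = x0.astype(float)
    for t in range(num_iters):
        g = grad_f(x)
        v = lmo(g)                       # best vertex in the gradient direction
        gamma = 2.0 / (t + 2.0)          # standard step-size schedule
        x = (1 - gamma) * x + gamma * v  # convex combination stays feasible
    return x

# Example: minimize ||x - c||^2 over the probability simplex.
c = np.array([0.7, 0.1, 0.2])
grad = lambda x: 2 * (x - c)
def simplex_lmo(g):
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0                # simplex vertex minimizing <g, v>
    return v
x = frank_wolfe(grad, simplex_lmo, np.ones(3) / 3)
print(np.round(x, 3))                    # approaches c, which lies in the simplex
```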