
    Boolean functions with small spectral norm

    Suppose that f is a Boolean function from F_2^n to {0,1} with spectral norm (that is, the sum of the absolute values of its Fourier coefficients) at most M. We show that f may be expressed as a ±1 combination of at most 2^(2^(O(M^4))) indicator functions of subgroups of F_2^n.
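    To make the quantity in the statement concrete, here is a minimal brute-force sketch (not from the paper; the function names are illustrative) that computes the Fourier coefficients of a function f : F_2^n -> {0,1} and its spectral norm, and evaluates it on the indicator of a subgroup, which has spectral norm 1.

```python
# Minimal brute-force sketch (illustrative only): Fourier coefficients of
# f : F_2^n -> {0,1} and the spectral norm, i.e. the sum of their absolute values.
from itertools import product

def fourier_coefficients(f, n):
    """Return {gamma: f_hat(gamma)} with f_hat(gamma) = 2^{-n} * sum_x f(x)*(-1)^{<gamma,x>}."""
    points = list(product((0, 1), repeat=n))
    coeffs = {}
    for gamma in points:
        s = sum(f(x) * (-1) ** sum(g & xi for g, xi in zip(gamma, x)) for x in points)
        coeffs[gamma] = s / 2 ** n
    return coeffs

def spectral_norm(f, n):
    return sum(abs(c) for c in fourier_coefficients(f, n).values())

# Example: the indicator of a subgroup (here, a coordinate subspace) has
# spectral norm exactly 1, consistent with the structure described above.
if __name__ == "__main__":
    subspace_indicator = lambda x: 1 if (x[0] == 0 and x[1] == 0) else 0
    print(spectral_norm(subspace_indicator, n=4))  # prints 1.0
```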

    Polynomial Threshold Functions, AC^0 Functions and Spectral Norms

    The class of polynomial-threshold functions is studied using harmonic analysis, and the results are used to derive lower bounds related to AC^0 functions. A Boolean function is polynomial-threshold if it can be represented as the sign function of a sparse polynomial (one that consists of a polynomial number of terms). The main result is that polynomial-threshold functions can be characterized by means of their spectral representation. In particular, it is proved that a Boolean function whose L_1 spectral norm is bounded by a polynomial in n is a polynomial-threshold function, and that a Boolean function whose L_∞^(-1) spectral norm is not bounded by a polynomial in n is not a polynomial-threshold function. Some results for AC^0 functions are derived.
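    As a small illustration of the definition (a hedged sketch, not the paper's construction), the following checks by brute force that a Boolean function is sign-represented by a given sparse polynomial; OR is a standard example of a polynomial-threshold function.

```python
# Hedged illustration: verify by brute force that f : {0,1}^n -> {0,1} is
# sign-represented by a polynomial p, i.e. f(x) = 1 exactly when p(x) > 0.
from itertools import product

def sign_represents(f, p, n):
    return all((f(x) == 1) == (p(x) > 0) for x in product((0, 1), repeat=n))

# Example: OR on n bits is the sign of the sparse polynomial x_1 + ... + x_n - 1/2,
# so it is a polynomial-threshold function with n + 1 terms.
n = 5
f_or = lambda x: 1 if any(x) else 0
p = lambda x: sum(x) - 0.5
print(sign_represents(f_or, p, n))  # True
```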

    Fourier sparsity, spectral norm, and the Log-rank conjecture

    We study Boolean functions with sparse Fourier coefficients or small spectral norm, and show their applications to the Log-rank Conjecture for XOR functions f(x ⊕ y), a fairly large class of functions including well-studied ones such as Equality and Hamming Distance. The rank of the communication matrix M_f for such functions is exactly the Fourier sparsity of f. Let d be the F_2-degree of f and D^CC(f) stand for the deterministic communication complexity of f(x ⊕ y). We show that:
    1. D^CC(f) = O(2^{d^2/2} log^{d-2} ||\hat f||_1). In particular, the Log-rank conjecture holds for XOR functions with constant F_2-degree.
    2. D^CC(f) = O(d ||\hat f||_1) = O(sqrt(rank(M_f)) log rank(M_f)).
    We obtain our results through a degree-reduction protocol based on a variant of polynomial rank, and actually conjecture that its communication cost is already log^{O(1)} rank(M_f). The above bounds also hold for the parity decision tree complexity of f, a measure that is no less than the communication complexity (up to a factor of 2). Along the way we also show several structural results about Boolean functions with small F_2-degree or small spectral norm, which could be of independent interest.
    For functions f with constant F_2-degree: 1) f can be written as the sum of quasi-polynomially many indicator functions of subspaces with ± signs, improving the previous doubly exponential upper bound of Green and Sanders; 2) being sparse in the Fourier domain is polynomially equivalent to having small parity decision tree complexity; 3) f depends only on polylog ||\hat f||_1 linear functions of the input variables.
    For functions f with small spectral norm: 1) there is an affine subspace with co-dimension O(||\hat f||_1) on which f is constant; 2) there is a parity decision tree with depth O(||\hat f||_1 log ||\hat f||_0).
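    The fact stated above that rank(M_f) equals the Fourier sparsity of f can be checked numerically; the following hedged sketch (illustrative, not the paper's protocol) does so for a small example.

```python
# Hedged sketch: for an XOR function F(x, y) = f(x XOR y), the real rank of the
# communication matrix M_f equals the Fourier sparsity of f. Check on a small f.
import numpy as np
from itertools import product

n = 3
points = list(product((0, 1), repeat=n))

def f(x):
    # example: majority on 3 bits (1 iff at least two coordinates are 1)
    return 1 if sum(x) >= 2 else 0

# Communication matrix of the XOR function: M[x, y] = f(x XOR y)
M = np.array([[f(tuple(a ^ b for a, b in zip(x, y))) for y in points] for x in points])

def fourier_sparsity(f, n, tol=1e-9):
    # number of nonzero Fourier coefficients of f over {0,1}^n (brute force)
    count = 0
    for gamma in points:
        s = sum(f(x) * (-1) ** sum(g & xi for g, xi in zip(gamma, x)) for x in points)
        if abs(s / 2 ** n) > tol:
            count += 1
    return count

print(np.linalg.matrix_rank(M), fourier_sparsity(f, n))  # the two numbers agree (5 for this f)
```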

    Categorical invariance and structural complexity in human concept learning

    An alternative account of human concept learning based on an invariance measure of the categorical stimulus is proposed. The categorical invariance model (CIM) characterizes the degree of structural complexity of a Boolean category as a function of its inherent degree of invariance and its cardinality or size. To do this we introduce a mathematical framework based on the notion of a Boolean differential operator on Boolean categories that generates the degrees of invariance (i.e., the logical manifold) of the category with respect to its dimensions. Using this framework, we propose that the structural complexity of a Boolean category is inversely proportional to its degree of categorical invariance and directly proportional to its cardinality or size. Consequently, complexity and invariance notions are formally unified to account for concept learning difficulty. Beyond developing the above unifying mathematical framework, the CIM is significant in that: (1) it precisely predicts the key learning difficulty ordering of the SHJ [Shepard, R. N., Hovland, C. L., & Jenkins, H. M. (1961). Learning and memorization of classifications. Psychological Monographs: General and Applied, 75(13), 1-42] Boolean category types consisting of three binary dimensions and four positive examples; (2) it is, in general, a good quantitative predictor of the degree of learning difficulty of a large class of categories (in particular, the 41 category types studied by Feldman [Feldman, J. (2000). Minimization of Boolean complexity in human concept learning. Nature, 407, 630-633]); (3) it is, in general, a good quantitative predictor of parity effects for this large class of categories; (4) it does all of the above without free parameters; and (5) it is cognitively plausible (e.g., cognitively tractable).
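    The abstract does not spell out the operator, but one plausible reading of the per-dimension degrees of invariance (the "logical manifold") is the fraction of category members whose membership survives flipping each dimension; the sketch below is an illustration under that assumption, not the paper's exact model.

```python
# Rough, hedged sketch of one reading of the invariance idea (an assumption,
# not the paper's definition): for a Boolean category C over n binary
# dimensions, the invariance along dimension i is the fraction of members
# that remain in C when bit i is flipped.
def invariance_profile(category, n):
    members = set(category)
    profile = []
    for i in range(n):
        stays = sum(1 for x in members
                    if tuple(b ^ 1 if j == i else b for j, b in enumerate(x)) in members)
        profile.append(stays / len(members))
    return profile

# Example: an SHJ Type-I category over 3 binary dimensions (all four members
# share the value of the first dimension), so it is fully invariant under
# flips of dimensions 2 and 3 but not dimension 1.
category = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]
print(invariance_profile(category, 3))  # [0.0, 1.0, 1.0]
```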

    Sub-linear Upper Bounds on Fourier dimension of Boolean Functions in terms of Fourier sparsity

    We prove that the Fourier dimension of any Boolean function with Fourier sparsity s is at most O(s^{2/3}). Our proof method yields an improved bound of \widetilde{O}(\sqrt{s}) assuming a conjecture of Tsang et al. [tsang], that for every Boolean function of sparsity s there is an affine subspace of F_2^n of co-dimension O(polylog s) restricted to which the function is constant. This conjectured bound is tight up to poly-logarithmic factors, as the Fourier dimension and sparsity of the address function are quadratically separated. We obtain these bounds by observing that the Fourier dimension of a Boolean function is equivalent to its non-adaptive parity decision tree complexity, and then bounding the latter.
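    Both quantities in the separation can be computed by brute force for small functions; the sketch below (illustrative only) computes Fourier sparsity and Fourier dimension, with the address function as the example mentioned in the abstract.

```python
# Hedged sketch: Fourier sparsity is the number of nonzero Fourier coefficients;
# Fourier dimension is the F_2-dimension of the span of the characters in the support.
from itertools import product

def fourier_support(f, n, tol=1e-9):
    points = list(product((0, 1), repeat=n))
    support = []
    for gamma in points:
        s = sum(f(x) * (-1) ** sum(g & xi for g, xi in zip(gamma, x)) for x in points)
        if abs(s / 2 ** n) > tol:
            support.append(gamma)
    return support

def f2_rank(vectors):
    # Gaussian elimination over F_2 on the support vectors
    rows = [list(v) for v in vectors]
    rank, col = 0, 0
    while rows and col < len(rows[0]):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is not None:
            rows[rank], rows[pivot] = rows[pivot], rows[rank]
            for r in range(len(rows)):
                if r != rank and rows[r][col]:
                    rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
            rank += 1
        col += 1
    return rank

# Example: the address function with k = 2 address bits and 2^k target bits;
# the address bits select which target bit is output.
k = 2
def address(x):
    addr = x[0] * 2 + x[1]
    return x[2 + addr]

support = fourier_support(address, k + 2 ** k)
print(len(support), f2_rank(support))  # Fourier sparsity vs. Fourier dimension
```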

    Negative weights make adversaries stronger

    The quantum adversary method is one of the most successful techniques for proving lower bounds on quantum query complexity. It gives optimal lower bounds for many problems, has applications to classical complexity through formula-size lower bounds, and is versatile, with equivalent formulations in terms of weight schemes, eigenvalues, and Kolmogorov complexity. All these formulations rely on the principle that if an algorithm successfully computes a function then, in particular, it is able to distinguish between inputs which map to different values. We present a stronger version of the adversary method which goes beyond this principle to make explicit use of the stronger condition that the algorithm actually computes the function. This new method, which we call ADV+-, has all the advantages of the old: it is a lower bound on bounded-error quantum query complexity, its square is a lower bound on formula size, and it behaves well with respect to function composition. Moreover, ADV+- is always at least as large as the adversary bound ADV, and we show an example of a monotone function for which ADV+-(f) = Omega(ADV(f)^1.098). We also give examples showing that ADV+- does not face limitations of ADV such as the certificate complexity barrier and the property testing barrier.
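    For intuition, the following hedged numerical sketch evaluates the spectral form of the adversary quantity for one fixed adversary matrix (the example and names are illustrative); the adversary bound ADV is the maximum of this ratio over all valid nonnegative matrices, and ADV+- additionally allows negative entries.

```python
# Hedged sketch: for a symmetric matrix Gamma with Gamma[x, y] = 0 whenever
# f(x) = f(y), the ratio ||Gamma|| / max_i ||Gamma o D_i|| (entrywise product,
# D_i[x, y] = 1 iff x_i != y_i) is the spectral-adversary quantity for Gamma.
import numpy as np
from itertools import product

def adversary_value(gamma, n, points):
    idx = {x: i for i, x in enumerate(points)}
    worst = 0.0
    for i in range(n):
        gi = np.array([[gamma[idx[x], idx[y]] if x[i] != y[i] else 0.0
                        for y in points] for x in points])
        worst = max(worst, np.linalg.norm(gi, 2))
    return np.linalg.norm(gamma, 2) / worst

# Example: OR on 2 bits with a 0/1 adversary matrix connecting the all-zero
# input to the two inputs of Hamming weight one; the ratio is sqrt(2).
n = 2
points = list(product((0, 1), repeat=n))
f = lambda x: int(any(x))
gamma = np.zeros((len(points), len(points)))
for x in points:
    for y in points:
        if f(x) != f(y) and sum(a != b for a, b in zip(x, y)) == 1:
            gamma[points.index(x), points.index(y)] = 1.0
print(adversary_value(gamma, n, points))  # ~1.414
```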

    Lower Bounds on Quantum Query Complexity

    Shor's and Grover's famous quantum algorithms for factoring and searching show that quantum computers can solve certain computational problems significantly faster than any classical computer. We discuss here what quantum computers cannot do, and specifically how to prove limits on their computational power. We cover the main known techniques for proving lower bounds, and exemplify and compare the methods.