
    Spectral Norm of Symmetric Functions

    The spectral norm of a Boolean function $f:\{0,1\}^n \to \{-1,1\}$ is the sum of the absolute values of its Fourier coefficients. This quantity provides useful upper and lower bounds on the complexity of a function in areas such as learning theory, circuit complexity, and communication complexity. In this paper, we give a combinatorial characterization for the spectral norm of symmetric functions. We show that the logarithm of the spectral norm is of the same order of magnitude as $r(f)\log(n/r(f))$, where $r(f) = \max\{r_0, r_1\}$, and $r_0$ and $r_1$ are the smallest integers less than $n/2$ such that $f(x)$ or $f(x) \cdot \mathrm{parity}(x)$ is constant for all $x$ with $\sum x_i \in [r_0, n-r_1]$. We mention some applications to the decision tree and communication complexity of symmetric functions.
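    For orientation, the two quantities the result relates can be written out explicitly. The notation $\|\hat{f}\|_1$ for the spectral norm is a common convention, not taken from the abstract itself, and the display below is a paraphrase of the stated characterization, not a verbatim theorem statement.

```latex
% Spectral norm: sum of absolute values of the Fourier coefficients of f.
\[
  \|\hat{f}\|_1 \;=\; \sum_{S \subseteq [n]} \bigl|\hat{f}(S)\bigr|
\]
% Characterization for symmetric f, up to constant factors:
\[
  \log \|\hat{f}\|_1 \;=\; \Theta\!\bigl(r(f)\,\log(n/r(f))\bigr),
  \qquad r(f) = \max\{r_0, r_1\}.
\]
```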

    Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization

    Stochastic convex optimization, in which the objective is the expectation of a random convex function, is an important and widely used method with numerous applications in machine learning, statistics, operations research, and other areas. We study the complexity of stochastic convex optimization given only statistical query (SQ) access to the objective function. We show that well-known and popular first-order iterative methods can be implemented using only statistical queries. For many cases of interest, we derive nearly matching upper and lower bounds on the estimation (sample) complexity, including linear optimization in the most general setting. We then present several consequences for machine learning, differential privacy, and proving concrete lower bounds on the power of convex optimization–based methods. The key ingredients of our work are SQ algorithms and lower bounds for estimating the mean vector of a distribution over vectors supported on a convex body in $\mathbb{R}^d$. This natural problem has not been previously studied, and we show that our solutions can be used to get substantially improved SQ versions of Perceptron and other online algorithms for learning halfspaces.
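    To make the SQ access model concrete, here is a minimal Python sketch of coordinate-wise mean vector estimation using only statistical queries. It illustrates the access model, not the paper's algorithm: the simulated oracle, the function names, and the assumption that coordinates lie in [-1, 1] are all ours.

```python
import numpy as np

def sq_oracle(samples, phi, tau, rng):
    """Simulated statistical query (SQ) oracle.

    Returns E[phi(x)] up to additive tolerance tau; here the oracle is
    modeled by an empirical average plus noise bounded by tau.
    """
    value = float(np.mean([phi(x) for x in samples]))
    return value + rng.uniform(-tau, tau)

def estimate_mean_sq(samples, d, tau, rng):
    """Coordinate-wise mean estimation using only SQ access.

    Each query asks for the expectation of one coordinate, which is a
    valid bounded query when coordinates lie in [-1, 1].
    """
    return np.array([
        sq_oracle(samples, lambda x, i=i: x[i], tau, rng)
        for i in range(d)
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n, tau = 5, 10_000, 0.01
    true_mean = rng.uniform(-0.5, 0.5, size=d)
    # Vectors supported on a bounded set (a box inside [-1, 1]^d).
    samples = np.clip(rng.normal(true_mean, 0.1, size=(n, d)), -1.0, 1.0)
    est = estimate_mean_sq(samples, d, tau, rng)
    print("max coordinate error:", float(np.max(np.abs(est - true_mean))))
```

    One query per coordinate recovers the mean to within tau per coordinate, hence roughly tau times sqrt(d) in Euclidean norm; the abstract's question is how much better one can do when the distribution is supported on a general convex body.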