    Spectral norm of random tensors

    We show that the spectral norm of a random $n_1 \times n_2 \times \cdots \times n_K$ tensor (or higher-order array) scales as $O\left(\sqrt{\left(\sum_{k=1}^{K} n_k\right)\log(K)}\right)$ under some sub-Gaussian assumption on the entries. The proof is based on a covering number argument. Since the spectral norm is dual to the tensor nuclear norm (the tightest convex relaxation of the set of rank-one tensors), the bound implies that the convex relaxation yields sample complexity that is linear in (the sum of) the number of dimensions, which is much smaller than other recently proposed convex relaxations of tensor rank that use unfolding.
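
    A minimal numerical sketch of the quantity being bounded, assuming NumPy: the spectral norm of a tensor is approximated by alternating (higher-order) power iteration, a standard heuristic that can stall at a local optimum and hence only lower-bounds the true norm, and the estimate is compared against the $\sqrt{(\sum_k n_k)\log(K)}$ rate with the unspecified constant set to 1. The function name and the sizes are our choices, not the paper's.

```python
import numpy as np

def tensor_spectral_norm(T, iters=100, seed=0):
    # Approximate max over unit vectors u_1,...,u_K of <T, u_1 x ... x u_K>
    # by alternating power iteration (a heuristic lower bound on the norm).
    rng = np.random.default_rng(seed)
    us = [rng.standard_normal(n) for n in T.shape]
    us = [u / np.linalg.norm(u) for u in us]
    for _ in range(iters):
        for k in range(T.ndim):
            v = T
            for j in range(T.ndim - 1, -1, -1):  # contract every mode but k
                if j != k:
                    v = np.tensordot(v, us[j], axes=(j, 0))
            us[k] = v / np.linalg.norm(v)
    val = T
    for j in range(T.ndim - 1, -1, -1):          # evaluate the multilinear form
        val = np.tensordot(val, us[j], axes=(j, 0))
    return abs(float(val))

sizes = (40, 30, 20)                              # hypothetical dimensions
T = np.random.default_rng(1).standard_normal(sizes)
bound = np.sqrt(sum(sizes) * np.log(len(sizes)))  # the stated rate, constant = 1
print(tensor_spectral_norm(T), bound)
```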

    Spectral Norm of Symmetric Functions

    The spectral norm of a Boolean function $f:\{0,1\}^n \to \{-1,1\}$ is the sum of the absolute values of its Fourier coefficients. This quantity provides useful upper and lower bounds on the complexity of a function in areas such as learning theory, circuit complexity, and communication complexity. In this paper, we give a combinatorial characterization of the spectral norm of symmetric functions. We show that the logarithm of the spectral norm is of the same order of magnitude as $r(f)\log(n/r(f))$, where $r(f) = \max\{r_0, r_1\}$, and $r_0$ and $r_1$ are the smallest integers less than $n/2$ such that $f(x)$ or $f(x)\cdot\mathrm{parity}(x)$ is constant for all $x$ with $\sum x_i \in [r_0, n-r_1]$. We mention some applications to the decision tree and communication complexity of symmetric functions.
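
    As a sketch of the quantity characterized above, assuming the full $2^n$ truth table fits in memory: the spectral norm can be computed exactly with a fast Walsh-Hadamard transform. The function wht_spectral_norm and the MAJORITY example are our illustration, not code from the paper.

```python
import numpy as np

def wht_spectral_norm(vals):
    # Sum of |Fourier coefficients| of a function on {0,1}^n, given its
    # 2^n truth-table values, via an in-place fast Walsh-Hadamard transform.
    a = np.array(vals, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h] = x + y
            a[i + h:i + 2 * h] = x - y
        h *= 2
    return np.abs(a).sum() / len(a)  # f_hat(S) = 2^{-n} * sum_x f(x) * chi_S(x)

# MAJORITY on n = 5 bits, a symmetric function mapping into {-1, 1}.
n = 5
maj = [1 if bin(x).count("1") > n // 2 else -1 for x in range(2 ** n)]
print(wht_spectral_norm(maj))
```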

    Spectral Norm Regularization for Improving the Generalizability of Deep Learning

    We investigate the generalizability of deep learning through its sensitivity to input perturbation. We hypothesize that high sensitivity to perturbations of the data degrades performance on that data. To reduce this sensitivity, we propose a simple and effective regularization method, referred to as spectral norm regularization, which penalizes the high spectral norm of weight matrices in neural networks. We provide supporting evidence for this hypothesis by experimentally confirming that models trained with spectral norm regularization generalize better than models trained with other baseline methods.
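
    The abstract does not spell out the penalty, so the following is a hedged PyTorch sketch of one common form of spectral norm regularization: the largest singular value of each weight matrix is estimated by a few steps of power iteration and its square is added to the training loss. The function name, iteration count, and exact penalty form are our assumptions.

```python
import torch

def spectral_norm_penalty(model, n_iters=3):
    # Sum of squared spectral norms of all 2-D weight matrices in `model`,
    # each estimated by power iteration on the fixed current weights.
    penalty = 0.0
    for W in model.parameters():
        if W.ndim != 2:
            continue
        with torch.no_grad():  # u, v are treated as constants for the gradient
            u = torch.randn(W.shape[0], device=W.device)
            for _ in range(n_iters):
                v = W.t() @ u
                v = v / (v.norm() + 1e-12)
                u = W @ v
                u = u / (u.norm() + 1e-12)
        sigma = torch.dot(u, W @ v)  # leading singular value, differentiable in W
        penalty = penalty + sigma ** 2
    return penalty

# Hypothetical usage inside a training step, with lam a regularization weight:
# loss = criterion(model(x), y) + 0.5 * lam * spectral_norm_penalty(model)
```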

    Boolean functions with small spectral norm

    Suppose that $f$ is a Boolean function from $\mathbb{F}_2^n$ to $\{0,1\}$ with spectral norm (that is, the sum of the absolute values of its Fourier coefficients) at most $M$. We show that $f$ may be expressed as a $\pm 1$ combination of at most $2^{2^{O(M^4)}}$ indicator functions of subgroups of $\mathbb{F}_2^n$.
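
    To see why subgroup indicators are natural building blocks here: the indicator function of a subgroup (linear subspace) of $\mathbb{F}_2^n$ has spectral norm exactly 1. A small check, reusing wht_spectral_norm from the sketch above; the particular subspace is our example.

```python
# Indicator of the subgroup {x : bits 2 and 3 are zero} = span{e_0, e_1} in F_2^4.
n = 4
f = [1.0 if (x >> 2) == 0 else 0.0 for x in range(2 ** n)]
print(wht_spectral_norm(f))  # prints 1.0: subgroup indicators have spectral norm 1
```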

    Circulant matrices: norm, powers, and positivity

    In their recent paper "The spectral norm of a Horadam circulant matrix", Merikoski, Haukkanen, Mattila and Tossavainen study under which conditions the spectral norm of a general real circulant matrix $\mathbf{C}$ equals the modulus of its row/column sum. We improve on their sufficient condition until we have a necessary one. Our results connect the above problem to the positivity of sufficiently high powers of the matrix $\mathbf{C}^\top\mathbf{C}$. We then generalize the result to complex circulant matrices.
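
    A small NumPy sketch of the phenomenon studied, with example values of our choosing: for a real circulant matrix with positive entries the spectral norm equals the modulus of the common row/column sum, and both can be read off the DFT of the first column (a circulant is normal, so its spectral norm is its largest absolute eigenvalue).

```python
import numpy as np

n = 4
c = np.array([3.0, 1.0, 0.5, 1.0])                       # first column, entries > 0
C = np.stack([np.roll(c, j) for j in range(n)], axis=1)  # circulant built from c

spec = np.linalg.norm(C, 2)   # spectral norm = largest singular value
row_sum = abs(C[0].sum())     # modulus of the common row/column sum
print(spec, row_sum)          # equal here

# Eigenvalues of a circulant are the DFT of c; the row sum is the value at
# frequency 0, which dominates when the entries are nonnegative.
print(np.abs(np.fft.fft(c)).max())
```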