
    Covering $\mathrm{Irrep}(S_n)$ With Tensor Products and Powers

    We study when a tensor product of irreducible representations of the symmetric group $S_n$ contains all irreducibles as subrepresentations; we say such a tensor product covers $\mathrm{Irrep}(S_n)$. Our results show that this behavior is typical. We first give a general criterion for a tensor product to have this property. Using this criterion, we show that the tensor product of a constant number of random irreducibles covers $\mathrm{Irrep}(S_n)$ asymptotically almost surely. We also consider, for a fixed irreducible representation, the degree of tensor power needed to cover $\mathrm{Irrep}(S_n)$. We show that the simple lower bound based on dimension is tight up to a universal constant factor for every irreducible representation, as was recently conjectured by Liebeck, Shalev, and Tiep.
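    As a concrete illustration (not the paper's method), covering can be checked by brute force for small $n$: the multiplicity of $\nu$ in a tensor product is the character inner product $\langle \prod_i \chi_{\lambda_i}, \chi_\nu\rangle$, which is computable from the character table. A minimal sketch for $S_4$, with its standard character table hardcoded:

```python
# Character table of S_4 on conjugacy classes e, (12), (12)(34), (123), (1234).
SIZES = [1, 6, 3, 8, 6]               # class sizes; they sum to |S_4| = 24
CHI = {
    "[4]":     [1,  1,  1,  1,  1],   # trivial
    "[3,1]":   [3,  1, -1,  0, -1],   # standard
    "[2,2]":   [2,  0,  2, -1,  0],
    "[2,1,1]": [3, -1, -1,  0,  1],
    "[1^4]":   [1, -1,  1,  1, -1],   # sign
}

def mult(factors, nu):
    """Multiplicity of irrep nu in the tensor product of `factors`:
    (1/24) * sum over classes c of |c| * prod_i chi_{lambda_i}(c) * chi_nu(c)."""
    total = 0
    for k, size in enumerate(SIZES):
        term = size * CHI[nu][k]
        for lam in factors:
            term *= CHI[lam][k]
        total += term
    return total // 24                # always a nonnegative integer

def covers(factors):
    """True iff the tensor product of `factors` contains every irrep of S_4."""
    return all(mult(factors, nu) > 0 for nu in CHI)

print(covers(["[3,1]", "[3,1]"]))             # False: the sign rep is missing
print(covers(["[3,1]", "[3,1]", "[3,1]"]))    # True: the cube of [3,1] covers
```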

    Free Energy Subadditivity for Symmetric Random Hamiltonians

    We consider a random Hamiltonian $H:\Sigma\to\mathbb{R}$ defined on a compact space $\Sigma$ that admits a transitive action by a compact group $\mathcal{G}$. When the law of $H$ is $\mathcal{G}$-invariant, we show its expected free energy relative to the unique $\mathcal{G}$-invariant probability measure on $\Sigma$ obeys a subadditivity property in the law of $H$ itself. The bound is often tight for weak disorder and relates free energies at different temperatures when $H$ is a Gaussian process. Many examples are discussed, including branching random walk, several spin glasses, random constraint satisfaction problems, and the random field Ising model. We also provide a generalization to quantum Hamiltonians, with applications to the quantum SK and SYK models.
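    A minimal Monte Carlo sanity check of the subadditivity statement, under assumed choices: $\Sigma = \mathbb{Z}_N$ with the cyclic shift action, $H_1, H_2$ independent stationary Gaussian fields built by circular convolution, and free energy relative to the uniform measure. The kernels and sample counts are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, TRIALS = 64, 4000

def stationary_field(kernel):
    """One sample of a centered Gaussian field on Z_N whose law is invariant
    under cyclic shifts: circular convolution of white noise with `kernel`."""
    g = rng.standard_normal(N)
    return np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(g)))

def free_energy(H):
    """Free energy log of the mean of exp(H) under the uniform measure."""
    return np.log(np.mean(np.exp(H)))

# Two arbitrary convolution kernels (illustrative choices).
k1 = np.exp(-np.arange(N) / 3.0)
k2 = 0.3 * np.cos(2 * np.pi * np.arange(N) / N)

F1 = F2 = F12 = 0.0
for _ in range(TRIALS):
    H1, H2 = stationary_field(k1), stationary_field(k2)
    F1 += free_energy(H1)
    F2 += free_energy(H2)
    F12 += free_energy(H1 + H2)   # H1 + H2 is again shift-invariant in law

# Subadditivity: E F(H1 + H2) <= E F(H1) + E F(H2), up to Monte Carlo error.
print(F12 / TRIALS, "<=", (F1 + F2) / TRIALS)
```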

    Approximate Ground States of Hypercube Spin Glasses are Near Corners

    We show that with probability exponentially close to $1$, all near-maximizers of any mean-field mixed $p$-spin glass Hamiltonian on the hypercube $[-1,1]^N$ are near a corner. This confirms a recent conjecture of Gamarnik and Jagannath. The proof is elementary and generalizes to arbitrary polytopes with $e^{o(N^2)}$ faces.
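    An illustrative experiment (not the paper's proof) for the pure 2-spin case: run projected gradient ascent on $H(x) = x^\top A x/\sqrt{N}$ over the solid cube and check how close the resulting local near-maximizer sits to a corner. All parameters are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
G = rng.standard_normal((N, N))
A = (G + G.T) / 2                        # GOE-like symmetric couplings

def H(x):
    """Pure 2-spin Hamiltonian H(x) = x^T A x / sqrt(N)."""
    return x @ A @ x / np.sqrt(N)

# Projected gradient ascent over the solid cube [-1, 1]^N.
x = rng.uniform(-0.1, 0.1, N)
for _ in range(2000):
    grad = 2 * A @ x / np.sqrt(N)
    x = np.clip(x + 0.01 * grad, -1.0, 1.0)

corner = np.where(x >= 0, 1.0, -1.0)     # nearest corner of the cube
print("coords not pinned to +-1:", np.mean(np.abs(np.abs(x) - 1.0) > 1e-3))
print("|x - corner| / sqrt(N):  ", np.linalg.norm(x - corner) / np.sqrt(N))
print("H(x) vs H(corner):       ", H(x), H(corner))
```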

    Incentivizing Exploration with Linear Contexts and Combinatorial Actions

    We advance the study of incentivized bandit exploration, in which arm choices are viewed as recommendations and are required to be Bayesian incentive compatible. Recent work has shown, under certain independence assumptions, that after collecting enough initial samples the popular Thompson sampling algorithm becomes incentive compatible. We give an analog of this result for linear bandits, where the independence of the prior is replaced by a natural convexity condition. This opens up the possibility of efficient and regret-optimal incentivized exploration in high-dimensional action spaces. In the semi-bandit model, we also improve the sample complexity for the pre-Thompson sampling phase of initial data collection.
    Comment: International Conference on Machine Learning (ICML) 202
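    For reference, a self-contained sketch of Thompson sampling in a Gaussian linear bandit, the algorithm whose incentive properties are analyzed here; it shows only the standard conjugate-update loop with illustrative parameters, not the incentive-compatibility machinery:

```python
import numpy as np

rng = np.random.default_rng(2)
d, K, T, sigma = 5, 20, 500, 0.5          # dimension, arms, rounds, noise std
actions = rng.standard_normal((K, d))
actions /= np.linalg.norm(actions, axis=1, keepdims=True)
theta_true = rng.standard_normal(d)

prec = np.eye(d)     # posterior precision (prior N(0, I))
b = np.zeros(d)      # precision-weighted mean accumulator

for t in range(T):
    cov = np.linalg.inv(prec)
    theta_s = rng.multivariate_normal(cov @ b, cov)  # sample from the posterior
    a = actions[np.argmax(actions @ theta_s)]        # act greedily on the sample
    r = a @ theta_true + sigma * rng.standard_normal()
    prec += np.outer(a, a) / sigma**2                # conjugate Gaussian update
    b += r * a / sigma**2

print("posterior-mean error:",
      np.linalg.norm(np.linalg.inv(prec) @ b - theta_true))
```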

    On Size-Independent Sample Complexity of ReLU Networks

    We study the sample complexity of learning ReLU neural networks from the point of view of generalization. Given norm constraints on the weight matrices, a common approach is to estimate the Rademacher complexity of the associated function class. Previously Golowich-Rakhlin-Shamir (2020) obtained a bound independent of the network size (scaling with a product of Frobenius norms), except for a factor of the square-root depth. We give a refinement which often has no explicit depth dependence at all.
    Comment: 4 pages
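    A schematic comparison of the two bound shapes described above, with universal constants omitted and a toy network assumed; only the scaling in the Frobenius-norm product and the $\sqrt{\text{depth}}$ factor is meant to be faithful:

```python
import numpy as np

rng = np.random.default_rng(3)
L, m, n, B = 8, 100, 10_000, 1.0   # depth, width, sample count, input-norm bound

# Toy ReLU network weights, normalized so each ||W_j||_F is O(1).
weights = [rng.standard_normal((m, m)) / m for _ in range(L)]

P = float(np.prod([np.linalg.norm(W) for W in weights]))  # product of Frobenius norms

bound_grs        = B * P * np.sqrt(L) / np.sqrt(n)  # shape with a sqrt(depth) factor
bound_depth_free = B * P / np.sqrt(n)               # shape with no explicit depth
print(P, bound_grs, bound_depth_free)
```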