Covering With Tensor Products and Powers
We study when a tensor product of irreducible representations of the
symmetric group contains all irreducibles as subrepresentations; we say
such a tensor product covers. Our results show that this behavior
is typical. We first give a general criterion for such a tensor product to have
this property. Using this criterion we show that the tensor product of a
constant number of random irreducibles covers asymptotically
almost surely. We also consider, for a fixed irreducible representation, the
degree of tensor power needed to cover. We show that the simple
lower bound based on dimension is tight up to a universal constant factor for
every irreducible representation, as was recently conjectured by Liebeck,
Shalev, and Tiep.
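As a concrete toy instance of covering (our own illustration, not the paper's criterion), character theory shows that for S_3 the tensor square of the 2-dimensional standard representation already contains every irreducible: multiplicities are given by the character inner product.

```python
from fractions import Fraction

# Character table of S_3 over its conjugacy classes
# (identity, transpositions, 3-cycles), with class sizes 1, 3, 2.
class_sizes = [1, 3, 2]
group_order = sum(class_sizes)  # 6

irreps = {
    "trivial":  [1, 1, 1],
    "sign":     [1, -1, 1],
    "standard": [2, 0, -1],
}

def multiplicity(chi, psi):
    """Multiplicity of irrep psi inside a representation with
    character chi, via the character inner product <chi, psi>."""
    total = sum(n * a * b for n, a, b in zip(class_sizes, chi, psi))
    return Fraction(total, group_order)

# The character of standard (x) standard is the pointwise product.
std = irreps["standard"]
tensor_sq = [a * a for a in std]  # [4, 0, 1]

mults = {name: multiplicity(tensor_sq, chi) for name, chi in irreps.items()}
print(mults)
# Every multiplicity is >= 1, so this tensor square covers.
assert all(m >= 1 for m in mults.values())
```

Here a single tensor square of one fixed irreducible suffices; the abstract's results concern the analogous phenomenon for large symmetric groups.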
Free Energy Subadditivity for Symmetric Random Hamiltonians
We consider a random Hamiltonian defined on a compact
space that admits a transitive action by a compact group.
When the law of the Hamiltonian is invariant under this action, we show that
its expected free energy relative to the unique invariant probability measure
on the space obeys a subadditivity property in the law of the Hamiltonian
itself. The bound is often tight for weak disorder and relates free energies
at different temperatures when the Hamiltonian is a Gaussian process. Many
examples are discussed, including branching
random walk, several spin glasses, random constraint satisfaction problems, and
the random field Ising model. We also provide a generalization to quantum
Hamiltonians with applications to the quantum SK and SYK models.
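A minimal numerical sketch in the spirit of this subadditivity (a toy setup of our own, not the paper's framework): take the space to be n points with the symmetric group acting transitively on them, use an i.i.d. Gaussian field as an invariant random Hamiltonian, and compare the expected free energy of a sum of two independent copies with the sum of their individual expected free energies.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sites = 64     # transitive space: n points with the uniform measure
sigma = 4.0      # disorder strength (low-temperature regime)
n_trials = 4000  # Monte Carlo samples of the disorder

def free_energy(H):
    """Log partition function relative to the uniform probability
    measure on the n_sites points."""
    return np.log(np.mean(np.exp(H), axis=-1))

# Two independent invariant Hamiltonians: i.i.d. Gaussian fields
# (invariant in law under permutations of the sites).
H1 = sigma * rng.standard_normal((n_trials, n_sites))
H2 = sigma * rng.standard_normal((n_trials, n_sites))

lhs = free_energy(H1 + H2).mean()                      # E F(H1 + H2)
rhs = free_energy(H1).mean() + free_energy(H2).mean()  # E F(H1) + E F(H2)

print(f"E F(H1+H2) = {lhs:.3f} <= {rhs:.3f} = E F(H1) + E F(H2)")
assert lhs <= rhs
```

With weak disorder (small sigma) the two sides come close, matching the remark that the bound is often tight in that regime; at the strong disorder chosen here the inequality is strict by a visible margin.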
Approximate Ground States of Hypercube Spin Glasses are Near Corners
We show that with probability exponentially close to 1, all near-maximizers
of any mean-field mixed p-spin glass Hamiltonian on the hypercube
are near a corner. This confirms a recent conjecture of Gamarnik and Jagannath.
The proof is elementary and generalizes to arbitrary polytopes with
faces.
Incentivizing Exploration with Linear Contexts and Combinatorial Actions
We advance the study of incentivized bandit exploration, in which arm choices
are viewed as recommendations and are required to be Bayesian incentive
compatible. Recent work has shown under certain independence assumptions that
after collecting enough initial samples, the popular Thompson sampling
algorithm becomes incentive compatible. We give an analog of this result for
linear bandits, where the independence of the prior is replaced by a natural
convexity condition. This opens up the possibility of efficient and
regret-optimal incentivized exploration in high-dimensional action spaces. In
the semibandit model, we also improve the sample complexity for the
pre-Thompson sampling phase of initial data collection.
Comment: International Conference on Machine Learning (ICML) 202
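To make the setting concrete, here is a minimal Bayesian linear-bandit loop with Thompson sampling (a generic illustration with invented arms and parameters, not the paper's incentive-compatible algorithm): the learner samples a parameter from its Gaussian posterior, recommends the arm that is best for that sample, and updates the posterior on the observed reward.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear bandit (all values invented for illustration).
arms = np.array([[1.0, 0.0], [0.0, 1.0], [0.8, 0.6]])  # arm features
theta_true = np.array([1.0, -0.2])                      # unknown parameter
noise_sd = 0.3
T = 400

# Gaussian prior N(0, I) on theta; conjugate updates keep the
# posterior Gaussian, tracked via precision A and vector b.
A = np.eye(2)                  # posterior precision matrix
b = np.zeros(2)                # posterior mean is A^{-1} b
counts = np.zeros(len(arms), dtype=int)

for t in range(T):
    cov = np.linalg.inv(A)
    mean = cov @ b
    theta_sample = rng.multivariate_normal(mean, cov)   # posterior sample
    a = int(np.argmax(arms @ theta_sample))             # best arm for sample
    r = arms[a] @ theta_true + noise_sd * rng.standard_normal()
    A += np.outer(arms[a], arms[a]) / noise_sd**2       # Bayesian linear
    b += arms[a] * r / noise_sd**2                      # regression update
    counts[a] += 1

print(counts)  # the truly best arm (index 0) should dominate
```

The incentive-compatibility question studied in the abstract is about when such posterior-sampling recommendations are ones a rational agent would willingly follow; the loop above shows only the sampling mechanics.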
On Size-Independent Sample Complexity of ReLU Networks
We study the sample complexity of learning ReLU neural networks from the
point of view of generalization. Given norm constraints on the weight matrices,
a common approach is to estimate the Rademacher complexity of the associated
function class. Previously, Golowich-Rakhlin-Shamir (2020) obtained a bound
independent of the network size (scaling with a product of Frobenius norms)
except for a factor of the square-root depth. We give a refinement which often
has no explicit depth-dependence at all.
Comment: 4 pages
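The size-independence can be seen directly in the norm-based capacity term: if each weight matrix is rescaled to a fixed Frobenius norm, the product of Frobenius norms controlling such Rademacher bounds is identical for a narrow and a very wide network (a toy numerical check with invented layer sizes, not the paper's bound).

```python
import numpy as np

rng = np.random.default_rng(2)

def frob_norm_product(widths, target_norm=1.5):
    """Product of Frobenius norms over a ReLU network's weight
    matrices, with every matrix rescaled to norm `target_norm`."""
    prod = 1.0
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        W = rng.standard_normal((d_out, d_in))
        W *= target_norm / np.linalg.norm(W)  # fix the Frobenius norm
        prod *= np.linalg.norm(W)             # default: Frobenius norm
    return prod

narrow = frob_norm_product([4, 8, 8, 1])    # small hidden layers
wide = frob_norm_product([4, 512, 512, 1])  # much wider hidden layers

print(narrow, wide)  # both equal 1.5**3: no dependence on width
assert abs(narrow - wide) < 1e-9
```

What does depend on architecture in the earlier bounds is the extra square-root-depth factor multiplying this product; the refinement described above often removes that explicit depth term.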
Property: Eminent Domain and Restoring Access to Parcels Isolated by Highway Reconstruction: Finding the Public Use—State Ex. Rel. Commissioner of Transportation v. Kettleson