On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition
We establish connections between the problem of learning a two-layer neural
network and tensor decomposition. We consider a model with feature vectors
$x \in \mathbb{R}^d$, $r$ hidden units with weights $\{w_i\}_{1 \le i \le r}$ and output $y \in \mathbb{R}$, i.e.,
$y = \sum_{i=1}^{r} \sigma(\langle w_i, x \rangle)$, with activation functions
given by low-degree polynomials. In particular, if $\sigma(x) = a_0 + a_1 x + a_3 x^3$, we prove that no
polynomial-time learning algorithm can outperform the trivial predictor that
assigns to each example the response variable $\mathbb{E}(y)$, when $d^{3/2} \ll r \ll d^2$.
Our conclusion holds for a
`natural data distribution', namely standard Gaussian feature vectors
$x \sim \mathsf{N}(0, I_d)$, and output distributed according to a two-layer neural network
with random isotropic weights, and under a certain complexity-theoretic
assumption on tensor decomposition. Roughly speaking, we assume that no
polynomial-time algorithm can substantially outperform current methods for
tensor decomposition based on the sum-of-squares hierarchy.
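To see why tensor decomposition enters (a standard moment computation under the stated
model, sketched here only for illustration and not taken verbatim from the paper), consider
the pure cubic part of the activation, $\sigma(t) = t^3$, unit-norm weights $\|w_i\|_2 = 1$, and
$x \sim \mathsf{N}(0, I_d)$. Decomposing $x$ along each $w_i$ and using Gaussian moments gives, entrywise,

$\big(\mathbb{E}[y\, x^{\otimes 3}]\big)_{abc} = 6 \sum_{i=1}^{r} (w_i)_a (w_i)_b (w_i)_c
  + 3 \sum_{i=1}^{r} \big[ (w_i)_a \delta_{bc} + (w_i)_b \delta_{ac} + (w_i)_c \delta_{ab} \big],
  \qquad \mathbb{E}[y\, x] = 3 \sum_{i=1}^{r} w_i .$

Subtracting the identity-aligned terms (which are determined by $\mathbb{E}[y\,x]$) leaves
$6 \sum_{i=1}^{r} w_i^{\otimes 3}$, so estimating the weights from data amounts to decomposing a
rank-$r$ symmetric third-order tensor in $d$ dimensions; sum-of-squares methods are currently
known to handle such random decompositions only up to rank of order roughly $d^{3/2}$, which
matches the lower end of the hardness regime above.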
We also prove generalizations of this statement for higher degree polynomial
activations, and non-random weight vectors. Remarkably, several existing
algorithms for learning two-layer networks with rigorous guarantees are based
on tensor decomposition. Our results support the idea that this is indeed the
core computational difficulty in learning such networks, under the stated
generative model for the data. As a side result, we show that under this model
learning the network requires accurate learning of its weights, a property that
does not hold in a more general setting.

Comment: 41 pages, 1 figure
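For concreteness, here is a minimal sketch (illustrative code, not from the paper) of the
generative model described above together with the trivial predictor it is compared against;
the pure cubic activation, the sample size, and the choice $d = 50$, $r = 400$ (so that
$d^{3/2} < r < d^2$) are assumptions made only for this example.

    import numpy as np

    rng = np.random.default_rng(0)
    d, r, n = 50, 400, 10_000  # dimension, hidden units, samples; d^{3/2} < r < d^2 (hard regime)

    # Random isotropic weights: uniform on the unit sphere in R^d.
    W = rng.standard_normal((r, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)

    def sigma(t):
        return t ** 3  # illustrative cubic activation

    # Standard Gaussian feature vectors and network output y = sum_i sigma(<w_i, x>).
    X = rng.standard_normal((n, d))
    y = sigma(X @ W.T).sum(axis=1)

    # Trivial predictor: assign every example the (estimated) mean response E(y).
    y_trivial = np.full_like(y, y.mean())
    print("squared risk of the trivial predictor:", np.mean((y - y_trivial) ** 2))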