Deep learning has achieved impressive success in a variety of fields because
of its strong generalization ability. However, it remains challenging to
quickly train a neural network with a large number of layers. Existing works
use locality-sensitive hashing or space-partitioning data structures to reduce
the cost of each training iteration. In this work, we instead accelerate the
per-iteration computation from the perspective of the input data points.
Specifically, for a two-layer fully connected neural network, when the
training data have certain special properties, e.g., Kronecker structure, each
iteration can be completed in time sublinear in the data dimension.
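
To make the Kronecker-structure claim concrete, the sketch below illustrates
one way such structure can yield inner products in time sublinear in the data
dimension; it demonstrates the underlying linear-algebra identity, not the
paper's actual algorithm (whose details are beyond this abstract). The
dimensions d1 and d2, the rank k, and the factored weight state U, V are
assumptions made for this example: when a data point factors as x = a ⊗ b, the
identity ⟨vec(W), a ⊗ b⟩ = aᵀWb lets one evaluate the inner product from
low-rank factors of W in O(k(d1 + d2)) time, sublinear in d = d1·d2.

```python
import numpy as np

# Full data dimension d = d1 * d2.
d1, d2 = 64, 64
rng = np.random.default_rng(0)

# A data point with Kronecker structure: x = a ⊗ b lives in R^{d1*d2}
# but is described by only d1 + d2 numbers.
a, b = rng.standard_normal(d1), rng.standard_normal(d2)

# Hypothetical factored weight state W = U @ V.T with small rank k
# (an assumption for this sketch, e.g. accumulated rank-one updates).
k = 4
U, V = rng.standard_normal((d1, k)), rng.standard_normal((d2, k))

# Naive inner product <w, x> materializes both d-dimensional vectors,
# costing Omega(d1 * d2) time and memory.
x = np.kron(a, b)
w = (U @ V.T).reshape(-1)  # row-major vec(W) matches kron(a, b) ordering
naive = w @ x

# Structured evaluation: <vec(W), a ⊗ b> = a^T W b = (a^T U)(V^T b),
# costing O(k * (d1 + d2)) -- sublinear in d = d1 * d2 for small k.
fast = (a @ U) @ (V.T @ b)

assert np.allclose(naive, fast)
print(naive, fast)
```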