Dimensionality Reduction in Deep Learning via Kronecker Multi-layer Architectures
Deep learning using neural networks is an effective technique for generating
models of complex data. However, training such models can be expensive when
networks have large model capacity resulting from a large number of layers and
nodes. For training in such a computationally prohibitive regime,
dimensionality reduction techniques ease the computational burden and allow
more robust networks to be implemented. We propose a novel form of such
dimensionality reduction via a new deep learning architecture based on fast
matrix multiplication with a Kronecker product decomposition; in particular, our
network construction can be viewed as a Kronecker product-induced
sparsification of an "extended" fully connected network. Analysis and practical
examples show that this architecture allows a neural network to be trained and
implemented with a significant reduction in computational time and resources,
while achieving an error level similar to that of a traditional feedforward
neural network.

Comment: 24 pages, 29 figures
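A minimal NumPy sketch of the core trick such a Kronecker-factored layer relies on: applying A ⊗ B to a vector without ever materializing the Kronecker product, via the row-major identity (A ⊗ B) vec(X) = vec(A X Bᵀ). The sizes, the plain two-factor decomposition, and the name kron_layer are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: the layer maps R^(p*q) -> R^(m*n).
m, n, p, q = 8, 8, 16, 16

# Two Kronecker factors standing in for a dense (m*n) x (p*q) weight matrix.
A = rng.standard_normal((m, p))
B = rng.standard_normal((n, q))
x = rng.standard_normal(p * q)

def kron_layer(A, B, x):
    """Apply (A kron B) to x without forming the Kronecker product.

    Row-major identity: (A kron B) @ x == (A @ X @ B.T).ravel()
    with X = x.reshape(p, q). Cost falls from O(mn*pq) multiply-adds
    for the dense product to O(mpq + mqn).
    """
    p, q = A.shape[1], B.shape[1]
    X = x.reshape(p, q)
    return (A @ X @ B.T).ravel()

# Sanity check against the explicitly materialized Kronecker product.
assert np.allclose(kron_layer(A, B, x), np.kron(A, B) @ x)

# Storage: m*p + n*q factor entries vs. m*n*p*q dense entries.
print(A.size + B.size, "parameters vs.", m * n * p * q)
```

Under these toy sizes the factored layer stores 256 parameters in place of 16,384 dense weights, which is the kind of reduction in parameters and arithmetic that the abstract attributes to the proposed architecture.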