The excellent performance of deep neural networks has enabled us to solve
several automation problems, opening an era of autonomous devices. However,
current deep network architectures are heavy, with millions of parameters, and
require billions of floating-point operations. Several methods have been
proposed to compress a pre-trained deep network, reducing its memory footprint
and, possibly, its computation. Instead of compressing a pre-trained network, in
this work, we propose a generic neural network layer structure employing
multilinear projection as the primary feature extractor. The proposed
architecture requires several times less memory than a traditional
Convolutional Neural Network (CNN), while inheriting the design
principles of a CNN. In addition, the proposed architecture is equipped with
two computation schemes that enable computation reduction or scalability.
Experimental results demonstrate the effectiveness of our compact projection,
which outperforms a traditional CNN while requiring far fewer parameters.
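
As an illustration of the core idea (a sketch, not the paper's exact layer), a multilinear projection replaces one large linear map with a sequence of small mode-wise products, Y = X x_1 W_1 x_2 W_2 x_3 W_3, with one factor matrix per tensor mode. The NumPy sketch below uses shapes and weight names of our own choosing:

import numpy as np

def mode_n_product(tensor, matrix, mode):
    # Mode-n product: multiply `tensor` by `matrix` along axis `mode`.
    t = np.moveaxis(tensor, mode, 0)
    out = matrix @ t.reshape(t.shape[0], -1)
    return np.moveaxis(out.reshape(matrix.shape[0], *t.shape[1:]), 0, mode)

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 32, 16))   # input feature tensor (height, width, channels); illustrative
W1 = rng.standard_normal((8, 32))       # projects the height mode
W2 = rng.standard_normal((8, 32))       # projects the width mode
W3 = rng.standard_normal((4, 16))       # projects the channel mode

Y = X
for mode, W in enumerate((W1, W2, W3)):
    Y = mode_n_product(Y, W, mode)
print(Y.shape)                          # (8, 8, 4)

Under these assumed shapes, the three factor matrices hold 8*32 + 8*32 + 4*16 = 576 parameters, whereas an unfactored linear map from the 32*32*16 input to the 8*8*4 output would need over four million; this is the kind of parameter saving the abstract refers to.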