Speeding up Convolutional Neural Networks with Low Rank Expansions
The focus of this paper is speeding up the evaluation of convolutional neural
networks. While delivering impressive results across a range of computer vision
and machine learning tasks, these networks are computationally demanding,
limiting their deployability. Convolutional layers generally consume the bulk
of the processing time, and so in this work we present two simple schemes for
drastically speeding up these layers. This is achieved by exploiting
cross-channel or filter redundancy to construct a low rank basis of filters
that are rank-1 in the spatial domain. Our methods are architecture agnostic,
and can be easily applied to existing CPU and GPU convolutional frameworks for
tuneable speedup performance. We demonstrate this with a real world network
designed for scene text character recognition, showing a possible 2.5x speedup
with no loss in accuracy, and 4.5x speedup with less than 1% drop in accuracy,
while still achieving state-of-the-art results on standard benchmarks.
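The core idea — filters that are rank-1 in the spatial domain — can be sketched in a few lines. A k x k filter of rank 1 factors via SVD into the outer product of a column and a row vector, so one 2-D convolution (O(k^2) per output pixel) becomes two 1-D convolutions (O(2k)). The sketch below is a minimal illustration in NumPy, not the authors' implementation; the filter and helper names are invented for the example:

```python
import numpy as np

def rank1_filter_approx(filt):
    """Best rank-1 factorization of a 2-D filter via SVD:
    filt ~ outer(col, row)."""
    U, s, Vt = np.linalg.svd(filt)
    col = U[:, 0] * np.sqrt(s[0])   # vertical 1-D filter
    row = Vt[0, :] * np.sqrt(s[0])  # horizontal 1-D filter
    return col, row

def conv2d_valid(img, filt):
    """Naive 'valid' 2-D correlation, for demonstration only."""
    kh, kw = filt.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * filt)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
# A genuinely rank-1 (separable) example filter: Sobel-like kernel.
filt = np.outer([1.0, 2.0, 1.0], [-1.0, 0.0, 1.0])

col, row = rank1_filter_approx(filt)
# Two cheap 1-D passes: horizontal with `row`, then vertical with `col`.
tmp = conv2d_valid(img, row[np.newaxis, :])
out_sep = conv2d_valid(tmp, col[:, np.newaxis])
out_full = conv2d_valid(img, filt)
print(np.allclose(out_sep, out_full))  # True: separable passes match full conv
```

For filters that are only approximately rank-1, the same factorization gives the best rank-1 approximation in the Frobenius norm, which is the kind of trade-off the abstract's tuneable speedup refers to.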
A Unified Approximation Framework for Compressing and Accelerating Deep Neural Networks
Deep neural networks (DNNs) have achieved significant success in a variety of
real-world applications, e.g., image classification. However, the sheer number
of parameters in these networks limits their efficiency, due to the large
model size and intensive computation. To address this issue, various
approximation techniques have been investigated, which seek a lightweight
network with little performance degradation in exchange for a smaller model
size or faster inference. Both low-rankness and sparsity are appealing
properties for the network approximation. In this paper we propose a unified
framework to compress the convolutional neural networks (CNNs) by combining
these two properties, while taking the nonlinear activation into consideration.
Each layer in the network is approximated by the sum of a structured sparse
component and a low-rank component, which is formulated as an optimization
problem. Then, an extended version of alternating direction method of
multipliers (ADMM) with guaranteed convergence is presented to solve the
relaxed optimization problem. Experiments are carried out on VGG-16, AlexNet
and GoogLeNet with large image classification datasets. The results outperform
previous work in terms of accuracy degradation, compression rate and speedup
ratio. The proposed method compresses the model substantially (up to a 4.9x
reduction in parameters) with little or no loss in accuracy.