AntiDote: Attention-based Dynamic Optimization for Neural Network Runtime Efficiency
Convolutional Neural Networks (CNNs) achieve strong cognitive performance at the expense of considerable computational load. To relieve this load, many optimization works reduce model redundancy by identifying and removing insignificant model components, through techniques such as weight sparsification and filter pruning. However, these works evaluate only a component's static significance, derived from internal parameter information, and ignore its dynamic interaction with external inputs. Since feature activations vary per input, component significance changes dynamically, and static methods can therefore achieve only sub-optimal results. In this work, we propose a comprehensive dynamic CNN optimization framework based on the neural network attention mechanism, which includes (1) testing-phase channel and column feature-map pruning, and (2) training-phase optimization by targeted dropout. Such a dynamic optimization framework has several benefits: (1) it accurately identifies and aggressively removes per-input feature redundancy by considering the model-input interaction; (2) it maximally removes feature-map redundancy across multiple dimensions thanks to its multi-dimension flexibility; (3) the training-testing co-optimization favors dynamic pruning and helps maintain model accuracy even at very high feature pruning ratios. Extensive experiments show that our method brings 37.4% to 54.5% FLOPs reduction with negligible accuracy drop on various test networks.

Comment: Accepted at DATE 2020 (Best Paper Nomination).
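To make the two framework components concrete, the following PyTorch sketch illustrates the general idea under stated assumptions: a squeeze-and-excitation-style branch scores channels per input, the lowest-scoring fraction is zeroed at test time, and a targeted-dropout variant drops only low-attention channels during training. `DynamicChannelPruner`, `targeted_channel_dropout`, the SE-style scoring, and all hyperparameters are hypothetical illustrations, not AntiDote's published design; column-dimension pruning is omitted for brevity.

```python
import torch
import torch.nn as nn


class DynamicChannelPruner(nn.Module):
    """Per-input dynamic channel pruning (illustrative sketch).

    An SE-style branch scores each channel from the current input's feature
    map; the `prune_ratio` fraction of channels with the lowest scores is
    zeroed for this input only. The scoring branch, names, and hyperparameters
    are assumptions for illustration, not the paper's exact attention design.
    """

    def __init__(self, channels: int, prune_ratio: float = 0.5, reduction: int = 4):
        super().__init__()
        self.prune_ratio = prune_ratio
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                 # squeeze: global spatial context
            nn.Flatten(),                            # (N, C, 1, 1) -> (N, C)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel attention in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        scores = self.score(x)                                   # (N, C)
        k = int(c * self.prune_ratio)                            # channels to drop
        if k == 0:
            return x
        thresh = scores.kthvalue(k, dim=1, keepdim=True).values  # per-sample cutoff
        mask = (scores > thresh).float()                         # keep high-attention channels
        return x * mask.view(n, c, 1, 1)


def targeted_channel_dropout(x: torch.Tensor, scores: torch.Tensor,
                             target_ratio: float = 0.5, drop_prob: float = 0.5) -> torch.Tensor:
    """Training-phase targeted dropout (sketch): only channels inside the
    low-attention target set are dropout candidates, which trains the model
    to tolerate the test-time pruning above. All parameters are assumptions."""
    n, c, _, _ = x.shape
    k = int(c * target_ratio)
    if k == 0:
        return x
    thresh = scores.kthvalue(k, dim=1, keepdim=True).values
    in_target = scores <= thresh                          # low-attention candidates
    drop = in_target & (torch.rand_like(scores) < drop_prob)
    return x * (~drop).float().view(n, c, 1, 1)


if __name__ == "__main__":
    pruner = DynamicChannelPruner(channels=64, prune_ratio=0.5)
    feats = torch.randn(2, 64, 32, 32)
    out = pruner(feats)
    # Number of zeroed channels per sample; unlike static pruning,
    # the pruned set can differ from one input to the next.
    print((out.abs().sum(dim=(2, 3)) == 0).sum(dim=1))
```

The key contrast with static filter pruning is visible in the forward pass: the mask is recomputed from each input's own attention scores, so which channels survive varies per input rather than being fixed once at compression time.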