Model compression has become necessary when applying neural networks (NNs)
to many real-world tasks, which can often accept slightly reduced model
accuracy but impose strict constraints on model complexity. Recently, Knowledge
Distillation, which distills the knowledge from a well-trained, highly complex
teacher model into a compact student model, has been widely used for model
compression. However, under strict constraints on resource cost, it is
quite challenging for the student to achieve performance comparable to the
teacher model, essentially due to the drastically reduced expressiveness of the
compact student model. Inspired by the nature of expressiveness in
neural networks, we propose to equip the compact student model with a
multi-segment activation, which can significantly improve expressiveness at
very little cost. Specifically, we propose a highly efficient
multi-segment activation, called Light Multi-segment Activation (LMA), which
can rapidly produce multiple linear regions with very few parameters by
leveraging statistical information. With LMA, the compact student
model achieves much better performance, both effectively and efficiently,
than a ReLU-equipped model of the same scale. Furthermore, the
proposed method is compatible with other model compression techniques, such as
quantization, so they can be applied jointly for better compression
performance. Experiments with state-of-the-art NN architectures on
real-world tasks demonstrate the effectiveness and extensibility of LMA.
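
As a concrete illustration of the idea, below is a minimal PyTorch sketch of a
multi-segment activation whose segment boundaries are derived from running
input statistics, so that only a handful of per-segment slopes are learned.
The class name, boundary placement, and hyperparameters are illustrative
assumptions for exposition, not the paper's exact LMA formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiSegmentActivation(nn.Module):
    """Illustrative piecewise-linear (multi-segment) activation.

    Interior segment boundaries are placed using running input statistics
    (mean plus multiples of the standard deviation), so only the per-segment
    slopes are learned -- a few scalars shared across the whole layer.
    NOTE: a hypothetical sketch of the general idea, not the exact LMA.
    """

    def __init__(self, num_segments: int = 4, momentum: float = 0.1):
        super().__init__()
        assert num_segments >= 2
        self.momentum = momentum
        # One learnable slope per segment.
        self.slopes = nn.Parameter(torch.ones(num_segments))
        # Fixed offsets (in units of the running std) for the
        # num_segments - 1 interior boundaries, spread symmetrically
        # around the running mean (an assumed placement scheme).
        self.register_buffer(
            "offsets", torch.linspace(-1.0, 1.0, num_segments - 1))
        self.register_buffer("running_mean", torch.zeros(()))
        self.register_buffer("running_std", torch.ones(()))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Track input statistics, batch-norm style.
            with torch.no_grad():
                self.running_mean.lerp_(x.mean(), self.momentum)
                self.running_std.lerp_(x.std(), self.momentum)
        # Boundaries follow the input distribution, so no boundary
        # parameters need to be learned or stored.
        bounds = self.running_mean + self.offsets * self.running_std
        # Continuous piecewise-linear response on a ReLU basis:
        # slope slopes[0] left of all boundaries, switching to
        # slopes[i] after crossing bounds[i - 1].
        out = self.slopes[0] * x
        for i in range(1, len(self.slopes)):
            out = out + (self.slopes[i] - self.slopes[i - 1]) \
                * F.relu(x - bounds[i - 1])
        return out
```

Under these assumptions, a ReLU in the student network would simply be swapped
for this module, adding only `num_segments` scalar parameters per layer while
multiplying the number of linear regions the activation can express.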