Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation
Although recent works on knowledge distillation (KD) have achieved further
improvements by elaborately modeling the decision boundary as posterior
knowledge, their performance still depends on the assumption that the target
network has a powerful capacity (representation ability). In this paper, we
propose a knowledge representing (KR) framework that mainly focuses on
modeling the parameter distribution as prior knowledge. First, we propose a
knowledge aggregation scheme to answer the question of how to represent the
prior knowledge from the teacher network. By aggregating the teacher
network's parameter distribution into a more abstract level, the scheme
alleviates residual accumulation in the deeper layers. Second, to address
the critical question of which prior knowledge matters most for
distillation, we design a sparse recoding penalty that constrains the
student network to learn with penalized gradients. With the proposed
penalty, the student network effectively avoids over-regularization during
knowledge distillation and converges faster. Quantitative experiments show
that the proposed framework achieves state-of-the-art performance even when
the target network lacks the expected capacity. Moreover, the framework is
flexible enough to be combined with other KD methods based on posterior
knowledge.
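
The abstract does not give the exact form of the sparse recoding penalty, so
the following is only a minimal PyTorch sketch of the general idea: a
sparsity-inducing term added to a standard KD loss, so that the student's
gradients are penalized during distillation. Here an L1 term stands in for
the paper's penalty; the names sparse_recoding_penalty and training_step and
the weight lam are illustrative assumptions, not the paper's formulation.

import torch
import torch.nn.functional as F

def sparse_recoding_penalty(param, lam=1e-4):
    # Assumed stand-in for the paper's penalty: an L1 term that drives
    # small weights toward zero, keeping the student's representation sparse.
    return lam * param.abs().sum()

def training_step(student, teacher_logits, inputs, optimizer, lam=1e-4):
    student_logits = student(inputs)
    # Standard KD objective: match the student's output distribution
    # to the teacher's (posterior knowledge).
    kd_loss = F.kl_div(
        F.log_softmax(student_logits, dim=1),
        F.softmax(teacher_logits, dim=1),
        reduction="batchmean",
    )
    # Sparsity term over all student parameters; its gradient is added to
    # the KD gradient, so the student learns with "penalized gradients".
    reg = sum(sparse_recoding_penalty(p, lam) for p in student.parameters())
    loss = kd_loss + reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()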