Grouped Knowledge Distillation for Deep Face Recognition
Compared with feature-based distillation methods, logits distillation relaxes
the requirement that teacher and student networks share consistent feature
dimensions, yet its performance is deemed inferior in face recognition.
One major challenge is that the light-weight student network struggles to fit
the target logits because of its low model capacity and the large number of
identities in face recognition. Therefore, we seek to
probe the target logits to extract the primary knowledge related to face
identity, and discard the others, to make the distillation more achievable for
the student network. Specifically, there is a tail group with near-zero values
in the prediction, containing minor knowledge for distillation. To provide a
clear perspective of its impact, we first partition the logits into two groups,
i.e., Primary Group and Secondary Group, according to the cumulative
probability of the softened prediction. Then, we reorganize the Knowledge
Distillation (KD) loss of grouped logits into three parts, i.e., Primary-KD,
Secondary-KD, and Binary-KD. Primary-KD refers to distilling the primary
knowledge from the teacher, Secondary-KD aims to refine minor knowledge but
increases the difficulty of distillation, and Binary-KD ensures the consistency
of knowledge distribution between teacher and student. We experimentally found
that (1) Primary-KD and Binary-KD are indispensable for KD, and (2)
Secondary-KD is the culprit restricting KD at the bottleneck. Therefore, we
propose a Grouped Knowledge Distillation (GKD) that retains the Primary-KD and
Binary-KD but omits Secondary-KD in the ultimate KD loss calculation. Extensive
experimental results on popular face recognition benchmarks demonstrate the
superiority of the proposed GKD over state-of-the-art methods.

Comment: 9 pages, 2 figures, 7 tables, accepted by AAAI 2023
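To make the grouping and loss decomposition concrete, the following is a minimal PyTorch-style sketch of the idea described in the abstract; it is not the authors' released implementation. The function name gkd_loss, the temperature parameter, and the cumulative-probability threshold cum_prob are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gkd_loss(teacher_logits, student_logits, temperature=4.0, cum_prob=0.9):
    """Illustrative Grouped Knowledge Distillation loss (Primary-KD + Binary-KD).

    `temperature` softens both predictions; `cum_prob` is an assumed
    cumulative-probability threshold that separates the Primary Group
    from the Secondary Group.
    """
    t_prob = F.softmax(teacher_logits / temperature, dim=1)
    s_prob = F.softmax(student_logits / temperature, dim=1)

    # Per sample, mark the highest-probability teacher classes whose cumulative
    # mass stays within `cum_prob` as the Primary Group (always keep the top class).
    sorted_prob, sorted_idx = t_prob.sort(dim=1, descending=True)
    keep_sorted = sorted_prob.cumsum(dim=1) <= cum_prob
    keep_sorted[:, 0] = True
    primary_mask = torch.zeros_like(t_prob, dtype=torch.bool)
    primary_mask.scatter_(1, sorted_idx, keep_sorted)

    eps = 1e-8

    # Primary-KD: KL divergence restricted to the primary classes, with each
    # grouped distribution renormalized to sum to one.
    t_primary = t_prob * primary_mask
    s_primary = s_prob * primary_mask
    t_primary = t_primary / (t_primary.sum(dim=1, keepdim=True) + eps)
    s_primary = s_primary / (s_primary.sum(dim=1, keepdim=True) + eps)
    primary_kd = (t_primary * ((t_primary + eps).log() - (s_primary + eps).log())).sum(dim=1)

    # Binary-KD: KL divergence between the two-way distributions
    # (total primary mass vs. total secondary mass) of teacher and student.
    t_bin = torch.stack([(t_prob * primary_mask).sum(1), (t_prob * ~primary_mask).sum(1)], dim=1)
    s_bin = torch.stack([(s_prob * primary_mask).sum(1), (s_prob * ~primary_mask).sum(1)], dim=1)
    binary_kd = (t_bin * ((t_bin + eps).log() - (s_bin + eps).log())).sum(dim=1)

    # Secondary-KD over the near-zero tail is intentionally omitted, as in GKD.
    return (temperature ** 2) * (primary_kd + binary_kd).mean()
```

In training, such a term would typically be added to the student's usual identity-classification loss, e.g. loss = ce_loss + gkd_loss(teacher(x), student(x)).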