Distillation is an effective knowledge-transfer technique that uses predicted
distributions of a powerful teacher model as soft targets to train a
less-parameterized student model. A pre-trained, high-capacity teacher, however,
is not always available. Recently proposed online variants use the aggregated
intermediate predictions of multiple student models as targets to train each
student model. Although group-derived targets offer a good recipe for
teacher-free distillation, group members quickly become homogenized under simple
aggregation functions, leading to prematurely saturated solutions. In this work, we
propose Online Knowledge Distillation with Diverse peers (OKDDip), which
performs two-level distillation during training with multiple auxiliary peers
and one group leader. In the first-level distillation, each auxiliary peer
holds an individual set of aggregation weights generated with an
attention-based mechanism to derive its own targets from predictions of other
auxiliary peers. Learning from distinct target distributions helps boost
peer diversity, which is essential for effective group-based distillation. The second-level
distillation further transfers the knowledge of the ensemble of
auxiliary peers to the group leader, i.e., the model used for
inference. Experimental results show that the proposed framework consistently
outperforms state-of-the-art approaches without increasing
training or inference complexity, demonstrating the effectiveness of the
proposed two-level distillation framework.
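
To make the two-level scheme concrete, the following is a minimal sketch of how each auxiliary peer could derive its own soft target via dot-product attention over the other peers' predictions, assuming PyTorch; the projection sizes, temperature, and the use of batch-averaged logits as attention features are illustrative assumptions rather than the authors' exact implementation.

```python
# A minimal sketch (assuming PyTorch) of OKDDip-style target construction.
# Names, dimensions, temperature, and the use of batch-averaged logits as
# attention features are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn.functional as F

def first_level_targets(logits, query_proj, key_proj, temperature=3.0):
    """Give each auxiliary peer its own aggregated soft target.

    logits: (num_peers, batch, num_classes) predictions of the auxiliary peers.
    Returns: (num_peers, batch, num_classes) per-peer target distributions.
    """
    num_peers = logits.shape[0]
    feats = logits.mean(dim=1)                        # (num_peers, num_classes)
    q, k = query_proj(feats), key_proj(feats)         # (num_peers, d)
    scores = q @ k.t() / (q.shape[-1] ** 0.5)         # pairwise attention scores
    # Each peer draws its target only from the *other* peers' predictions.
    mask = torch.eye(num_peers, dtype=torch.bool)
    scores = scores.masked_fill(mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)               # distinct weights per peer
    probs = F.softmax(logits / temperature, dim=-1)   # softened peer predictions
    # target_a = sum_b w_ab * p_b  (peers see different mixtures -> diversity)
    return torch.einsum("ab,bnc->anc", weights, probs)

if __name__ == "__main__":
    torch.manual_seed(0)
    T = 3.0
    peer_logits = torch.randn(4, 8, 10)               # 4 peers, batch 8, 10 classes
    q_proj, k_proj = torch.nn.Linear(10, 16), torch.nn.Linear(10, 16)

    # First-level distillation: each peer matches its own aggregated target.
    targets = first_level_targets(peer_logits, q_proj, k_proj, T)
    peer_kd = F.kl_div(F.log_softmax(peer_logits / T, dim=-1),
                       targets, reduction="batchmean")

    # Second-level distillation (sketch): the group leader, i.e., the model kept
    # for inference, matches an ensemble of the auxiliary peers' predictions.
    leader_logits = torch.randn(8, 10)
    ensemble = F.softmax(peer_logits / T, dim=-1).mean(dim=0)
    leader_kd = F.kl_div(F.log_softmax(leader_logits / T, dim=-1),
                         ensemble, reduction="batchmean")
    print(targets.shape, peer_kd.item(), leader_kd.item())
```

In this sketch the first-level loss trains each auxiliary peer against its own attention-weighted mixture of the other peers' softened predictions, while the second-level loss distills an ensemble of the auxiliary peers into the group leader, mirroring the two levels described above.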