Traditional knowledge distillation uses a two-stage training strategy to
transfer knowledge from a high-capacity teacher model to a compact student
model, which relies heavily on the pre-trained teacher. Recent online knowledge
distillation alleviates this limitation by collaborative learning, mutual
learning and online ensembling, following a one-stage end-to-end training
fashion. However, collaborative learning and mutual learning fail to construct
an online high-capacity teacher, whilst online ensembling ignores the
collaboration among branches and its logit summation impedes the further
optimisation of the ensemble teacher. In this work, we propose a novel Peer
Collaborative Learning method for online knowledge distillation, which
integrates online ensembling and network collaboration into a unified
framework. Specifically, given a target network, we construct a multi-branch
network for training, in which each branch is called a peer. We perform random
augmentation multiple times on the inputs to the peers, and assemble the
feature representations output by the peers, together with an additional
classifier, as the peer ensemble teacher. This helps to transfer knowledge from a high-capacity teacher
to peers, and in turn further optimises the ensemble teacher. Meanwhile, we
employ the temporal mean model of each peer as the peer mean teacher to
collaboratively transfer knowledge among peers, which helps each peer to learn
richer knowledge and facilitates the optimisation of a more stable model with better
generalisation. Extensive experiments on CIFAR-10, CIFAR-100 and ImageNet show
that the proposed method significantly improves the generalisation of various
backbone networks and outperforms the state-of-the-art methods.
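
A minimal PyTorch-style sketch of the peer ensemble teacher described above. The layer sizes, peer count and module names are illustrative assumptions rather than the authors' implementation: each peer shares low-level layers, receives an independently augmented view of the input, and the assembled peer features feed an additional classifier that acts as the ensemble teacher.

    import torch
    import torch.nn as nn

    class PeerEnsembleNet(nn.Module):
        def __init__(self, num_classes=10, num_peers=3, feat_dim=128):
            super().__init__()
            # Shared low-level layers (kept deliberately small for illustration).
            self.shared = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(8),
            )
            # Each peer is a separate high-level branch producing a feature vector.
            self.peers = nn.ModuleList([
                nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, feat_dim), nn.ReLU())
                for _ in range(num_peers)
            ])
            # Per-peer classifiers, plus the extra classifier on assembled features.
            self.peer_heads = nn.ModuleList([nn.Linear(feat_dim, num_classes)
                                             for _ in range(num_peers)])
            self.ensemble_head = nn.Linear(feat_dim * num_peers, num_classes)

        def forward(self, views):
            # `views` is a list of differently augmented versions of the same batch.
            feats = [peer(self.shared(v)) for peer, v in zip(self.peers, views)]
            peer_logits = [head(f) for head, f in zip(self.peer_heads, feats)]
            # Assemble peer features and classify them: the peer ensemble teacher.
            teacher_logits = self.ensemble_head(torch.cat(feats, dim=1))
            return peer_logits, teacher_logits

    # Toy usage with three augmented views of a CIFAR-sized batch.
    net = PeerEnsembleNet()
    views = [torch.randn(4, 3, 32, 32) for _ in range(3)]
    peer_logits, teacher_logits = net(views)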
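
A hedged sketch of the peer mean teacher: each peer keeps a temporal mean (exponential moving average) copy of its weights, whose predictions are used to transfer knowledge to the other peers. The decay value and the toy module are illustrative assumptions.

    import copy
    import torch
    import torch.nn as nn

    @torch.no_grad()
    def update_mean_teacher(peer, mean_teacher, decay=0.999):
        # EMA update: theta_teacher <- decay * theta_teacher + (1 - decay) * theta_peer
        for t_param, p_param in zip(mean_teacher.parameters(), peer.parameters()):
            t_param.mul_(decay).add_(p_param, alpha=1 - decay)

    # Toy usage: the mean teacher starts as a copy of the peer and is updated
    # once per training step; its soft predictions supervise the other peers.
    peer = nn.Linear(16, 10)
    mean_teacher = copy.deepcopy(peer)
    update_mean_teacher(peer, mean_teacher)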