Despite achieving state-of-the-art performance, deep learning methods
generally require a large amount of labeled data during training and may suffer
from overfitting when the sample size is small. To ensure good generalizability
of deep networks under small sample sizes, learning discriminative features is
crucial. To this end, several loss functions have been proposed to encourage
large intra-class compactness and inter-class separability. In this paper, we
propose to enhance the discriminative power of features from a new perspective
by introducing a novel neural network termed Relation-and-Margin learning
Network (ReMarNet). Our method assembles two networks of different backbones so
as to learn the features that can perform excellently in both of the
aforementioned two classification mechanisms. Specifically, a relation network
is used to learn the features that can support classification based on the
similarity between a sample and a class prototype; at the meantime, a fully
connected network with the cross entropy loss is used for classification via
the decision boundary. Experiments on four image datasets demonstrate that our
approach is effective in learning discriminative features from a small set of
labeled samples and achieves competitive performance against state-of-the-art
methods. Code is available at https://github.com/liyunyu08/ReMarNet.

Comment: IEEE TCSVT 202
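The two classification mechanisms described above can be illustrated with a minimal NumPy sketch. This is a simplified illustration, not the authors' implementation: the prototype branch stands in for the relation network (using negative squared distance to class-mean prototypes as similarity scores, whereas the paper learns the similarity function), and the linear-classifier branch stands in for the fully connected network; the function names and the simple sum of the two cross-entropy losses are assumptions for clarity.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true classes.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def two_branch_logits(features, labels, W, b):
    """Hypothetical sketch of ReMarNet's two mechanisms.

    Branch 1 (prototype/similarity): score each sample by negative
    squared distance to each class prototype (here, the class mean;
    the paper instead learns a relation network for this comparison).
    Branch 2 (decision boundary): a linear classifier whose logits
    feed a cross-entropy loss.
    """
    classes = np.unique(labels)
    prototypes = np.stack(
        [features[labels == c].mean(axis=0) for c in classes]
    )
    # (N, C) matrix of squared distances sample-to-prototype.
    dist2 = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    proto_logits = -dist2          # higher = more similar to prototype
    fc_logits = features @ W + b   # decision-boundary branch
    return proto_logits, fc_logits

# Toy usage: 6 samples, 4-dim features, 2 classes.
rng = np.random.default_rng(0)
features = rng.normal(size=(6, 4))
labels = np.array([0, 0, 0, 1, 1, 1])
W = rng.normal(size=(4, 2))
b = np.zeros(2)

proto_logits, fc_logits = two_branch_logits(features, labels, W, b)
# Joint objective (assumed equal weighting for illustration):
loss = cross_entropy(proto_logits, labels) + cross_entropy(fc_logits, labels)
```

Training both branches on shared features encourages representations that are simultaneously close to their class prototype (intra-class compactness) and separable by a decision boundary (inter-class separability).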