Exemplar Normalization for Learning Deep Representation
Normalization techniques play an important role in modern deep neural networks
across a wide range of tasks. This work investigates a novel dynamic
learning-to-normalize (L2N) problem by proposing Exemplar Normalization (EN),
which is able to learn different normalization methods for different
convolutional layers and image samples of a deep network. EN significantly
improves the flexibility of the recently proposed switchable normalization (SN),
which solves a static L2N problem by linearly combining several normalizers in
each normalization layer (the combination weights are shared by all samples). Instead
of directly employing a multi-layer perceptron (MLP) to learn data-dependent
parameters as conditional batch normalization (cBN) did, the internal
architecture of EN is carefully designed to stabilize its optimization, leading
to many appealing benefits. (1) EN enables different convolutional layers,
image samples, categories, benchmarks, and tasks to use different normalization
methods, shedding light on analyzing them in a holistic view. (2) EN is
effective for various network architectures and tasks. (3) It could replace any
normalization layers in a deep network and still produce stable model training.
Extensive experiments demonstrate the effectiveness of EN in a wide spectrum of
tasks including image recognition, noisy label learning, and semantic
segmentation. For example, when replacing BN in a standard ResNet50, the
improvement produced by EN is 300% larger than that of SN on both ImageNet and
the noisy WebVision dataset.

Comment: Accepted by CVPR 2020; normalization methods, image classification
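As a rough illustration of the idea described above, the sketch below mixes instance-, layer-, and batch-normalized versions of a feature map with per-sample softmax weights produced from globally pooled features. The gating parameters (`gate_w`, `gate_b`) and the choice of pooled features as the gate input are assumptions for illustration, not the paper's exact EN architecture (which is carefully designed to stabilize optimization).

```python
import numpy as np

def exemplar_norm(x, gate_w, gate_b, eps=1e-5):
    """Sketch of exemplar normalization for x of shape (N, C, H, W):
    a per-sample softmax mixture of IN-, LN-, and BN-normalized features.
    gate_w (C, 3) and gate_b (3,) are a hypothetical gating layer."""
    # candidate normalizers, differing only in which axes statistics use
    x_in = (x - x.mean(axis=(2, 3), keepdims=True)) / \
           np.sqrt(x.var(axis=(2, 3), keepdims=True) + eps)      # instance norm
    x_ln = (x - x.mean(axis=(1, 2, 3), keepdims=True)) / \
           np.sqrt(x.var(axis=(1, 2, 3), keepdims=True) + eps)   # layer norm
    x_bn = (x - x.mean(axis=(0, 2, 3), keepdims=True)) / \
           np.sqrt(x.var(axis=(0, 2, 3), keepdims=True) + eps)   # batch norm
    # per-sample importance ratios from globally pooled features (assumption)
    logits = x.mean(axis=(2, 3)) @ gate_w + gate_b               # (N, 3)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    w = e / e.sum(axis=1, keepdims=True)                         # rows sum to 1
    stack = np.stack([x_in, x_ln, x_bn], axis=1)                 # (N, 3, C, H, W)
    return (w[:, :, None, None, None] * stack).sum(axis=1)
```

Because the weights depend on each sample, different images can effectively use different normalization methods in the same layer, unlike SN, where one learned combination is shared by the whole dataset.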