Deep perceptual loss is a type of loss function in computer vision that aims
to mimic human perception by using deep features extracted from neural
networks. In recent years, the method has been applied with great effect to a
host of computer vision tasks, especially tasks with image or image-like
outputs, such as image synthesis, segmentation, and depth prediction. Many
applications of the method use pretrained networks, often convolutional
networks, for loss calculation. Despite the increased interest and broader
use, little systematic effort has gone into exploring which networks to use
for calculating deep perceptual loss and from which layers to extract the
features.
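To make the idea concrete, the following is a minimal PyTorch sketch of a deep perceptual loss. The class name, the choice of torchvision's pretrained VGG-16, and the default extraction layer are illustrative assumptions, not the specific configurations evaluated in this work:

```python
import torch
import torch.nn as nn
from torchvision import models

class DeepPerceptualLoss(nn.Module):
    """Minimal sketch: compares two images via the deep features of a
    frozen pretrained network. Which network to use and where to extract
    features (`extraction_layer`) are exactly the choices studied here."""

    def __init__(self, extraction_layer: int = 16):  # illustrative default
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Keep only the layers up to and including the chosen extraction point.
        self.features = vgg.features[: extraction_layer + 1]
        # The loss network stays fixed; only its activations are used.
        self.features.eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Distance in deep-feature space stands in for perceptual distance.
        return nn.functional.mse_loss(self.features(prediction), self.features(target))

# Usage: the loss is differentiable w.r.t. the prediction, so it can be used
# to train an image-generating model end to end.
loss_fn = DeepPerceptualLoss()
prediction = torch.rand(1, 3, 224, 224, requires_grad=True)
target = torch.rand(1, 3, 224, 224)
loss = loss_fn(prediction, target)
loss.backward()
```

Varying the network passed to the constructor and the `extraction_layer` argument corresponds to the two axes explored in this work.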
This work aims to rectify this gap by systematically evaluating a host of
commonly used and readily available pretrained networks, at a number of
different feature extraction points, on four existing use cases of deep
perceptual loss. The use cases of perceptual similarity, super-resolution,
image segmentation, and dimensionality reduction are evaluated through
benchmarks. The benchmarks are implementations of previous works on which
the selected networks and extraction points are evaluated. The performance
on the benchmarks, together with attributes of the networks and extraction
points, is then used as the basis for an in-depth analysis. This analysis
uncovers insights into
which architectures provide superior performance for deep perceptual loss and
how to choose an appropriate extraction point for a particular task and
dataset. Furthermore, the work discusses the implications of the results for
deep perceptual loss and the broader field of transfer learning. The results
show that deep perceptual loss deviates from two commonly held conventions in
transfer learning, which suggests that those conventions are in need of deeper
analysis.