Watch your Up-Convolution: CNN Based Generative Deep Neural Networks are Failing to Reproduce Spectral Distributions
Generative convolutional deep neural networks, e.g. popular GAN
architectures, rely on convolution-based up-sampling methods to produce
non-scalar outputs such as images or video sequences. In this paper, we
show that common up-sampling methods, known as up-convolution or
transposed convolution, cause such models to fail to correctly reproduce
the spectral distributions of natural training data. This effect is
independent of the underlying architecture, and we show that it can be
used to easily detect generated data such as deepfakes with up to 100%
accuracy on public benchmarks. To overcome this drawback of current
generative models, we propose adding a novel spectral regularization
term to the training optimization objective. We show that this approach
not only allows training spectrally consistent GANs that avoid
high-frequency errors, but also that a correct approximation of the
frequency spectrum has positive effects on the training stability and
output quality of generative networks.
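The spectral comparison described above can be sketched in a few lines of NumPy: bin the 2D power spectrum of an image into a 1D radial profile, then penalize the squared difference between the profiles of generated and real images. This is a minimal illustration of the idea, not the paper's implementation; the function names and the exact binning are assumptions.

```python
import numpy as np

def radial_power_spectrum(img):
    """Azimuthally averaged power spectrum of a 2D image:
    bin the shifted 2D FFT power by integer radial frequency."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2).astype(int)
    # Mean power within each integer-radius bin
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

def spectral_loss(generated, real):
    """L2 distance between the 1D spectral profiles of two images --
    the kind of penalty a spectral regularization term would add
    (hypothetical helper, for illustration only)."""
    ps_g = radial_power_spectrum(generated)
    ps_r = radial_power_spectrum(real)
    n = min(len(ps_g), len(ps_r))
    return float(np.mean((ps_g[:n] - ps_r[:n]) ** 2))
```

In a real training loop this penalty would be computed with a differentiable FFT (e.g. in the deep-learning framework itself) and added to the generator's objective with a weighting coefficient.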
DeepEMD: Differentiable Earth Mover's Distance for Few-Shot Learning
Deep learning has proved to be very effective when learning from a large
amount of labelled data. Few-shot learning, in contrast, attempts to
learn from only a few labelled samples. In this work, we develop methods
for few-shot image
classification from a new perspective of optimal matching between image
regions. We employ the Earth Mover's Distance (EMD) as a metric to compute a
structural distance between dense image representations to determine image
relevance. The EMD generates the optimal matching flows between structural
elements that have the minimum matching cost, which is used to calculate the
image distance for classification. To generate the important weights of
elements in the EMD formulation, we design a cross-reference mechanism, which
can effectively alleviate the adverse impact caused by the cluttered background
and large intra-class appearance variations. To handle k-shot classification,
we propose to learn a structured fully connected layer that can directly
classify dense image representations with the proposed EMD. Based on the
implicit function theorem, the EMD can be inserted as a layer into the network
for end-to-end training. Our extensive experiments validate the effectiveness
of our algorithm which outperforms state-of-the-art methods by a significant
margin on four widely used few-shot classification benchmarks, namely,
miniImageNet, tieredImageNet, Fewshot-CIFAR100 (FC100) and Caltech-UCSD
Birds-200-2011 (CUB). Comment: Extended version of DeepEMD in CVPR 2020 (oral).
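The optimal matching between image regions can be illustrated with a small NumPy sketch. With uniform weights and equal region counts, the optimal transport plan of the EMD reduces to a one-to-one matching, so for a handful of regions a brute-force search over permutations gives the exact distance. This is an illustrative toy, not the paper's differentiable layer or its cross-reference weighting; the function name and cosine-based cost are assumptions.

```python
import numpy as np
from itertools import permutations

def emd_uniform(feats_a, feats_b):
    """Earth Mover's Distance between two small, equal-sized sets of
    region features (n_regions x dim) under uniform weights.
    The optimal plan is then a permutation, so brute force over
    matchings is exact for tiny n; real code would use a linear
    program or the implicit-function-theorem layer from the paper."""
    # Cosine matching cost between every pair of regions
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T
    n = cost.shape[0]
    idx = np.arange(n)
    # Minimum total matching cost over all one-to-one assignments
    best = min(cost[idx, list(perm)].sum() for perm in permutations(range(n)))
    return best / n
```

For classification, the image with the smallest such structural distance to the query determines its predicted class; the per-element weights and end-to-end training described in the abstract refine this basic matching.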