Generative Sliced MMD Flows with Riesz Kernels

Abstract

Maximum mean discrepancy (MMD) flows suffer from high computational costs in large-scale computations. In this paper, we show that MMD flows with Riesz kernels $K(x,y) = -\|x-y\|^r$, $r \in (0,2)$, have exceptional properties which allow for their efficient computation. First, the MMD of Riesz kernels coincides with the MMD of their sliced version. As a consequence, the computation of gradients of MMDs can be performed in the one-dimensional setting. Here, for $r=1$, a simple sorting algorithm can be applied to reduce the complexity from $O(MN+N^2)$ to $O((M+N)\log(M+N))$ for two empirical measures with $M$ and $N$ support points. For the implementations, we approximate the gradient of the sliced MMD by using only a finite number $P$ of slices. We show that the resulting error has complexity $O(\sqrt{d/P})$, where $d$ is the data dimension. These results enable us to train generative models by approximating MMD gradient flows by neural networks even for large-scale applications. We demonstrate the efficiency of our model by image generation on MNIST, FashionMNIST and CIFAR10.
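For illustration, below is a minimal sketch (not the authors' reference implementation) of the idea described in the abstract for the $r=1$ Riesz (negative distance) kernel $K(x,y) = -\|x-y\|$: the sliced MMD between two empirical measures is estimated by projecting the $d$-dimensional points onto $P$ random directions and evaluating each one-dimensional term with sorting and prefix sums in $O((M+N)\log(M+N))$, instead of forming the full $O(MN+N^2)$ pairwise distance matrix. All function names (sliced_mmd2, pairwise_abs_sum_1d, cross_abs_sum_1d) are illustrative assumptions, not taken from the paper's code.

# Sketch of a sliced MMD estimate with the negative distance kernel, assuming r = 1.
import numpy as np

def pairwise_abs_sum_1d(x):
    # Sum_{i,j} |x_i - x_j| for a 1D array in O(n log n) via sorting.
    x = np.sort(x)
    n = x.size
    coeff = 2.0 * np.arange(n) - (n - 1)   # signed count of appearances of each sorted value
    return 2.0 * np.dot(coeff, x)

def cross_abs_sum_1d(x, y):
    # Sum_{i,j} |x_i - y_j| using sorted y and prefix sums, O((M+N) log M).
    y = np.sort(y)
    csum = np.concatenate(([0.0], np.cumsum(y)))
    k = np.searchsorted(y, x, side="right")          # number of y_j <= x_i
    below = x * k - csum[k]                          # contributions with y_j <= x_i
    above = (csum[-1] - csum[k]) - x * (y.size - k)  # contributions with y_j > x_i
    return np.sum(below + above)

def sliced_mmd2(X, Y, P=100, seed=None):
    # Monte Carlo estimate of the squared sliced MMD with K(x,y) = -|x-y| using P random slices.
    rng = np.random.default_rng(seed)
    N, d = X.shape
    M = Y.shape[0]
    dirs = rng.standard_normal((P, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # directions on the unit sphere
    vals = []
    for w in dirs:
        xp, yp = X @ w, Y @ w
        mmd2 = (-pairwise_abs_sum_1d(xp) / N**2
                - pairwise_abs_sum_1d(yp) / M**2
                + 2.0 * cross_abs_sum_1d(xp, yp) / (N * M))
        vals.append(mmd2)
    return float(np.mean(vals))

# Example usage: two Gaussian point clouds in d = 10.
X = np.random.default_rng(0).standard_normal((500, 10))
Y = np.random.default_rng(1).standard_normal((400, 10)) + 0.5
print(sliced_mmd2(X, Y, P=200))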
