Maximum Variance Unfolding (MVU) and its variants have been very successful
in embedding data manifolds in lower-dimensional spaces, often revealing the
true intrinsic dimension. In this paper we show how to also incorporate
supervised class information into an MVU-like method without breaking its
convexity. We call this method the Isometric Separation Map and show that
the resulting kernel matrix can be used in a binary/multiclass Support Vector
Machine-like classifier within a semi-supervised (transductive) framework. We also show
that the method always finds a kernel matrix that linearly separates the
training data exactly, without projecting them into infinite-dimensional spaces.
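As a minimal sketch of the kind of convex program this suggests (the neighbor set $\mathcal{N}$, the fixed hyperplane parameters $\alpha \in \mathbb{R}^n$, $b \in \mathbb{R}$, and labels $y_i \in \{-1,+1\}$ are our notation and assumptions, not necessarily the paper's exact formulation), one can augment the MVU semidefinite program over the Gram matrix $K$ with separation constraints that are linear in $K$:

\begin{align*}
\max_{K \succeq 0} \quad & \operatorname{tr}(K) \\
\text{s.t.} \quad & K_{ii} - 2K_{ij} + K_{jj} = \lVert x_i - x_j \rVert^2 \quad \forall (i,j) \in \mathcal{N}, \\
& \textstyle\sum_{i,j} K_{ij} = 0, \\
& y_i \bigl( \textstyle\sum_j \alpha_j K_{ij} + b \bigr) \ge 1 \quad \text{for every labeled point } i.
\end{align*}

Since every added constraint is linear in $K$ and the cone of positive semidefinite matrices is convex, the augmented problem remains a semidefinite program.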
In traditional SVMs we choose a kernel and hope that the data become linearly
separable in the kernel space. In this paper we show how the hyperplane can
instead be fixed in advance and the kernel trained so that the data are always linearly
separable. Comparisons with Large Margin SVMs show comparable performance.
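As a concrete illustration, the following is a minimal sketch of how such a kernel-learning step could be set up with cvxpy; the function name, the default hyperplane choice (alpha, b), and all parameters are our own illustrative assumptions, not the authors' implementation.

import numpy as np
import cvxpy as cp
from sklearn.neighbors import NearestNeighbors

def isometric_separation_sketch(X, y, labeled, k=4, alpha=None, b=0.0):
    """Learn a Gram matrix K that preserves local distances (as in MVU)
    while the labeled points become linearly separable for a hyperplane
    (alpha, b) fixed in advance; y holds +/-1 labels at the `labeled` indices."""
    n = X.shape[0]
    K = cp.Variable((n, n), PSD=True)
    constraints = [cp.sum(K) == 0]  # center the embedding
    # Local isometry: preserve squared distances to the k nearest neighbors.
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    for i in range(n):
        for j in idx[i, 1:]:  # idx[i, 0] is the point itself
            d2 = float(np.sum((X[i] - X[j]) ** 2))
            constraints.append(K[i, i] - 2 * K[i, j] + K[j, j] == d2)
    # Separation: for the fixed hyperplane, these constraints are linear in K.
    if alpha is None:
        alpha = np.zeros(n)
        alpha[labeled] = y[labeled]  # one simple ad hoc choice of hyperplane
    for i in labeled:
        constraints.append(y[i] * (K[i, :] @ alpha + b) >= 1)
    # MVU-style objective: maximize the variance of the unfolded embedding.
    prob = cp.Problem(cp.Maximize(cp.trace(K)), constraints)
    prob.solve(solver=cp.SCS)
    return K.value

Unlabeled points enter only through the isometry constraints, which is what makes the setup transductive: a single learned K covers labeled and unlabeled points at once.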