It is often tedious and expensive to label large training datasets for learning-based image classification. This problem can be alleviated by self-supervised learning techniques, which train classifiers from a mixture of labeled and unlabeled data. However, the feature dimension is usually very high (typically from tens to several hundreds), so the learning is afflicted by the curse of dimensionality: the search space grows exponentially with the dimension. Discriminant-EM (DEM) addresses such tasks by applying self-supervised learning in an optimal discriminating subspace of the original feature space. However, the algorithm is limited by its linear transformation structure, which cannot capture non-linearity in the class distribution. This paper extends the linear DEM to a nonlinear kernel algorithm, Kernel DEM (KDEM), based on kernel multiple discriminant analysis (KMDA). KMDA is better able to simplify the probabilistic structure of the data distribution in a discriminating subspace. KMDA and KDEM are evaluated on both benchmark databases and synthetic data. Experimental results show that classifiers using KMDA are comparable with support vector machines (SVMs) on standard benchmark tests, and that KDEM outperforms a variety of supervised and semi-supervised learning algorithms.
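The abstract does not spell out the KMDA computation, but the underlying idea can be illustrated with a minimal two-class kernel Fisher discriminant, a simplified stand-in for the multi-class KMDA described above. The sketch below is an assumption about the general technique, not the paper's exact formulation: it builds an RBF kernel matrix, forms the between-class mean difference and within-class scatter in kernel space, and solves for the projection coefficients `alpha` that maximize the Fisher ratio. All function names and parameters (`gamma`, `reg`) are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between rows of X and rows of Y."""
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def kfd_fit(X, y, gamma=1.0, reg=1e-3):
    """Two-class kernel Fisher discriminant (simplified stand-in for KMDA).

    Returns expansion coefficients alpha over the training points; the
    discriminating direction in feature space is sum_j alpha_j * phi(x_j).
    """
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    means = []
    N = np.zeros((n, n))              # within-class scatter in kernel space
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        Kc = K[:, idx]                # n x n_c columns for class c
        means.append(Kc.mean(axis=1))
        nc = len(idx)
        N += Kc @ (np.eye(nc) - np.ones((nc, nc)) / nc) @ Kc.T
    N += reg * np.eye(n)              # regularize to keep N invertible
    # Fisher direction: alpha proportional to N^{-1} (m_0 - m_1)
    return np.linalg.solve(N, means[0] - means[1])

def kfd_project(alpha, Xtrain, Xnew, gamma=1.0):
    """Project new points onto the learned discriminant direction."""
    return rbf_kernel(Xnew, Xtrain, gamma) @ alpha
```

In a KDEM-style loop, one would alternate this projection step with EM on the projected labeled and unlabeled data; the sketch covers only the discriminant-analysis component.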