Dimensionality reduction by minimizing nearest-neighbor classification error

Abstract

There is great interest in dimensionality reduction techniques for tackling the problem of high-dimensional pattern classification. This paper addresses supervised learning of a linear dimensionality reduction mapping suitable for classification problems. The proposed optimization procedure is based on minimizing an estimate of the nearest-neighbor classifier error probability, and it learns a linear projection together with a small set of prototypes that support the class boundaries. The learned classifier is very computationally efficient, making classification much faster than state-of-the-art classifiers such as SVMs, while achieving competitive recognition accuracy. The approach has been assessed through a series of experiments, showing uniformly good behavior and results that are competitive with several recently proposed supervised dimensionality reduction techniques. © 2010 Elsevier B.V. All rights reserved.

Work partially supported by the Spanish projects TIN2008-04571 and Consolider Ingenio 2010: MIPRCV (CSD2007-00018).

Villegas, M.; Paredes Palacios, R. (2011). Dimensionality reduction by minimizing nearest-neighbor classification error. Pattern Recognition Letters, 32(4), 633-639. https://doi.org/10.1016/j.patrec.2010.12.002
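
As a rough illustration of why the learned classifier is cheap at test time, the sketch below shows the classification step only, not the paper's learning procedure: a sample is projected with a learned linear mapping and assigned the label of its nearest prototype in the reduced space. All names, shapes, and values here are illustrative assumptions rather than the paper's notation.

    import numpy as np

    # Illustrative sketch (not the paper's learning algorithm): once a linear
    # projection B (d x r, with r << d) and a small set of labeled prototypes
    # have been learned, classification reduces to one matrix-vector product
    # plus a nearest-prototype search in the low-dimensional space.

    def classify(x, B, prototypes, proto_labels):
        """Assign x the label of its nearest prototype in the projected space."""
        z = B.T @ x                                  # project to r dimensions
        d2 = np.sum((prototypes - z) ** 2, axis=1)   # squared distances to prototypes
        return proto_labels[np.argmin(d2)]

    # Hypothetical usage with random placeholders for the learned parameters.
    rng = np.random.default_rng(0)
    d, r, m = 100, 8, 20                          # original dim, reduced dim, #prototypes
    B = rng.standard_normal((d, r))               # learned projection (placeholder)
    prototypes = rng.standard_normal((m, r))      # learned prototypes in projected space
    proto_labels = rng.integers(0, 3, size=m)     # their class labels
    x = rng.standard_normal(d)                    # a test sample
    print(classify(x, B, prototypes, proto_labels))

Because the prototype set is small and the projection is low-dimensional, the per-sample cost is a single d-by-r product plus a handful of distance computations, which is consistent with the efficiency claim in the abstract.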
