A K Nearest Classifier design
This paper presents a multi-classifier system whose design is controlled by the topology of the learning data. Our work also introduces a training algorithm for an incremental self-organizing map (SOM). This SOM is used to distribute classification tasks among a set of classifiers, so that only the relevant classifiers are activated when new data arrive. Comparative results are given for synthetic problems, for an image segmentation problem from the UCI repository, and for a handwritten digit recognition problem.
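The growing-map idea above can be sketched in a few lines. This is a deliberately simplified stand-in for the paper's incremental SOM, assuming a plain distance threshold for node creation (the class name, `threshold`, and `lr` are illustrative, not from the paper); each returned node index would point to one local classifier in the full system.

```python
import numpy as np

# Simplified incremental SOM sketch: a node is added whenever a sample is
# farther than `threshold` from every existing node; otherwise the winning
# node is moved toward the sample. Each node index would select one local
# classifier in the multi-classifier system.
class IncrementalSOM:
    def __init__(self, threshold=1.0, lr=0.1):
        self.threshold = threshold
        self.lr = lr
        self.nodes = []  # list of node weight vectors

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x.copy())
            return 0
        dists = [np.linalg.norm(x - w) for w in self.nodes]
        best = int(np.argmin(dists))
        if dists[best] > self.threshold:
            self.nodes.append(x.copy())   # grow the map: new region of the data
            return len(self.nodes) - 1
        self.nodes[best] += self.lr * (x - self.nodes[best])  # move the winner
        return best

som = IncrementalSOM(threshold=1.0)
idx_a = som.update([0.0, 0.0])   # first sample creates node 0
idx_b = som.update([5.0, 5.0])   # far from node 0: creates node 1
idx_c = som.update([0.1, 0.0])   # close to node 0: routed there
```

Routing a sample to `update`'s returned index is what lets only the classifiers responsible for that region of input space be activated.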
Complex-valued embeddings of generic proximity data
Proximities are at the heart of almost all machine learning methods. If the input data are given as numerical vectors of equal length, the Euclidean distance or a Hilbertian inner product is frequently used in modeling algorithms. In a more generic view, objects are compared by a (symmetric) similarity or dissimilarity measure, which may not obey particular mathematical properties. This renders many machine learning methods invalid, leading to convergence problems and the loss of guarantees such as generalization bounds. In many cases the preferred dissimilarity measure is not metric, like the earth mover's distance, or the similarity measure may not be a simple inner product in a Hilbert space but in its generalization, a Krein space. If the input data are non-vectorial, like text sequences, proximity-based learning is used or n-gram embedding techniques can be applied. Standard embeddings lead to the desired fixed-length vector encoding, but are costly and have substantial limitations in preserving the full information of the original data. As an information-preserving alternative, we propose a complex-valued vector embedding of proximity data. This allows suitable machine learning algorithms to use these fixed-length, complex-valued vectors for further processing. The complex-valued data can serve as input to complex-valued machine learning algorithms; in particular, we address supervised learning and use extensions of prototype-based learning. The proposed approach is evaluated on a variety of standard benchmarks and shows strong performance compared to traditional techniques in processing non-metric or non-PSD proximity data.
Comment: Proximity learning, embedding, complex values, complex-valued embedding, learning vector quantization
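A minimal sketch of how an indefinite similarity matrix can be turned into complex-valued vectors, assuming the standard spectral construction (the abstract does not spell out the exact recipe): decompose the symmetric matrix S as U diag(l) U^T and embed each object as a row of U * sqrt(l) over the complex numbers, so negative eigenvalues contribute imaginary coordinates rather than being clipped or flipped.

```python
import numpy as np

# Spectral complex-valued embedding sketch for a symmetric, possibly
# indefinite (non-PSD) similarity matrix S. Taking sqrt over the complex
# numbers keeps the information carried by negative eigenvalues: the
# bilinear product X X^T (plain transpose, no conjugation) recovers S.
def complex_embedding(S):
    S = np.asarray(S, dtype=float)
    eigvals, eigvecs = np.linalg.eigh(S)      # S symmetric, maybe non-PSD
    roots = np.sqrt(eigvals.astype(complex))  # sqrt of negatives -> imaginary
    return eigvecs * roots                    # row i embeds object i

# Indefinite toy similarity matrix (eigenvalues 3 and -1).
S = np.array([[1.0, 2.0],
              [2.0, 1.0]])
X = complex_embedding(S)
S_rec = (X @ X.T).real  # bilinear product reconstructs S exactly
```

In contrast, PSD corrections such as eigenvalue clipping would discard the negative part of the spectrum, which is exactly the information this embedding preserves.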
A SVM-based cursive character recognizer
This paper presents a cursive character recognizer, a crucial module in any cursive word recognition system based on a segmentation-and-recognition approach. Character classification is achieved using support vector machines (SVMs) and a neural gas. The neural gas is used to verify whether the lower- and upper-case versions of a given letter can be merged into a single class. Once this is done for every letter, character recognition is performed by the SVMs. A database of 57,293 characters was used to train and test the recognizer. In terms of recognition rate, the SVMs compare notably favorably with popular neural classifiers such as learning vector quantization and the multi-layer perceptron; the SVM recognition rate is among the highest reported in the literature for cursive character recognition.
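The SVM classification stage can be illustrated with a short scikit-learn sketch. This is an assumption-laden stand-in: it uses the bundled printed-digits dataset in place of the paper's 57,293-character cursive database, and omits the neural-gas case-merging step entirely.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Multiclass character classification with an RBF-kernel SVM (scikit-learn
# handles the one-vs-one decomposition internally). The dataset and
# hyperparameters here are illustrative, not those of the paper.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", gamma="scale", C=10.0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

The margin-maximizing objective is what gives SVMs their edge over LVQ and multi-layer-perceptron baselines on this kind of high-dimensional, many-class problem.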
Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks
Adversarial attacks and the development of (deep) neural networks robust against them are currently two widely researched topics. The robustness of Learning Vector Quantization (LVQ) models against adversarial attacks has, however, not yet been studied to the same extent. We therefore present an extensive evaluation of three LVQ models: Generalized LVQ, Generalized Matrix LVQ, and Generalized Tangent LVQ. The evaluation suggests that both Generalized LVQ and Generalized Tangent LVQ have a high base robustness, on par with the current state of the art in robust neural network methods. In contrast, Generalized Matrix LVQ shows a high susceptibility to adversarial attacks, scoring consistently behind all other models. Additionally, our numerical evaluation indicates that increasing the number of prototypes per class improves the robustness of the models.
Comment: to be published in the 13th International Workshop on Self-Organizing Maps and Learning Vector Quantization, Clustering and Data Visualization
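For readers unfamiliar with the model family under attack, here is a minimal Generalized LVQ (GLVQ) training sketch, assuming one stochastic gradient step per sample on the GLVQ cost mu(x) = (d_plus - d_minus) / (d_plus + d_minus), where d_plus is the squared distance to the closest correct-class prototype and d_minus to the closest wrong-class one. The data, prototype counts, and learning rate are illustrative.

```python
import numpy as np

# Minimal GLVQ: attract the best-matching correct-class prototype and repel
# the best-matching wrong-class prototype, weighted by the GLVQ cost
# gradient (dm and dp over the squared denominator, constants folded
# into the learning rate).
def train_glvq(X, y, prototypes, proto_labels, lr=0.05, epochs=30):
    W = prototypes.copy()
    for _ in range(epochs):
        for x, c in zip(X, y):
            d = ((W - x) ** 2).sum(axis=1)
            same = proto_labels == c
            p = np.where(same)[0][np.argmin(d[same])]     # closest correct prototype
            m = np.where(~same)[0][np.argmin(d[~same])]   # closest wrong prototype
            dp, dm = d[p], d[m]
            denom = (dp + dm) ** 2
            W[p] += lr * (dm / denom) * (x - W[p])        # attract
            W[m] -= lr * (dp / denom) * (x - W[m])        # repel
    return W

def predict(W, proto_labels, X):
    d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Two well-separated Gaussian classes, two prototypes per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(2, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
W0 = np.vstack([rng.normal(0, 0.3, (2, 2)), rng.normal(2, 0.3, (2, 2))])
labels = np.array([0, 0, 1, 1])

W = train_glvq(X, y, W0, labels)
acc = (predict(W, labels, X) == y).mean()
```

Adding rows to `W0` and `labels` is the "more prototypes per class" knob the abstract reports as improving robustness; the nearest-prototype decision rule itself is what gives these models their interpretable, margin-like geometry.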