
    Efficient Kernelization of Discriminative Dimensionality Reduction

    Schulz A, Brinkrolf J, Hammer B. Efficient Kernelization of Discriminative Dimensionality Reduction. Neurocomputing. 2017;268(SI):34-41.
    Modern nonlinear dimensionality reduction (DR) techniques project high-dimensional data to low dimensions for visual inspection. Provided the intrinsic data dimensionality is larger than two, DR necessarily faces information loss and the problem becomes ill-posed. Discriminative dimensionality reduction (DiDi) offers one intuitive way to reduce this ambiguity: it allows a practitioner to identify what is relevant and what should be regarded as noise by means of intuitive auxiliary information such as class labels. One powerful DiDi method relies on a change of the data metric based on the Fisher information. This technique has been presented for vectorial data so far. The aim of this contribution is to extend the technique, by means of a kernelisation, to more general data structures which are characterised in terms of pairwise similarities only. We demonstrate that a computation of the Fisher metric is possible in kernel space, and that it can efficiently be integrated into modern DR technologies such as t-SNE or the faster Barnes-Hut-SNE. We demonstrate the performance of the approach in a variety of benchmarks.
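    The abstract above describes building a label-informed (Fisher) metric and feeding the resulting discriminative distances into t-SNE. As a loose, hypothetical illustration of the general idea only (not the paper's actual Fisher-metric computation), the sketch below estimates class posteriors with a Parzen window, which needs nothing but pairwise distances and hence also works in kernel space, and inflates distances where those posteriors change:

```python
import numpy as np

def class_posteriors(X, y, bandwidth=1.0):
    # Parzen-window estimate of p(c | x); it uses only pairwise squared
    # distances, so the same computation is possible in kernel space.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * bandwidth ** 2))
    classes = np.unique(y)
    post = np.stack([K[:, y == c].sum(axis=1) for c in classes], axis=1)
    return post / post.sum(axis=1, keepdims=True)

def discriminative_distances(X, y, bandwidth=1.0):
    # Crude discriminative metric: the Euclidean distance is inflated
    # wherever the estimated class posteriors differ between two points,
    # so class-relevant directions dominate the embedding.
    post = class_posteriors(X, y, bandwidth)
    d_euc = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    d_post = np.abs(post[:, None, :] - post[None, :, :]).sum(-1)
    return d_euc * (1.0 + d_post)
```

    The resulting symmetric matrix could then be handed to a t-SNE implementation that accepts precomputed distances, e.g. scikit-learn's `TSNE(metric="precomputed")`.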

    Differential privacy for learning vector quantization

    Brinkrolf J, Göpfert C, Hammer B. Differential privacy for learning vector quantization. Neurocomputing. 2019;342:125-136

    Learning Vector Quantization for the Real-World: Privacy, Robustness, and Sparsity

    Brinkrolf J. Learning Vector Quantization for the Real-World: Privacy, Robustness, and Sparsity. Bielefeld: Universität Bielefeld; 2023.
    Machine Learning (ML) methods are increasingly used and outperform humans in many specified and well-defined tasks. Considerable research focuses on optimizing the performance of such methodologies. However, the nature of application areas poses further challenges. In critical domains such as traffic or medicine, false model behaviour poses the risk of fatal mistakes. In medicine in particular, the data frequently contains sensitive information which should be preserved. Further, much data is recorded on distributed devices with limited computational power, like smartphones and peripheral devices, so models of low complexity are required. As data transfer is limited due to technical, legal, or strategic constraints, employing intelligent mechanisms is crucial. This requires the consideration of aspects beyond mere accuracy, namely privacy, robustness, efficiency, and the distribution of the data itself. In this thesis, I address these additional aspects for prototype-based classifiers. In particular, I focus on Generalized Learning Vector Quantization (GLVQ) models and their variants with metric adaptation. I show that the original GLVQ model bears the risk of revealing private information about samples present during training, and I propose three training schemes that provably preserve privacy. Further, I propose a novel reject option scheme for GLVQ models, thereby increasing the robustness of the model. To reduce the complexity of a model and obtain a sparse representation of feature vectors, I apply regularization to the GLVQ scheme. Finally, I propose a methodology fusing the model parameters of several models trained on distributed data sets.
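    Several of the entries in this list revolve around reject options for (G)LVQ classifiers. A common certainty measure in this model family is the relative distance difference between the two closest prototypes of different classes; the sketch below is a simplified, hypothetical variant of that idea (not the exact scheme from the thesis), abstaining whenever the measure falls below a threshold:

```python
import numpy as np

def lvq_predict_with_reject(x, prototypes, labels, threshold=0.2):
    # Nearest-prototype classification with a simple reject option:
    # r = (d2 - d1) / (d1 + d2), where d1 is the distance to the closest
    # prototype overall and d2 the distance to the closest prototype of
    # any other class. r lies in [0, 1]; values near 0 mean x sits close
    # to a decision border, so the classifier abstains (returns None).
    d = np.linalg.norm(prototypes - x, axis=1)
    winner = labels[np.argmin(d)]
    d1 = d[labels == winner].min()
    d2 = d[labels != winner].min()
    r = (d2 - d1) / (d1 + d2)
    return winner if r >= threshold else None
```

    Rejected samples can then be routed to a human expert or a fallback model, which is exactly the robustness benefit a reject option is meant to provide in critical domains.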

    Interpretable Machine Learning with Reject Option

    Brinkrolf J, Hammer B. Interpretable Machine Learning with Reject Option. at - Automatisierungstechnik. 2018;66(4):283-290

    Time integration and reject options for probabilistic output of pairwise LVQ

    Brinkrolf J, Hammer B. Time integration and reject options for probabilistic output of pairwise LVQ. Neural Computing and Applications. 2019

    Probabilistic extension and reject options for pairwise LVQ

    Brinkrolf J, Hammer B. Probabilistic extension and reject options for pairwise LVQ. In: 2017 12th International Workshop on Self-Organizing Maps and Learning Vector Quantization, Clustering and Data Visualization (WSOM). Piscataway, NJ: IEEE; 2017

    Sparse Metric Learning in Prototype-based Classification

    Brinkrolf J, Hammer B. Sparse Metric Learning in Prototype-based Classification. In: Verleysen M, ed. Proceedings of the ESANN, 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. 2020: 375-380

    Robust Feature Selection and Robust Training to Cope with Hyperspectral Sensor Shifts

    Vaquet V, Brinkrolf J, Hammer B. Robust Feature Selection and Robust Training to Cope with Hyperspectral Sensor Shifts

    Differential Privacy for Learning Vector Quantization

    Brinkrolf J, Berger K, Hammer B. Differential Privacy for Learning Vector Quantization. In: New Challenges in Neural Computation. 2017