    Research report / Hochschule Mittweida

    High Dimensional Matrix Relevance Learning

    Schleif F-M, Villmann T, Zhu X. High Dimensional Matrix Relevance Learning. In: 2014 IEEE International Conference on Data Mining Workshop. Piscataway, NJ: IEEE; 2015.

    In supervised learning, the parameters of a parametric Euclidean or Mahalanobis distance can be learned effectively by so-called Matrix Relevance Learning. This adaptation is useful not only to improve the discrimination capabilities of the model, but also to identify relevant features or relevant correlated features in the input data. Classical Matrix Relevance Learning scales quadratically with the number of input dimensions M and becomes prohibitive if M exceeds some thousand input features. We address Matrix Relevance Learning for data with a very large number of input dimensions. Such high-dimensional data occur frequently in the life sciences domain, e.g. for microarray or spectral data. We derive two approximation schemes and show exemplarily their implementation in Generalized Matrix Relevance Learning Vector Quantization (GMLVQ) for classification problems. The first approximation scheme is based on Limited Rank Matrix Approximation (LiRaM), a random subspace projection technique formerly considered mainly for visualization purposes. The second, novel approximation scheme is based on the Nyström approximation and is exact if the number of eigenvalues equals the rank of the relevance matrix. Using multiple benchmark problems, we demonstrate that the training process yields fast low-rank approximations of the relevance matrices without harming the generalization ability. The approaches can be used to identify discriminative features in high-dimensional data sets.
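    The central object of the abstract is the adaptive distance d(x, w) = (x - w)^T Lambda (x - w) with a positive semi-definite relevance matrix Lambda = Omega^T Omega. The following NumPy sketch (not the authors' code; all names and values are illustrative) shows the limited-rank idea behind LiRaM: parameterizing Omega as an m x M matrix with m << M keeps both storage and each distance evaluation at O(m*M) instead of the O(M^2) cost of the full relevance matrix.

        import numpy as np

        rng = np.random.default_rng(0)
        M = 10_000  # input dimensionality, e.g. spectral channels
        m = 10      # rank of the approximation, m << M

        # Low-rank projection Omega (m x M); Lambda = Omega^T Omega is
        # never formed explicitly, which is the point of the approximation.
        omega = rng.standard_normal((m, M)) / np.sqrt(M)

        def relevance_distance(x, w, omega):
            """d(x, w) = ||Omega (x - w)||^2, computed in O(m * M)."""
            diff = omega @ (x - w)   # project the difference vector
            return float(diff @ diff)

        x = rng.standard_normal(M)   # a sample
        w = rng.standard_normal(M)   # a prototype
        print(relevance_distance(x, w, omega))

    The Nyström scheme can likewise be illustrated on a generic positive semi-definite matrix. The sketch below is an assumption-laden illustration, not the paper's derivation: it reconstructs a rank-r matrix from k sampled landmark columns, and, as the abstract states for the relevance matrix, the reconstruction is exact when the number of landmarks matches the rank.

        import numpy as np

        rng = np.random.default_rng(1)
        M, r, k = 500, 5, 5              # dimension, true rank, landmarks
        B = rng.standard_normal((r, M))
        lam = B.T @ B                    # rank-r PSD stand-in for Lambda

        idx = rng.choice(M, size=k, replace=False)  # landmark indices
        C = lam[:, idx]                              # M x k column sample
        W = lam[np.ix_(idx, idx)]                    # k x k core block
        lam_hat = C @ np.linalg.pinv(W) @ C.T        # Nystroem reconstruction

        # ~0 here: with k == r the sampled columns (generically) span the
        # column space of lam, so the Nystroem approximation is exact.
        print(np.linalg.norm(lam - lam_hat))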