    Noisy multi-label semi-supervised dimensionality reduction

    Noisy labeled data represent a rich source of information that is often easily accessible and cheap to obtain, but label noise can have many negative consequences if not accounted for. How to fully utilize noisy labels has been studied extensively within the framework of standard supervised machine learning over several decades. However, very little research has addressed the challenge posed by noisy labels in non-standard settings. These include situations where only a fraction of the samples are labeled (semi-supervised) and each high-dimensional sample is associated with multiple labels. In this work, we present a novel semi-supervised and multi-label dimensionality reduction method that effectively utilizes information from both noisy multi-labels and unlabeled data. With the proposed Noisy multi-label semi-supervised dimensionality reduction (NMLSDR) method, the noisy multi-labels are denoised and the unlabeled data are labeled simultaneously via a specially designed label propagation algorithm. NMLSDR then learns a projection matrix for reducing the dimensionality by maximizing the dependence between the enlarged and denoised multi-label space and the features in the projected space. Extensive experiments on synthetic data, benchmark datasets, and a real-world case study demonstrate the effectiveness of the proposed algorithm and show that it outperforms state-of-the-art multi-label feature extraction algorithms.
    Comment: 38 pages
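    The two stages this abstract describes — propagating (and denoising) labels over a similarity graph, then learning a projection that maximizes dependence between projected features and the label space — can be sketched in a generic form. The propagation scheme below is a standard Zhou-et-al.-style iteration and the dependence criterion is a linear-kernel HSIC formulation; both are assumptions for illustration, not the exact NMLSDR algorithm:

```python
import numpy as np

def label_propagation(X, Y, alpha=0.9, sigma=1.0, n_iter=100):
    """Generic label propagation over a Gaussian similarity graph.

    X: (n, d) features; Y: (n, q) initial multi-label matrix with zero
    rows for unlabeled samples. A standard scheme, not the paper's
    exact propagation step.
    """
    # Gaussian affinity with zeroed diagonal
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = Dinv @ W @ Dinv
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        # Blend neighbor information with the initial (noisy) labels
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F  # soft multi-label scores; threshold to get denoised labels

def hsic_projection(X, F, k):
    """Projection maximizing a linear-kernel HSIC-style dependence
    between projected features and the propagated label scores F."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    L = F @ F.T                          # linear label kernel
    M = X.T @ H @ L @ H @ X              # symmetric by construction
    vals, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]          # top-k directions; embed via X @ P
```

    In this sketch, labels spread along the graph so that an unlabeled sample near a labeled one inherits its soft labels, and the projection then keeps the feature directions most aligned with the (enlarged and denoised) label kernel.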

    Semi supervised weighted maximum variance dimensionality reduction

    In recent years, huge amounts of data have become available that we want to classify with minimal human intervention. In some scenarios, only a few of the available features are useful, and dimensionality reduction methods play a major role in extracting them. The two-parameter weighted maximum variance (2P-WMV) method is a generalized dimensionality reduction method of which principal component analysis (PCA) and maximum margin criterion (MMC) are special cases. In this paper, we extend the 2P-WMV approach from our previous work to a semi-supervised version. The objective of this work is specifically to show how the two-parameter weighted maximum variance (2P-WMV) method performs in a semi-supervised environment in comparison to supervised learning. By making use of both labeled and unlabeled data, we present our method together with experimental results on several datasets using various approaches.
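    A generic weighted pairwise-scatter criterion illustrates how a weighted maximum variance objective can subsume PCA and MMC: uniform pair weights recover the total scatter that PCA diagonalizes, while down-weighting (or negating) within-class pairs pushes toward an MMC-style margin criterion. The particular weighting below, with hypothetical parameters `beta_within` and `beta_between`, is an illustrative assumption; the abstract does not specify the exact 2P-WMV weights:

```python
import numpy as np

def weighted_max_variance(X, y, beta_within=1.0, beta_between=1.0, k=2):
    """Project onto the top-k eigenvectors of a weighted pairwise scatter
    matrix S = sum_ij w_ij (x_i - x_j)(x_i - x_j)^T.

    y < 0 marks unlabeled samples; pairs that are not both labeled with
    the same class fall under the between/default weight. With
    beta_within == beta_between this reduces to (unnormalized) total
    scatter, i.e. PCA directions.
    """
    n, dim = X.shape
    S = np.zeros((dim, dim))
    for i in range(n):
        for j in range(n):
            d = (X[i] - X[j])[:, None]
            same_class = y[i] >= 0 and y[i] == y[j]
            w = beta_within if same_class else beta_between
            S += w * (d @ d.T)
    vals, vecs = np.linalg.eigh(S)   # ascending eigenvalues
    return vecs[:, ::-1][:, :k]      # top-k projection directions
```

    The O(n^2) pair loop is written for clarity; the same scatter matrix can be assembled from class means and covariances for larger datasets.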

    Semi-Supervised Discriminant Analysis Using Robust Path-Based Similarity

    Linear Discriminant Analysis (LDA), which works by maximizing the within-class similarity and minimizing the between-class similarity simultaneously, is a popular dimensionality reduction technique in pattern recognition and machine learning. In real-world applications where labeled data are limited, LDA does not work well. Under many situations, however, it is easy to obtain unlabeled data in large quantities. In this paper, we propose a novel dimensionality reduction method, called Semi-Supervised Discriminant Analysis (SSDA), which can utilize both labeled and unlabeled data to perform dimensionality reduction in the semi-supervised setting. Our method uses a robust path-based similarity measure to capture the manifold structure of the data and then uses the obtained similarity to maximize the separability between different classes. A kernel extension of the proposed method for nonlinear dimensionality reduction in the semi-supervised setting is also presented. Experiments on face recognition demonstrate the effectiveness of the proposed method.
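    A common robust path-based measure of the kind this abstract refers to is the minimax path distance: the distance between two points is the smallest possible "longest edge" over all paths connecting them, so points linked by a chain of short hops count as close even when their direct distance is large. The Floyd–Warshall-style sketch below is an illustrative assumption, not necessarily the paper's exact similarity:

```python
import numpy as np

def pathbased_distance(D):
    """Minimax (path-based) distance from a pairwise distance matrix D:
    d'(i, j) = min over paths p from i to j of the maximum edge on p.

    Computed with a Floyd-Warshall-style update in O(n^3); a similarity
    can then be obtained e.g. as exp(-d'^2 / (2 sigma^2)).
    """
    P = D.astype(float).copy()
    n = P.shape[0]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # A path through k is only as long as its worst edge
                P[i, j] = min(P[i, j], max(P[i, k], P[k, j]))
    return P
```

    On manifold-structured data this measure follows the data's chain structure, which is what lets a semi-supervised method exploit unlabeled points lying between labeled ones.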