14 research outputs found

    Supervised Feature Space Reduction for Multi-Label Nearest Neighbors

    With its ability to handle many real-world problems, multi-label classification has received considerable attention in recent years, and the instance-based ML-kNN classifier is today considered one of the most efficient. However, ML-kNN is sensitive to noisy and redundant features, and its performance degrades as data dimensionality increases. Dimensionality reduction is one way to overcome these problems, but current methods optimize reduction objectives that ignore their impact on the ML-kNN classification. We propose ML-ARP, a novel dimensionality reduction algorithm that uses a variable neighborhood search metaheuristic to learn a linear projection of the feature space which specifically minimizes the ML-kNN classification loss. Numerical comparisons confirm that ML-ARP outperforms both ML-kNN without data preprocessing and four standard multi-label dimensionality reduction algorithms.
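The core idea, directly optimizing the ML-kNN loss over a linear projection rather than a generic reduction objective, can be sketched as follows. This is a minimal hill-climbing illustration, not the paper's variable neighborhood search; the function names, perturbation scheme, and leave-one-out evaluation are all illustrative assumptions.

```python
import numpy as np

def knn_hamming_loss(P, X, Y, k=3):
    """Leave-one-out Hamming loss of a majority-vote k-NN
    multi-label classifier in the projected space X @ P."""
    Z = X @ P
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude each point itself
    nbrs = np.argsort(d2, axis=1)[:, :k]  # k nearest neighbours per sample
    votes = Y[nbrs].mean(axis=1) >= 0.5   # per-label majority vote
    return (votes != Y.astype(bool)).mean()

def learn_projection(X, Y, dim, iters=200, seed=0):
    """Sketch of ML-ARP's objective: perturb a random linear projection
    and keep any change that lowers the k-NN Hamming loss.  (The paper
    uses a variable neighborhood search; this single-neighbourhood
    local search only illustrates the loss being optimized.)"""
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(X.shape[1], dim))
    best = knn_hamming_loss(P, X, Y)
    for _ in range(iters):
        Q = P + 0.1 * rng.normal(size=P.shape)  # perturb current solution
        loss = knn_hamming_loss(Q, X, Y)
        if loss <= best:
            P, best = Q, loss
    return P, best
```

Because the search accepts only non-worsening moves, the returned loss is never above that of the initial random projection; a real VNS adds systematic neighborhood changes to escape local optima.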

    Two-Stage Fuzzy Multiple Kernel Learning Based on Hilbert-Schmidt Independence Criterion

    Multiple kernel learning (MKL) is a principled approach to kernel combination and selection for a variety of learning tasks, such as classification, clustering, and dimensionality reduction. In this paper, we develop a novel fuzzy multiple kernel learning model based on the Hilbert-Schmidt independence criterion (HSIC) for classification, which we call HSIC-FMKL. In this model, we first propose an HSIC-Lasso-based MKL formulation that has a clear statistical interpretation (minimally redundant kernels with maximum dependence on the output labels are found and combined) and that enables the globally optimal solution to be computed efficiently by solving a Lasso optimization problem. Since the traditional support vector machine (SVM) is sensitive to outliers and noise in the dataset, a fuzzy SVM (FSVM) is used to select the prediction hypothesis once the optimal kernel has been obtained. The main advantage of the FSVM is that a fuzzy membership can be associated with each data point, so that different data points have different effects on the training of the learning machine. We propose a new fuzzy membership function using a heuristic strategy based on the HSIC. The proposed HSIC-FMKL is thus a two-stage kernel learning approach in which the HSIC is applied at both stages. Extensive experiments on real-world datasets from the UCI benchmark repository and from the application domain of computational biology validate the superiority of the proposed model in terms of prediction accuracy.
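The first-stage idea, scoring candidate kernels by their HSIC dependence on the labels, can be sketched with the biased empirical HSIC estimator. This is a minimal illustration: the paper's HSIC-Lasso formulation additionally penalizes redundancy between kernels, and the RBF feature kernel and delta label kernel below are illustrative choices, not the authors' exact setup.

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC between two n x n Gram matrices:
    HSIC = trace(K H L H) / (n - 1)^2, H the centering matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def rbf_kernel(X, gamma):
    """Gaussian RBF Gram matrix exp(-gamma * ||x_i - x_j||^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)
```

A kernel built from a feature that determines the labels scores a much higher HSIC against the label kernel than a kernel built from noise, which is the signal the HSIC-Lasso stage exploits when deciding which kernels to combine.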

    A Fast Mutual-Information-Based Feature Selection Method for Multi-Label Learning

    To address the high time complexity of traditional heuristic-search-based multi-label feature selection algorithms, we propose a simple and fast multi-label feature selection method (EF-MLFS). First, mutual information (MI) is used to measure the relevance between each feature dimension and each label dimension; the resulting relevance values are then summed and ranked, and features are selected according to their total relevance. The proposed method is compared with six representative existing multi-label feature selection methods, such as the Max-Dependency and Min-Redundancy (MDMR) algorithm and the naive-Bayes-based multi-label feature selection (MLNB) method. Experimental results show that feature selection and classification with EF-MLFS achieve the best results on common multi-label evaluation metrics, including average precision, coverage, and Hamming loss. Since the method requires no global search, its time complexity is also significantly lower than that of MDMR, PMU, and similar methods. Supported by a grant from the Science and Technology Department of Guizhou Province (Qiankehe J [2011] No. 2215).
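Because EF-MLFS is just score-sum-rank, it fits in a few lines. The sketch below uses an empirical discrete MI estimate; the function names and top-k interface are illustrative assumptions, not the paper's code.

```python
from collections import Counter
from math import log

def mutual_info(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def ef_mlfs(features, labels, k):
    """EF-MLFS sketch: score each feature by the sum of its mutual
    information with every label, then keep the k highest-scoring
    features.  No global search is performed, which is the source of
    the method's low time complexity."""
    scores = [sum(mutual_info(f, l) for l in labels) for f in features]
    return sorted(range(len(features)), key=lambda i: -scores[i])[:k]
```

Scoring is O(features x labels x samples), versus the combinatorial search space explored by wrapper-style heuristics such as MDMR.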

    Noisy multi-label semi-supervised dimensionality reduction

    Noisy labeled data are a rich source of information that is often easily accessible and cheap to obtain, but label noise can also have many negative consequences if not accounted for. How to fully utilize noisy labels has been studied extensively within the framework of standard supervised machine learning over several decades. However, very little research has addressed the challenge posed by noisy labels in non-standard settings, including situations where only a fraction of the samples are labeled (semi-supervised) and each high-dimensional sample is associated with multiple labels. In this work, we present a novel semi-supervised, multi-label dimensionality reduction method that effectively utilizes information from both noisy multi-labels and unlabeled data. With the proposed noisy multi-label semi-supervised dimensionality reduction (NMLSDR) method, the noisy multi-labels are denoised and the unlabeled data are labeled simultaneously via a specially designed label propagation algorithm. NMLSDR then learns a projection matrix that reduces the dimensionality by maximizing the dependence between the enlarged, denoised multi-label space and the features in the projected space. Extensive experiments on synthetic data, benchmark datasets, and a real-world case study demonstrate the effectiveness of the proposed algorithm and show that it outperforms state-of-the-art multi-label feature extraction algorithms.
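The projection step, maximizing a dependence measure between projected features and the label space, can be sketched with a linear-kernel HSIC, where the optimum is given by top eigenvectors of a label-weighted covariance matrix. This is a minimal sketch under that linear-kernel assumption: the label-propagation denoising stage is omitted, and `hsic_projection` is an illustrative name, not the paper's API.

```python
import numpy as np

def hsic_projection(X, Y, dim):
    """Find W maximizing the linear-kernel HSIC between the projected
    features X @ W and the label space: the top eigenvectors of
    X^T H (Y Y^T) H X, with H the centering matrix.  (NMLSDR feeds in
    an enlarged, denoised label matrix Y produced by label propagation
    before this step.)"""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    M = X.T @ H @ (Y @ Y.T) @ H @ X   # symmetric positive semidefinite
    vals, vecs = np.linalg.eigh(M)    # eigenvalues in ascending order
    return vecs[:, ::-1][:, :dim]     # top-`dim` eigenvectors
```

When a single label is driven by one feature, the leading eigenvector loads almost entirely on that feature, which is exactly the dependence-maximizing behavior the method relies on.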