
    Semi-Supervised learning with Collaborative Bagged Multi-label K-Nearest-Neighbors

    Over the last few years, multi-label classification has received significant attention from researchers as a way to solve problems in many fields. The manual annotation of available datasets is time-consuming and requires considerable effort from experts, especially in multi-label applications where each training example is associated with several labels at once. To overcome this drawback and to take advantage of the large amounts of unlabeled data, many semi-supervised approaches have been proposed in the literature to provide more sophisticated and faster support for the automatic labeling of unlabeled data. In this paper, a Collaborative Bagged Multi-label K-Nearest-Neighbors (CobMLKNN) algorithm is proposed, which extends the co-training paradigm with a Multi-label K-Nearest-Neighbors (MLKNN) algorithm. Experiments on ten real-world multi-label datasets show the effectiveness of CobMLKNN in improving the performance of MLKNN when learning from a small number of labeled samples by exploiting unlabeled samples.
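
    Since the full text is not available, the sketch below is only an assumption-based illustration of the general idea the abstract describes: a co-training-style self-labeling loop in which bagged multi-label k-NN models pseudo-label the unlabeled samples they are most confident about. It uses scikit-learn's KNeighborsClassifier (which accepts multi-label indicator targets) as a stand-in for MLKNN; the helper names (bagged_knn, bag_proba), the confidence heuristic, and all parameters are hypothetical choices, not the authors' exact CobMLKNN procedure.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)

# Synthetic multi-label data: a small labeled pool and a larger unlabeled pool.
X, Y = make_multilabel_classification(n_samples=600, n_features=20,
                                      n_classes=5, random_state=0)
idx = rng.permutation(len(X))
lab_idx, unlab_idx = idx[:60], idx[60:]
X_l, Y_l, X_u = X[lab_idx], Y[lab_idx], X[unlab_idx]

def bagged_knn(X, Y, n_bags=5, k=7):
    """Train several multi-label k-NN models on bootstrap samples (bagging)."""
    models = []
    for _ in range(n_bags):
        boot = rng.choice(len(X), size=len(X), replace=True)
        models.append(KNeighborsClassifier(n_neighbors=k).fit(X[boot], Y[boot]))
    return models

def bag_proba(models, X):
    """Average the per-label probability of the positive class over the bag."""
    stacked = []
    for m in models:
        cols = []
        for classes, proba in zip(m.classes_, m.predict_proba(X)):
            # If a bootstrap sample saw only one class for some label,
            # fall back to a constant probability for that label.
            if 1 in classes:
                cols.append(proba[:, list(classes).index(1)])
            else:
                cols.append(np.zeros(len(X)))
        stacked.append(np.column_stack(cols))
    return np.mean(stacked, axis=0)

# Self-labeling rounds: the bagged ensemble pseudo-labels the unlabeled
# samples it is most confident about and moves them into the labeled pool.
for _ in range(3):
    if len(X_u) == 0:
        break
    P = bag_proba(bagged_knn(X_l, Y_l), X_u)
    confidence = np.abs(P - 0.5).mean(axis=1)  # distance from the decision boundary
    top = np.argsort(confidence)[-50:]         # most confident unlabeled samples
    X_l = np.vstack([X_l, X_u[top]])
    Y_l = np.vstack([Y_l, (P[top] >= 0.5).astype(int)])
    X_u = np.delete(X_u, top, axis=0)

print("labeled pool grew to", len(X_l), "samples")
```

    The hedged design choice here is the confidence measure: averaging how far the bagged probabilities sit from the 0.5 threshold is one simple way to pick which unlabeled samples to pseudo-label first; the paper's collaborative scheme between views may differ.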