
    Improving Performance of Classifiers using Rotational Feature Selection Scheme

    The crucial points in machine learning research are how to develop new classification methods with a strong mathematical foundation and how to improve the performance of existing methods. Over the past few decades, researchers have been working on both issues. Here, we address the second point by improving the performance of well-known supervised classifiers such as Naive Bayes, Decision Tree, and k-Nearest Neighbor. For this purpose, a recently developed rotational feature selection scheme is applied before the classification task. It splits the training data set into a number of non-overlapping rotational subsets. Principal component analysis is then applied to each subset, and all principal components are retained to create an informative set that preserves the diversity of the original training data. This informative set is used to train and test the classifiers, and the posterior probability is computed to obtain the classification results. The effectiveness of the rotational-feature-selection-integrated classifiers is demonstrated quantitatively by comparison with the aforementioned classifiers on 10 real-life data sets. Finally, a statistical test has been conducted to show the superiority of the results.
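    The transformation described above can be sketched as follows. This is a minimal illustration under assumptions not spelled out in the abstract: the features are partitioned at random into non-overlapping subsets, PCA is computed per subset via the eigendecomposition of the covariance matrix, and every component is retained before the rotated subsets are concatenated. Function and parameter names (`rotational_feature_transform`, `n_subsets`) are hypothetical.

    ```python
    import numpy as np

    def rotational_feature_transform(X, n_subsets=3, seed=0):
        """Sketch of the rotational feature selection idea: partition the
        feature indices into non-overlapping subsets, run PCA on each
        subset, keep ALL principal components, and concatenate the rotated
        subsets into one informative feature set."""
        rng = np.random.default_rng(seed)
        perm = rng.permutation(X.shape[1])         # randomize feature order
        subsets = np.array_split(perm, n_subsets)  # non-overlapping subsets
        rotated = []
        for idx in subsets:
            Xc = X[:, idx] - X[:, idx].mean(axis=0)   # center before PCA
            cov = np.atleast_2d(np.cov(Xc, rowvar=False))
            # eigenvectors of the covariance matrix are the principal axes
            _, vecs = np.linalg.eigh(cov)
            rotated.append(Xc @ vecs)                 # keep every component
        return np.hstack(rotated)  # same dimensionality as the input

    X = np.random.default_rng(1).normal(size=(100, 9))
    Z = rotational_feature_transform(X, n_subsets=3)
    print(Z.shape)  # (100, 9)
    ```

    Because each subset undergoes an orthogonal rotation and no components are dropped, the output keeps the full dimensionality of the original data; any downstream classifier (e.g. Naive Bayes or k-NN) is then trained on `Z` instead of `X`.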