Average classification results with 4 classifiers on 10 datasets.
Overall framework diagram of the algorithm.
Feature selection has long been a focal point of research in many fields. Recent studies have applied random multi-subspace methods to extract more information from raw samples; however, this approach does not adequately address the adverse effects of feature collinearity in high-dimensional datasets. To improve the limited ability of traditional algorithms to extract useful information from raw samples, while accounting for feature collinearity during random-subspace learning, we group features with a clustering approach based on correlation measures and then construct subspaces with lower inter-feature correlation. When integrating the feature weights obtained from all feature subspaces, we introduce a weighting factor to better balance the contributions of the different subspaces. We evaluate the proposed algorithm, denoted KNCFS, on ten real datasets and four synthetic datasets, comparing it with six other feature selection algorithms. Experimental results show that KNCFS effectively identifies relevant features and exhibits robust feature selection performance, making it well suited to feature selection problems in practice.
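To make the pipeline described in the abstract concrete, the Python sketch below illustrates the general idea: correlation-based feature clustering, subspaces with low inter-feature correlation, and weighted aggregation of per-subspace feature weights. It is a minimal sketch, not the authors' KNCFS implementation: the use of K-means on a correlation-distance matrix, mutual information as the per-subspace weight estimator, KNN cross-validation accuracy as the weighting factor, and the function name kncfs_like_feature_weights are all assumptions made here for illustration.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def kncfs_like_feature_weights(X, y, n_clusters=10, n_subspaces=20, seed=0):
        """Score features by aggregating weights from low-correlation subspaces."""
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]

        # 1. Group features by their correlation profiles: highly correlated
        #    features end up in the same cluster.
        corr = np.nan_to_num(np.corrcoef(X, rowvar=False))
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(1.0 - np.abs(corr))

        weights = np.zeros(n_features)
        total = 0.0
        for _ in range(n_subspaces):
            # 2. Build a subspace with low inter-feature correlation by drawing
            #    one random feature from each cluster.
            subspace = np.array([rng.choice(np.flatnonzero(labels == c))
                                 for c in range(n_clusters)])
            Xs = X[:, subspace]

            # 3. Per-subspace feature weights (mutual information is used here
            #    as a stand-in for the paper's weight estimator).
            w = mutual_info_classif(Xs, y, random_state=seed)

            # 4. Weighting factor for this subspace: its 3-fold KNN
            #    cross-validation accuracy (an assumption, not the paper's choice).
            factor = cross_val_score(KNeighborsClassifier(n_neighbors=3),
                                     Xs, y, cv=3).mean()

            weights[subspace] += factor * w
            total += factor

        return weights / max(total, 1e-12)

Features with larger aggregated weights would then be ranked first when selecting a subset for the downstream classifier.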
Classification results with SVM on 10 datasets.
ACC of 10-fold cross-validation on 10 datasets (Mean ± std).
Success rate of feature selection on synthetic datasets.
Convergence curves of K-means.
The accuracy with the parameters s, M, k on the leukaemia dataset for KNN and SVM classification.
F1-score of 10-fold cross-validation on 10 datasets (Mean ± std).
The accuracy with the parameters s, M, k on the GLIOMA dataset for KNN and SVM classification.