45,403 research outputs found

    Mixed variable ant colony optimization technique for feature subset selection and model selection

    This paper presents the integration of Mixed Variable Ant Colony Optimization and the Support Vector Machine (SVM) to enhance SVM performance by simultaneously tuning its parameters and selecting a small number of features. The selection of a suitable feature subset and the optimization of the SVM parameters must occur simultaneously, because these processes affect each other, which in turn affects SVM performance; handling them separately can produce unacceptable classification accuracy. Five datasets from UCI were used to evaluate the proposed algorithm. Results showed that the proposed algorithm can enhance classification accuracy with a small feature subset.
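    As an illustration of the joint search space the paper describes (continuous SVM parameters plus a binary feature mask), the following minimal Python sketch scores random candidates by cross-validated SVM accuracy. A plain random search and a bundled scikit-learn dataset are stand-ins here, not the paper's Mixed Variable ACO or its UCI datasets.

        # Sketch only: joint search over SVM hyperparameters and a feature mask.
        # A plain random search stands in for the paper's Mixed Variable ACO; it
        # merely illustrates the mixed (continuous + binary) search space.
        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)    # illustrative stand-in dataset
        rng = np.random.default_rng(0)

        def evaluate(mask, C, gamma):
            """Cross-validated accuracy of an RBF SVM on the selected features."""
            if not mask.any():
                return 0.0
            model = make_pipeline(StandardScaler(), SVC(C=C, gamma=gamma))
            return cross_val_score(model, X[:, mask], y, cv=5).mean()

        best_score, best_cfg = 0.0, None
        for _ in range(50):                           # candidate solutions
            mask = rng.random(X.shape[1]) < 0.3       # binary variables: feature subset
            C = 10 ** rng.uniform(-1, 3)              # continuous variables: SVM parameters
            gamma = 10 ** rng.uniform(-4, 0)
            score = evaluate(mask, C, gamma)
            if score > best_score:
                best_score, best_cfg = score, (mask, C, gamma)

        mask, C, gamma = best_cfg
        print(f"best CV accuracy {best_score:.4f} with {mask.sum()} features, "
              f"C={C:.3g}, gamma={gamma:.3g}")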

    Performance Analysis of Feature Selection Techniques for Support Vector Machine and its Application for Lung Nodule Detection

    Lung cancer typically exhibits its presence with the formation of pulmonary nodules. Computer Aided Detection (CAD) of such nodules in CT scans would be of valuable help in lung cancer screening. A typical CAD system comprises a candidate detector and a feature-based classifier. In this research, we study and explore the performance of the Support Vector Machine (SVM) based on a large set of features, examining its performance as a function of the number of features. Our results indicate that SVM is more robust and computationally faster with a large set of features, and less prone to over-training, when compared to traditional classifiers. In addition, we present a computationally efficient approach for selecting features for SVM. Results are reported on the publicly available Lung Nodule Analysis 2016 dataset. Our results, based on 10-fold cross-validation, indicate that the SVM-based classification method outperforms the Fisher linear discriminant classifier by 14.8%.
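    A minimal sketch of the kind of experiment the abstract describes: SVM cross-validated accuracy as the number of selected features grows, compared against a Fisher linear discriminant baseline. ANOVA F-scores and a synthetic dataset are assumed stand-ins here, not the paper's feature-selection method or the LUNA16 data.

        # Sketch: SVM accuracy as a function of the number of selected features.
        # ANOVA F-score ranking and synthetic data stand in for the paper's setup.
        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=1000, n_features=100,
                                   n_informative=20, random_state=0)

        for k in (5, 10, 20, 50, 100):
            svm = make_pipeline(StandardScaler(),
                                SelectKBest(f_classif, k=k), SVC())
            acc = cross_val_score(svm, X, y, cv=10).mean()
            print(f"SVM with top {k:3d} features: {acc:.3f}")

        # Fisher linear discriminant baseline on all features
        fld = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
        print("Fisher LDA, all features:", cross_val_score(fld, X, y, cv=10).mean())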

    Face Recognition using R-KDA with Non-Linear SVM for Multi-View Database

    This paper develops a new face recognition system which combines R-KDA for selecting optimal discriminant features with a non-linear SVM for recognition. Experimental results show the enhanced efficiency of our proposed system compared with R-KDA using k-NN as the similarity distance measure.
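    A rough sketch of the pipeline shape only: kernel features, then a discriminant projection, then a non-linear SVM. scikit-learn has no R-KDA, so a Nystroem kernel map plus linear discriminant analysis is assumed as a stand-in, and the Olivetti faces stand in for the multi-view database.

        # Sketch of the pipeline shape only; Nystroem + LDA is a rough stand-in
        # for R-KDA, and the Olivetti faces for the paper's multi-view database.
        from sklearn.datasets import fetch_olivetti_faces
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.kernel_approximation import Nystroem
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        faces = fetch_olivetti_faces()
        kda_like = make_pipeline(
            Nystroem(gamma=0.01, n_components=200, random_state=0),  # kernel features
            LinearDiscriminantAnalysis(),                            # discriminant projection
            SVC(kernel="rbf", C=10),                                 # non-linear SVM
        )
        print(cross_val_score(kda_like, faces.data, faces.target, cv=5).mean())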

    Applying Hunger Game Search (HGS) for Selecting Significant Blood Indicators for Early Prediction of ICU COVID-19 Severity

    Millions of people around the world have been affected, and some have died, during the global COVID-19 pandemic. This pandemic has created a global threat to people's lives and to medical systems. The constraints on hospital resources and the pressure on healthcare workers during this period are among the reasons for wrong decisions and medical deterioration. Anticipating which patients will become severe is therefore urgent: it allows resource consumption to be managed by prioritizing high-risk patients to save their lives. This paper introduces an early prognostic model that predicts patient severity and detects the most significant features based on clinical blood data. The proposed model predicts ICU severity within the first 2 hours of hospital admission, seeks to assist clinicians in decision-making, and facilitates efficient use of hospital resources. The Hunger Game Search (HGS) meta-heuristic algorithm and the SVM are hybridized to build the proposed prediction model, and they are also used to select the most informative features from the blood test data. Experiments have shown that using HGS to select the features, with the SVM as classifier, achieved excellent results compared with four other meta-heuristic algorithms. The model using the features selected by the HGS algorithm accomplished the topmost results, 98.6% and 96.5% for the best and mean accuracy respectively, compared with using all features and with features selected by other popular optimization algorithms.
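    A minimal sketch of the wrapper idea only: a binary feature mask is scored by SVM cross-validation accuracy and improved iteratively. A simple bit-flip hill climber and a synthetic table are assumed stand-ins for the Hunger Game Search optimizer and the clinical blood data.

        # Sketch of wrapper feature selection: SVM cross-val accuracy is the
        # fitness of a feature mask. A bit-flip hill climber stands in for HGS.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=400, n_features=30,
                                   n_informative=8, random_state=1)
        rng = np.random.default_rng(1)

        def fitness(mask):
            if not mask.any():
                return 0.0
            clf = make_pipeline(StandardScaler(), SVC())
            return cross_val_score(clf, X[:, mask], y, cv=5).mean()

        mask = rng.random(X.shape[1]) < 0.5
        score = fitness(mask)
        for _ in range(100):                        # optimizer iterations
            cand = mask.copy()
            cand[rng.integers(X.shape[1])] ^= True  # flip one feature in or out
            cand_score = fitness(cand)
            if cand_score >= score:                 # keep non-worsening moves
                mask, score = cand, cand_score

        print("selected", int(mask.sum()), "features, CV accuracy", round(score, 3))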

    Dissimilarity-based Ensembles for Multiple Instance Learning

    In multiple instance learning, objects are sets (bags) of feature vectors (instances) rather than individual feature vectors. In this paper we address the problem of how these bags can best be represented. Two standard approaches are to use (dis)similarities between bags and prototype bags, or between bags and prototype instances. The first approach results in a relatively low-dimensional representation determined by the number of training bags, while the second approach results in a relatively high-dimensional representation, determined by the total number of instances in the training set. In this paper a third, intermediate approach is proposed, which links the two approaches and combines their strengths. Our classifier is inspired by a random subspace ensemble, and considers subspaces of the dissimilarity space, defined by subsets of instances, as prototypes. We provide guidelines for using such an ensemble, and show state-of-the-art performances on a range of multiple instance learning problems. Comment: Submitted to IEEE Transactions on Neural Networks and Learning Systems, Special Issue on Learning in Non-(geo)metric Space.
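    A minimal sketch of the dissimilarity-space construction under simplified assumptions: each bag is represented by its minimum distances to a pool of prototype instances, and an ensemble is trained on random subsets of those instances, i.e. random subspaces of the dissimilarity space. Synthetic bags are used; this is not the authors' code or their benchmark data.

        # Sketch: bag-level dissimilarity representation + random-subspace ensemble.
        import numpy as np
        from scipy.spatial.distance import cdist
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        def make_bag(label, n=10):
            """A bag is a set of 2-D instances; positive bags get a shifted cluster."""
            inst = rng.normal(size=(n, 2))
            if label == 1:
                inst[: n // 2] += 3.0
            return inst

        bags = [make_bag(l) for l in ([0] * 50 + [1] * 50)]
        labels = np.array([0] * 50 + [1] * 50)

        # Prototype pool: all instances from a few bags of each class.
        prototypes = np.vstack(bags[:10] + bags[50:60])

        def represent(bag, proto):
            """Dissimilarity vector: min distance from the bag to each prototype instance."""
            return cdist(bag, proto).min(axis=0)

        D = np.array([represent(b, prototypes) for b in bags])

        # Random-subspace ensemble: each member sees a random subset of instances.
        members, subsets = [], []
        for _ in range(15):
            idx = rng.choice(D.shape[1], size=30, replace=False)
            members.append(LogisticRegression(max_iter=1000).fit(D[:, idx], labels))
            subsets.append(idx)

        votes = np.mean([m.predict(D[:, idx]) for m, idx in zip(members, subsets)], axis=0)
        print("training accuracy of the ensemble:", ((votes > 0.5) == labels).mean())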

    Two-stage hybrid feature selection algorithms for diagnosing erythemato-squamous diseases

    This paper proposes two-stage hybrid feature selection algorithms to build stable and efficient diagnostic models, and introduces a new accuracy measure to assess the models. The two-stage hybrid algorithms adopt Support Vector Machines (SVM) as the classification tool; the extended Sequential Forward Search (SFS), Sequential Forward Floating Search (SFFS), and Sequential Backward Floating Search (SBFS), respectively, as search strategies; and the generalized F-score (GF) to evaluate the importance of each feature. The new accuracy measure is used as the criterion to evaluate the performance of a temporary SVM and to direct the feature selection algorithms. These hybrid methods combine the advantages of filters and wrappers to select the optimal feature subset from the original feature set and build stable and efficient classifiers. To obtain stable and optimal classifiers, we conduct 10-fold cross-validation experiments in the first stage; we then merge the 10 feature subsets selected across the cross-validation experiments into a new full feature set for second-stage feature selection in each algorithm. In the second stage, each hybrid feature selection algorithm is repeated on the fold that obtained the best result in the first stage. Experimental results show that our proposed two-stage hybrid feature selection algorithms construct efficient diagnostic models with better accuracy than models built by the corresponding hybrid feature selection algorithms without the second-stage feature selection procedure. Furthermore, our methods achieve better classification accuracy than the available algorithms for diagnosing erythemato-squamous diseases.
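    A minimal sketch of the filter-plus-wrapper combination the abstract describes; ANOVA F-scores stand in for the generalized F-score, scikit-learn's SequentialFeatureSelector stands in for the extended SFS/SFFS/SBFS strategies, and a synthetic dataset stands in for the dermatology data.

        # Sketch: F-score filter followed by a forward wrapper driven by SVM
        # cross-val accuracy. Stand-ins, not the paper's GF-score or extended SFS.
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import (SelectKBest, SequentialFeatureSelector,
                                                f_classif)
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=366, n_features=34, n_informative=12,
                                   n_classes=6, n_clusters_per_class=1, random_state=0)

        model = make_pipeline(
            StandardScaler(),
            SelectKBest(f_classif, k=20),                   # filter step (F-score ranking)
            SequentialFeatureSelector(SVC(), n_features_to_select=10,
                                      direction="forward", cv=5),  # wrapper step
            SVC(),
        )
        print("10-fold CV accuracy:", cross_val_score(model, X, y, cv=10).mean())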