
    Extending twin support vector machine classifier for multi-category classification problems

    © 2013 IOS Press and the authors. All rights reserved. The twin support vector machine classifier (TWSVM), proposed by Jayadeva et al., was designed for binary classification problems. TWSVM not only handles exemplar imbalance in binary classification, but also trains a classifier roughly four times faster than the classical support vector machine. This paper proposes one-versus-all twin support vector machine classifiers (OVA-TWSVM) for multi-category classification problems by exploiting the strengths of TWSVM. OVA-TWSVM extends TWSVM to k-category classification by training k TWSVMs, where the ith TWSVM solves the Quadratic Programming Problems (QPPs) for the ith class and yields the ith nonparallel hyperplane corresponding to that class's data; the well-known one-versus-all (OVA) approach then combines them into a single multi-category classifier. We analyze the efficiency of OVA-TWSVM theoretically and test it on both synthetic data sets and several benchmark data sets from the UCI machine learning repository. Both the theoretical analysis and the experimental results demonstrate that OVA-TWSVM can outperform the traditional OVA-SVM classifier, and further comparisons with other multiclass classifiers show that comparable performance can be achieved. This work is supported in part by the Fundamental Research Funds for the Central Universities (GK201102007) in PR China, by the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2010JM3004), by the Chinese Academy of Sciences under the Innovative Group Overseas Partnership Grant, and by the Natural Science Foundation of China Major International Joint Research Project (No. 71110107026).
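    The abstract describes the construction concretely enough to sketch: one hyperplane per class, fit against all remaining samples, with prediction by nearest hyperplane. The sketch below uses a least-squares TWSVM subproblem (solvable in closed form) in place of the dual QPPs the paper actually solves, so it illustrates the OVA structure rather than the exact method; the parameter `c` and the function names are placeholders.

```python
# Illustrative OVA-TWSVM sketch: one nonparallel hyperplane per class.
# NOTE: uses a least-squares TWSVM subproblem (closed form) instead of
# the paper's dual QPPs; c and eps are assumed hyperparameters.
import numpy as np

def fit_ova_twsvm(X, y, c=1.0, eps=1e-6):
    planes = {}
    for label in np.unique(y):
        A = X[y == label]                          # class i samples
        B = X[y != label]                          # all remaining samples
        H = np.hstack([A, np.ones((len(A), 1))])   # [A e]
        G = np.hstack([B, np.ones((len(B), 1))])   # [B e]
        # min (1/2)||H z||^2 + (c/2)||G z + e||^2 keeps class i on the
        # plane and pushes the rest to the far side of it
        M = H.T @ H + c * G.T @ G + eps * np.eye(H.shape[1])
        z = np.linalg.solve(M, -c * G.T @ np.ones(len(B)))
        planes[label] = (z[:-1], z[-1])            # (w_i, b_i)
    return planes

def predict_ova_twsvm(planes, X):
    # assign each sample to the class whose hyperplane is nearest
    labels = list(planes)
    d = np.column_stack([np.abs(X @ w + b) / np.linalg.norm(w)
                         for w, b in planes.values()])
    return np.array(labels)[d.argmin(axis=1)]
```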

    Nonparallel support vector machines for pattern classification

    We propose a novel nonparallel classifier, called the nonparallel support vector machine (NPSVM), for binary classification. NPSVM differs fundamentally from existing nonparallel classifiers, such as the generalized eigenvalue proximal support vector machine (GEPSVM) and the twin support vector machine (TWSVM), and has several distinctive advantages: 1) its two primal problems implement the structural risk minimization principle; 2) the dual problems of these two primal problems have the same advantages as those of standard SVMs, so the kernel trick can be applied directly, whereas existing TWSVMs must construct two further primal problems for nonlinear cases based on approximate kernel-generated surfaces, and their nonlinear problems do not degenerate to the linear case even when the linear kernel is used; 3) the dual problems have the same elegant formulation as those of standard SVMs and can be solved efficiently by the sequential minimal optimization (SMO) algorithm, whereas existing GEPSVMs and TWSVMs are not suitable for large-scale problems; 4) it has the inherent sparseness of standard SVMs; 5) existing TWSVMs are special cases of NPSVM when its parameters are chosen appropriately. Experimental results on numerous datasets show the effectiveness of our method in both sparseness and classification accuracy, further confirming the conclusions above. In some sense, NPSVM is a new starting point for nonparallel classifiers.
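    Advantage 2) is the crux: because the NPSVM primal regularizes ||w||^2 directly, its dual depends on the data only through inner products, so the kernel trick applies without auxiliary surfaces. Below is a minimal sketch of one linear subproblem (the hyperplane for class A), written with cvxpy for readability; the epsilon-tube plus hinge structure follows one plausible reading of the abstract, C1, C2 and eps are assumed hyperparameters, and a production implementation would solve the dual with SMO as the abstract notes.

```python
# Sketch of one linear NPSVM subproblem (the hyperplane for class A).
# The eps-insensitive tube keeps class A near its plane, the hinge term
# pushes class B at least unit distance away, and the explicit ||w||^2
# term implements structural risk minimization (advantage 1).
# C1, C2, eps are assumed hyperparameters; cvxpy is used for clarity.
import cvxpy as cp
import numpy as np

def npsvm_plane(A, B, C1=1.0, C2=1.0, eps=0.1):
    n = A.shape[1]
    w, b = cp.Variable(n), cp.Variable()
    eta = cp.Variable(len(A), nonneg=True)     # tube slack (upper side)
    eta_s = cp.Variable(len(A), nonneg=True)   # tube slack (lower side)
    xi = cp.Variable(len(B), nonneg=True)      # hinge slack for class B
    obj = 0.5 * cp.sum_squares(w) + C1 * cp.sum(eta + eta_s) + C2 * cp.sum(xi)
    cons = [A @ w + b <= eps + eta,            # |A w + b| <= eps (soft)
            -(A @ w + b) <= eps + eta_s,
            B @ w + b <= -1 + xi]              # class B on the far side
    cp.Problem(cp.Minimize(obj), cons).solve()
    return w.value, b.value
```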

    Study on support vector machine as a classifier

    SVM [1], [2] is a learning method that treats data points as vectors in a feature space. We studied different types of Support Vector Machine (SVM) and examined their classification processes. We conducted 10-fold testing experiments on LSSVM [7], [8] (Least Squares Support Vector Machine) and PSVM [9] (Proximal Support Vector Machine) using standard data sets. Finally, we propose a new algorithm, NPSVM (Non-Parallel Support Vector Machine), reformulated from NPPC [12], [13] (Non-Parallel Plane Classifier). We observed that the cost function of NPPC is affected by the additional constraint imposed for Euclidean-distance classification, so we implicitly normalized the weight vectors instead of using that constraint, which yields a much better-behaved cost function. The computational complexity of NPSVM is evaluated for both linear and nonlinear kernels, and its 10-fold test results on standard data sets are compared with those of LSSVM and PSVM.
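    For reference, the 10-fold evaluation protocol used in such comparisons looks like the following, with scikit-learn's SVC standing in for the compared classifiers (LSSVM, PSVM and NPSVM have no stock scikit-learn implementation) and a UCI-style benchmark set as the data:

```python
# 10-fold evaluation sketch; SVC is a stand-in for LSSVM/PSVM/NPSVM.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)     # a standard benchmark set
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=10)     # 10-fold test accuracy
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```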

    A mathematical programming approach to SVM-based classification with label noise

    The authors acknowledge financial support from the Spanish Ministerio de Ciencia y Tecnologia, Agencia Estatal de Investigacion and Fondos Europeos de Desarrollo Regional (FEDER) via project PID2020-114594GB-C21. The authors also acknowledge partial support from projects FEDER-US-1256951, Junta de Andalucía P18-FR-1422, CEI-3-FQM331, and NetmeetData (Ayudas Fundación BBVA a equipos de investigación científica 2019). The first author was also supported by project P18-FR-2369 (Junta de Andalucía) and IMAG-Maria de Maeztu grant CEX2020-001105-M/AEI/10.13039/501100011033 (Spanish Ministerio de Ciencia y Tecnologia). In this paper we propose novel methodologies for optimally constructing Support Vector Machine-based classifiers that account for label noise in the training sample. We propose alternatives based on solving Mixed Integer Linear and Nonlinear models that incorporate decisions on relabeling some of the observations in the training dataset. The first method incorporates relabeling directly into the SVM model, while a second family of methods combines clustering and classification, giving rise to a model that applies similarity measures and SVM simultaneously. Extensive computational experiments on a battery of standard datasets from the UCI Machine Learning repository show the effectiveness of the proposed approaches.
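    The exact models are mixed-integer programs and require a MIP solver; purely as an illustration of the relabeling decision they embed, the heuristic below alternates between fitting a linear SVM and flipping the training labels the current model contradicts most strongly. It is a loose approximation, not the authors' formulation; the flip threshold and counts are arbitrary placeholders.

```python
# Heuristic illustration of SVM training with relabeling decisions.
# NOT the paper's mixed-integer model: labels (in {-1, +1}) are flipped
# greedily instead of being chosen by an exact MILP/MINLP solver.
import numpy as np
from sklearn.svm import LinearSVC

def relabel_then_fit(X, y, max_flips=5, rounds=3):
    y = y.copy()
    clf = LinearSVC(C=1.0, dual=False).fit(X, y)
    for _ in range(rounds):
        margins = y * clf.decision_function(X)    # negative = misclassified
        worst = np.argsort(margins)[:max_flips]
        flip = worst[margins[worst] < -1.0]       # only confident mistakes
        if len(flip) == 0:
            break
        y[flip] = -y[flip]                        # relabel, then refit
        clf = LinearSVC(C=1.0, dual=False).fit(X, y)
    return clf, y
```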

    A New Approach for Clustered MCs Classification with Sparse Features Learning and TWSVM

    In digital mammograms, the presence of microcalcification clusters (MCs) is an early sign of breast cancer, so detecting them reliably is very important for early diagnosis. In this paper, a new approach is proposed to detect and classify MCs. We formulate the task as sparse-feature-learning-based classification: each test sample is represented over a set of training samples that act as a "vocabulary" of visual parts. An information-rich vocabulary of training samples is built manually from samples containing both MC and non-MC parts. Using the ground truth of MCs in the mammograms, the sparse features are obtained by solving an ℓp-regularized least-squares problem with an interior-point method. We then design a sparse-feature-based MCs classification algorithm using twin support vector machines (TWSVMs). To assess its performance, the proposed method is applied to the DDSM dataset and compared with support vector machines (SVMs) on the same data. Experiments show that the proposed method performs as well as or better than state-of-the-art methods.
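    The sparse-feature step admits a compact sketch: each patch is coded as a sparse combination of dictionary atoms by ℓ1-regularized least squares, here with scikit-learn's Lasso standing in for the interior-point solver named in the abstract (the dictionary layout and alpha value are assumptions). The resulting sparse codes would then be fed to the TWSVM classifier.

```python
# Sparse coding sketch: represent a test patch over a dictionary of
# training patches via l1-regularized least squares. Lasso stands in
# for the interior-point method; alpha is an assumed hyperparameter.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_code(D, x, alpha=0.01):
    """D: (n_pixels, n_atoms) dictionary of MC / non-MC patches;
    x: (n_pixels,) test patch; returns sparse coefficients over D."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000)
    lasso.fit(D, x)        # solves min ||D a - x||^2/(2n) + alpha * ||a||_1
    return lasso.coef_     # mostly zeros; active atoms explain the patch
```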

    Intelligent classification algorithms in enhancing the performance of support vector machine

    Performing feature-subset selection and support vector machine (SVM) parameter tuning in parallel, with the aim of increasing classification accuracy, is a current research direction in SVM. Common methods for tuning SVM parameters discretize their continuous values, which degrades classification performance. This paper presents two intelligent algorithms that hybridize ant colony optimization (ACO) and SVM to tune SVM parameters and select the feature subset without discretizing the continuous values, by executing feature-subset selection and parameter tuning simultaneously. The algorithms are called ACOMV-SVM and IACOMV-SVM; they differ in the size of the solution archive, which is fixed in ACOMV but grows as the optimization procedure progresses in IACOMV. Eight benchmark datasets from UCI were used in the experiments to validate the proposed algorithms. The experimental results are better than those of other approaches in terms of classification accuracy: the average classification accuracies of ACOMV-SVM and IACOMV-SVM are 97.28% and 97.91%, respectively. The work also contributes a new direction for ACO in dealing with mixed-variable optimization problems.
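    The core ACOMV idea, as described, is to sample continuous parameter values from an archive of good solutions instead of from a discretized grid. A much-simplified sketch follows: it tunes only C and gamma of an RBF SVM, omits the feature-subset bits and IACOMV's archive growth, and the search ranges, archive size and ant count are assumptions.

```python
# Simplified archive-based (ACOMV-style) tuning of SVM hyperparameters.
# Omits feature selection and IACOMV's growing archive; search ranges,
# archive size and ant count are assumptions for illustration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def _score(X, y, sol):
    clf = SVC(C=2.0 ** sol[0], gamma=2.0 ** sol[1])
    return cross_val_score(clf, X, y, cv=5).mean()

def tune_svm_archive(X, y, iters=30, archive_size=10, ants=5, seed=0):
    rng = np.random.default_rng(seed)
    # archive entries: ([log2 C, log2 gamma], cross-validated accuracy)
    archive = [(s, _score(X, y, s))
               for s in rng.uniform([-5, -15], [15, 3], (archive_size, 2))]
    for _ in range(iters):
        sols = np.array([s for s, _ in archive])
        sigma = sols.std(axis=0) + 1e-3          # shrinks as archive converges
        for _ in range(ants):
            centre = sols[rng.integers(len(sols))]  # pick an archive member
            cand = rng.normal(centre, sigma)        # Gaussian sampling around it
            archive.append((cand, _score(X, y, cand)))
        archive.sort(key=lambda t: -t[1])
        archive = archive[:archive_size]            # fixed-size archive (ACOMV)
    return archive[0]                               # best (params, accuracy)
```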

    Majorization-Minimization for sparse SVMs

    Several decades ago, Support Vector Machines (SVMs) were introduced for performing binary classification tasks under a supervised framework. They still often outperform other supervised methods and remain among the most popular approaches in machine learning. In this work, we investigate the training of SVMs through a smooth, sparse-promoting, regularized squared hinge loss minimization. This choice paves the way for fast training methods built on majorization-minimization approaches, which benefit from the Lipschitz differentiability of the loss function. Moreover, the proposed approach can handle sparsity-preserving regularizers that promote the selection of the most significant features, thus enhancing performance. Numerical tests and comparisons on three different datasets demonstrate the good performance of the proposed methodology in terms of qualitative metrics (accuracy, precision, recall, and F1 score) as well as computational cost.
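    To make the MM idea concrete: at each iteration the Lipschitz-smooth squared hinge loss can be majorized by a quadratic at the current iterate, and a smooth l1 surrogate such as sqrt(w^2 + eps) by its own quadratic upper bound, so each MM step has a closed form. The sketch below illustrates this principle under those assumptions; it is not the paper's algorithm, and lam, eps, and the iteration count are placeholders.

```python
# MM sketch for a sparse squared-hinge SVM:
#   min_w  sum_i max(0, 1 - y_i x_i.w)^2 + lam * sum_j sqrt(w_j^2 + eps)
# Each step majorizes the loss by a quadratic (Lipschitz gradient) and
# sqrt(w^2 + eps) by a weighted quadratic, yielding a closed-form update.
import numpy as np

def mm_sparse_svm(X, y, lam=0.1, eps=1e-4, iters=200):
    """y in {-1, +1}; returns an (approximately) sparse weight vector."""
    w = np.zeros(X.shape[1])
    L = 2.0 * np.linalg.norm(X, 2) ** 2          # Lipschitz const. of loss grad
    for _ in range(iters):
        m = 1.0 - y * (X @ w)                    # margins
        g = -2.0 * X.T @ (y * np.maximum(m, 0.0))   # squared-hinge gradient
        d = np.sqrt(w ** 2 + eps)                # majorizer weights for the reg.
        # argmin_w  g.w + (L/2)||w - w_t||^2 + (lam/2) * sum_j w_j^2 / d_j
        w = (L * w - g) / (L + lam / d)
    return w
```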