
    Data selection based on decision tree for SVM classification on large data sets

    Support Vector Machines (SVMs) have important properties such as a strong mathematical foundation and better generalization capability than many other classification methods. On the other hand, the major drawback of the SVM lies in its training phase, which is computationally expensive and highly dependent on the size of the input data set. In this study, a new algorithm to speed up SVM training is presented; the method selects a small, representative subset of the data to reduce the training time of the SVM. The novel method uses an induction tree to reduce the training data set, producing a very fast and high-accuracy algorithm. According to the results, the proposed algorithm reaches accuracy similar to that of current SVM implementations while training faster. (Proyecto UAEM 3771/2014/C.)
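    The abstract does not spell out the selection heuristic; a minimal sketch of the general idea, assuming scikit-learn and treating "impure tree leaves mark the class boundary" as the selection rule (tree depth, subset sizes, and the data are illustrative choices, not the paper's):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two Gaussian blobs as a stand-in for a large data set.
X = np.vstack([rng.normal(-1.5, 1, (500, 2)), rng.normal(1.5, 1, (500, 2))])
y = np.r_[np.zeros(500), np.ones(500)]

# Induction tree: impure leaves mark regions near the class boundary.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
leaf = tree.apply(X)                    # leaf id of every sample
impurity = tree.tree_.impurity[leaf]    # Gini impurity of that sample's leaf
boundary = impurity > 0.0               # samples falling in mixed leaves

# Keep all boundary samples plus a small random sample of the rest.
rest = np.flatnonzero(~boundary)
keep = np.r_[np.flatnonzero(boundary),
             rng.choice(rest, size=min(50, rest.size), replace=False)]

svm_small = SVC(kernel="rbf").fit(X[keep], y[keep])
print(len(keep), svm_small.score(X, y))  # far fewer samples, similar accuracy
```

    The SVM then trains on `keep` rather than the full set, which is where the speed-up comes from.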

    Sequential support vector classifiers and regression

    Support Vector Machines (SVMs) map the input training data into a high-dimensional feature space and find a maximal-margin hyperplane separating the data in that feature space. Extensions of this approach account for non-separable or noisy training data (soft classifiers) as well as support-vector-based regression. The optimal hyperplane is usually found by solving a quadratic programming problem, which is quite complex, time consuming, and prone to numerical instabilities. In this work, we introduce a sequential gradient-ascent-based algorithm for a fast and simple implementation of the SVM for classification with soft classifiers. The fundamental idea is similar to applying the Adatron algorithm to the SVM, as developed independently in the Kernel-Adatron [7], although the details differ in many respects. We modify the formulation of the bias and consider a modified dual optimization problem. This formulation makes it possible to extend the framework to solving SVM regression in an online setting. The paper presents theoretical justifications for the algorithm, which is shown to converge robustly to the optimal solution, is very fast in terms of the number of iterations, is orders of magnitude faster than conventional SVM solutions, and is extremely simple to implement even for large problems. Experimental evaluations on benchmark classification problems (sonar data and the USPS and MNIST databases) substantiate the speed and robustness of the learning procedure.
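    The paper's modified bias handling and regression extension are not reproduced here; a minimal sketch of the underlying sequential, Adatron-style gradient-ascent update for a soft-margin classifier (the learning rate `eta`, box constraint `C`, and toy data are illustrative):

```python
import numpy as np

def kernel_adatron(K, y, C=10.0, eta=0.01, n_iter=500):
    """Sequential gradient ascent on the SVM dual (bias term omitted).

    K: (n, n) kernel matrix; y: labels in {-1, +1}.
    Each pass nudges alpha_i toward margin 1 and clips it to [0, C].
    """
    alpha = np.zeros(len(y))
    for _ in range(n_iter):
        for i in range(len(y)):
            margin = y[i] * np.dot(alpha * y, K[i])
            alpha[i] = np.clip(alpha[i] + eta * (1.0 - margin), 0.0, C)
    return alpha

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
K = X @ X.T                      # linear kernel
alpha = kernel_adatron(K, y)
pred = np.sign(K @ (alpha * y))  # decision values on the training set
print((pred == y).mean())
```

    Stability requires `eta` below roughly `2 / max_i K[i, i]`, which the value above satisfies for this data.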

    Learning to Select Pre-Trained Deep Representations with Bayesian Evidence Framework

    We propose a Bayesian evidence framework to facilitate transfer learning from pre-trained deep convolutional neural networks (CNNs). Our framework is formulated on top of a least-squares SVM (LS-SVM) classifier, which is simple and fast in both training and testing and achieves competitive performance in practice. The regularization parameters of the LS-SVM are estimated automatically, without grid search or cross-validation, by maximizing the evidence, which is also a useful measure for selecting the best-performing CNN out of multiple candidates for transfer learning; the evidence is optimized efficiently by employing Aitken's delta-squared process, which accelerates the convergence of the fixed-point update. The proposed Bayesian evidence framework also provides a good solution for identifying the best ensemble of heterogeneous CNNs through a greedy algorithm. Our framework is tested on 12 visual recognition datasets and consistently achieves state-of-the-art performance in terms of prediction accuracy and modeling efficiency. (Comment: appearing in CVPR 2016, oral presentation.)
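    The evidence-maximization fixed point itself depends on the LS-SVM details, but Aitken's delta-squared process, which the paper uses to accelerate it, can be sketched on a generic fixed-point iteration x = g(x) (the example function `cos` is illustrative):

```python
import math

def aitken_fixed_point(g, x, tol=1e-10, max_iter=50):
    """Steffensen-style iteration: take two plain fixed-point steps,
    then extrapolate with Aitken's delta-squared formula."""
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-15:   # already converged; formula would divide by ~0
            return x2
        x_new = x - (x1 - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Fixed point of cos(x): plain iteration needs dozens of steps,
# the accelerated update converges in a handful.
root = aitken_fixed_point(math.cos, 1.0)
print(root)
```

    The same extrapolation applied to the evidence update turns a linearly convergent iteration into a much faster one near the solution.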

    Rolling bearing fault diagnosis by a novel fruit fly optimization algorithm optimized support vector machine

    Given the nonlinear and non-stationary characteristics of rotating-machinery vibration signals, a FOA-SVM model is established by combining the Fruit Fly Optimization Algorithm (FOA) with the Support Vector Machine (SVM) to optimize the SVM parameters. The mechanism of the model imitates the foraging behavior of fruit flies: the smell concentration judgment value of the forage is used to construct a suitable fitness function for searching out the optimal SVM parameters. By optimizing an analog rotating-machinery fault signal, the FOA is shown to converge quickly and accurately, with global search ability. To improve the classification accuracy rate, the FOA-SVM model is built and feature values are extracted for training and testing, so that the model can recognize a faulty rolling bearing and the degree of the fault. Analysis and diagnosis of actual signals prove the validity of the method, and the improved method has good prospects for application in rolling bearing diagnosis.
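    In the paper the fitness is the SVM's classification performance; a minimal sketch of the FOA search loop itself, with a hypothetical one-dimensional fitness standing in for the SVM evaluation (swarm size, ranges, and generation count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(s):
    # Hypothetical stand-in for the SVM cross-validation error at parameter s.
    return (s - 0.5) ** 2

# Random initial swarm location.
x_axis, y_axis = rng.uniform(0, 5, 2)
best_s, best_fit = None, np.inf

for _ in range(200):                  # generations
    # Each fly searches in a random direction and distance from the swarm.
    fx = x_axis + rng.uniform(-1, 1, 30)
    fy = y_axis + rng.uniform(-1, 1, 30)
    dist = np.sqrt(fx ** 2 + fy ** 2)
    s = 1.0 / dist                    # smell concentration judgment value
    fit = fitness(s)
    i = np.argmin(fit)
    if fit[i] < best_fit:             # keep the best smell found so far
        best_fit, best_s = fit[i], s[i]
        x_axis, y_axis = fx[i], fy[i]  # swarm flies to the best location
print(best_s)
```

    Replacing `fitness` with a cross-validated SVM accuracy (and mapping `s` to the SVM's C and kernel parameters) gives the FOA-SVM tuning scheme the abstract describes.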

    An Improved Way to Make Large-Scale SVR Learning Practical

    We first put forward a new algorithm for reduced support vector regression (RSVR) and adopt a new approach to give it a mathematical form similar to that of support vector classification. We then describe a fast training algorithm for the simplified support vector regression: sequential minimal optimization (SMO), which has previously been used to train SVMs. Experiments show that the new method converges considerably faster than other methods, which require a substantial amount of the data to be present in memory.
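    The RSVR formulation is not spelled out in the abstract; one common "reduced" construction, restricting the kernel expansion to a random subset of basis points and solving the resulting regularized least-squares problem, can be sketched as follows (the subset size, RBF width, and regularization are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(A, B, gamma=1.0):
    """Gaussian kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Noisy 1-D regression target.
X = np.linspace(0, 2 * np.pi, 200)[:, None]
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 200)

# Reduced expansion: only m random basis points instead of all 200.
m = 15
basis = X[rng.choice(len(X), size=m, replace=False)]
K = rbf(X, basis)

# Regularized least squares in the reduced feature space.
lam = 1e-3
w = np.linalg.solve(K.T @ K + lam * np.eye(m), K.T @ y)

pred = K @ w
print(np.mean((pred - np.sin(X[:, 0])) ** 2))  # small mean squared error
```

    The reduced problem is m-by-m rather than n-by-n, which is what makes large-scale training practical.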

    An accelerated MDM algorithm for SVM training

    This is an electronic version of the paper presented at the 16th European Symposium on Artificial Neural Networks, held in Bruges in 2018. In this work we propose an acceleration procedure for the Mitchell–Demyanov–Malozemov (MDM) algorithm (a fast geometric algorithm for SVM construction) that may yield quite large training savings. While decomposition algorithms such as SVMLight or SMO are usually the SVM methods of choice, we show that there is a relationship between SMO and MDM which suggests that, at least in their simplest implementations, they should have similar training speeds. Thus, although we do not discuss it here, the proposed MDM acceleration might be used as a starting point for new ways of accelerating SMO. With partial support of Spain's TIN 2004–07676 and TIN 2007–66862 projects. The first author is kindly supported by FPU-MEC grant reference AP2006-02285.
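    MDM in its original form solves the minimum-norm-point problem over a convex hull, which is how it applies to SVM construction; a minimal sketch of that core iteration (without the paper's acceleration), on a toy hull:

```python
import numpy as np

def mdm_min_norm_point(X, n_iter=2000):
    """Mitchell-Demyanov-Malozemov iteration: find the point of the
    convex hull of the rows of X that is closest to the origin."""
    n = len(X)
    alpha = np.full(n, 1.0 / n)           # start at the centroid
    w = alpha @ X
    for _ in range(n_iter):
        d = X @ w                         # projections x_i . w
        u = np.argmin(d)                  # best vertex to move weight onto
        active = np.flatnonzero(alpha > 1e-12)
        l = active[np.argmax(d[active])]  # worst vertex carrying weight
        z = X[u] - X[l]
        denom = z @ z
        if denom < 1e-15:                 # no direction left to improve
            break
        t = np.clip(-(w @ z) / denom, 0.0, alpha[l])
        alpha[u] += t                     # transfer weight from l to u
        alpha[l] -= t
        w = w + t * z
    return w, alpha

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w, _ = mdm_min_norm_point(X)
print(w)  # approaches (0.5, 0.5), the hull point nearest the origin
```

    Each step transfers weight between exactly two vertices, which is the structural similarity to SMO's two-coordinate updates that the paper exploits.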