
    Piecewise-Smooth Support Vector Machine for Classification

    Support vector machine (SVM) has been applied very successfully in a variety of classification systems. We attempt to solve the primal programming problems of SVM by converting them into smooth unconstrained minimization problems. In this paper, a new twice continuously differentiable piecewise-smooth function is proposed to approximate the plus function, yielding a piecewise-smooth support vector machine (PWSSVM). The novel method can efficiently handle large-scale, high-dimensional problems. The theoretical analysis demonstrates its advantages in efficiency and precision over other smooth functions. PWSSVM is solved using the fast Newton-Armijo algorithm. Experimental results are given to show the training speed and classification performance of our approach.
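    The plus function (x)+ = max(x, 0) appears in the primal hinge loss and is not differentiable at zero, which is what blocks Newton-type solvers; smoothing it restores twice-differentiability. The paper's specific piecewise-smooth function is not reproduced here; as an illustrative stand-in, the sketch below uses the classic integral-of-sigmoid smoothing from the smooth-SVM literature:

    ```python
    import math

    def plus(x):
        # The plus function (x)+ = max(x, 0): non-differentiable at x = 0.
        return max(x, 0.0)

    def smooth_plus(x, k=10.0):
        # A classic smooth approximation (NOT the paper's piecewise-smooth
        # function): p(x, k) = x + (1/k) * log(1 + exp(-k*x)).
        # It is infinitely differentiable and converges to (x)+ as k grows.
        if x * k > 700:              # avoid overflow in exp for large arguments
            return x
        return x + math.log(1.0 + math.exp(-k * x)) / k
    ```

    Larger k tightens the approximation at the cost of a steeper curvature near zero, which is the usual accuracy/conditioning trade-off for smoothed hinge losses.
    
    
    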

    Towards many-class classification of materials based on their spectral fingerprints

    Hyperspectral sensors are becoming cheaper and more available to the public. It is reasonable to assume that in the near future they will become increasingly ubiquitous. This gives rise to many interesting applications, for example the identification of pharmaceutical products and the classification of foodstuffs. Such applications require precise models of the underlying classes, but hand-crafting these models is not feasible. In this paper, we propose to instead learn the model from the data using machine learning techniques. We investigate the use of two popular methods: support vector machines and random forest classifiers. In contrast to similar approaches, we restrict ourselves to linear support vector machines. Furthermore, we train the classifiers by solving the primal instead of the dual optimization problem. Our experiments on a large dataset show that the support vector machine approach is superior to random forest in classification accuracy as well as training time.
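    "Training in the primal" means minimising the regularised hinge loss over (w, b) directly, rather than solving the dual QP over multipliers. The abstract does not specify the solver; the sketch below assumes plain full-batch subgradient descent as a minimal illustration:

    ```python
    import numpy as np

    def train_primal_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
        """Minimise the primal regularised hinge loss
            lam/2 * ||w||^2 + mean_i max(0, 1 - y_i (w.x_i + b))
        by full-batch subgradient descent.  An illustrative sketch of primal
        training, not the authors' actual solver.  y must be in {-1, +1}."""
        n, d = X.shape
        w, b = np.zeros(d), 0.0
        for _ in range(epochs):
            margins = y * (X @ w + b)
            active = margins < 1           # points violating the margin
            grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
            grad_b = -y[active].sum() / n
            w -= lr * grad_w
            b -= lr * grad_b
        return w, b
    ```

    For linear SVMs on large, high-dimensional data this avoids forming the n-by-n kernel matrix the dual would need, which is one common reason to prefer the primal.
    
    
    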

    Pengaruh Algoritma Sequential Minimal Optimization pada Support Vector Machine untuk Klasifikasi Data (Influence of Sequential Minimal Optimization Algorithm On Support Vector Machine for Data Classification)

    ABSTRACT: Support vector machine is a supervised learning method typically used for data classification, most commonly on datasets with two classes. To separate the two classes, a separating hyperplane is used. A problem arises because the primal form of the optimization for finding the best hyperplane is very difficult to solve, so the dual form is used instead, rewriting the weight vector w in terms of the multipliers α. This problem is usually referred to as quadratic programming. Sequential Minimal Optimization (SMO) is an algorithm that solves the quadratic programming problem (QP problem) by finding the values of α with an analytical quadratic programming solver at each step, so that the required training time is shorter. This final project shows that SMO achieves faster training than a generic quadratic programming algorithm, but in terms of accuracy there are many parameters that cause the accuracy to fluctuate from test to test.
    Keywords: support vector machine, hyperplane, quadratic programming, sequential minimal optimization, primal form, dual form
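    SMO's analytic solver optimises exactly two multipliers α_i, α_j at a time in closed form, which is why each step is cheap. The sketch below is the well-known *simplified* variant (the second multiplier is chosen at random rather than by Platt's full working-set heuristic), for a linear kernel:

    ```python
    import numpy as np

    def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=10, seed=0):
        """Simplified SMO for the linear-kernel SVM dual: a teaching sketch,
        not Platt's full algorithm.  y must be in {-1, +1}."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        K = X @ X.T                          # linear kernel matrix
        alpha, b, passes = np.zeros(n), 0.0, 0
        while passes < max_passes:
            changed = 0
            for i in range(n):
                Ei = (alpha * y) @ K[:, i] + b - y[i]
                if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                    j = rng.integers(n - 1)
                    j += (j >= i)            # pick j != i uniformly
                    Ej = (alpha * y) @ K[:, j] + b - y[j]
                    ai_old, aj_old = alpha[i], alpha[j]
                    if y[i] != y[j]:         # box constraints on the (i, j) pair
                        L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                    else:
                        L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                    if L == H:
                        continue
                    eta = 2 * K[i, j] - K[i, i] - K[j, j]
                    if eta >= 0:
                        continue
                    # closed-form (analytic) update of alpha_j, clipped to [L, H]
                    alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                    if abs(alpha[j] - aj_old) < 1e-5:
                        continue
                    alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                    # keep the threshold b consistent with the KKT conditions
                    b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] \
                               - y[j] * (alpha[j] - aj_old) * K[i, j]
                    b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] \
                               - y[j] * (alpha[j] - aj_old) * K[j, j]
                    if 0 < alpha[i] < C:
                        b = b1
                    elif 0 < alpha[j] < C:
                        b = b2
                    else:
                        b = (b1 + b2) / 2
                    changed += 1
            passes = passes + 1 if changed == 0 else 0
        w = (alpha * y) @ X                  # recover primal w (linear kernel)
        return w, b
    ```

    Because each two-variable subproblem is solved analytically, no generic QP library is invoked, which is the source of the training-time advantage the abstract reports.
    
    
    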

    Conservative Signal Processing Architectures For Asynchronous, Distributed Optimization Part II: Example Systems

    This paper provides examples of various synchronous and asynchronous signal processing systems for performing optimization, utilizing the framework and elements developed in a preceding paper. The general strategy in that paper was to perform a linear transformation of stationarity conditions applicable to a class of convex and nonconvex optimization problems, resulting in algorithms that operate on a linear superposition of the associated primal and dual decision variables. The examples in this paper address various specific optimization problems including the LASSO problem, minimax-optimal filter design, the decentralized training of a support vector machine classifier, and sparse filter design for acoustic equalization. Where appropriate, multiple algorithms for solving the same optimization problem are presented, illustrating the use of the underlying framework in designing a variety of distinct classes of algorithms. The examples are accompanied by numerical simulation and a discussion of convergence.
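    As a rough illustration of decentralized SVM training (one of the example problems above), the sketch below uses plain consensus subgradient descent: each node takes a step on the hinge loss of its local data, then all nodes average their weight vectors. This is a generic stand-in, not the paper's primal-dual superposition algorithm, and the function name and gossip scheme are assumptions for the example:

    ```python
    import numpy as np

    def decentralized_svm(parts, lam=0.01, lr=0.05, rounds=300):
        """Decentralized linear SVM by consensus subgradient descent.
        parts: list of (X_k, y_k) local datasets, y in {-1, +1}.
        A generic illustration of distributed primal optimisation; the
        averaging step models a fully-connected gossip network."""
        d = parts[0][0].shape[1]
        W = [np.zeros(d + 1) for _ in parts]       # per-node [w, b]
        for _ in range(rounds):
            for k, (X, y) in enumerate(parts):
                Xa = np.hstack([X, np.ones((len(y), 1))])   # absorb bias term
                w = W[k]
                active = y * (Xa @ w) < 1                   # margin violators
                grad = lam * np.r_[w[:-1], 0.0] \
                       - (y[active, None] * Xa[active]).sum(0) / len(y)
                W[k] = w - lr * grad
            mean_w = np.mean(W, axis=0)            # consensus (averaging) step
            W = [mean_w.copy() for _ in parts]
        return mean_w[:-1], mean_w[-1]
    ```

    In a real distributed setting the averaging would be replaced by neighbour-to-neighbour exchanges, possibly asynchronous, which is closer to the regime the paper studies.
    
    
    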

    Convolutional Support Vector Machines For Image Classification

    The Convolutional Neural Network (CNN) is a machine learning model which excels in tasks that exhibit spatially local correlation of features, for example, image classification. However, as a model, it is susceptible to the issues caused by local minima, largely due to the fully-connected neural network which is typically used in the final layers for classification. This work investigates the effect of replacing the fully-connected neural network with a Support Vector Machine (SVM). It names the resulting model the Convolutional Support Vector Machine (CSVM) and proposes two methods for training. The first method uses a linear SVM and is described in the primal. The second method can be used to learn an SVM with a non-linear kernel by casting the optimisation as a Multiple Kernel Learning problem. Both methods learn the convolutional filter weights in conjunction with the SVM parameters. The linear CSVM (L-CSVM) and kernelised CSVM (K-CSVM) in this work each use a single convolutional filter; however, approaches are described which may be used to extend the K-CSVM with multiple filters per layer and with multiple convolutional layers. The L-CSVM and K-CSVM show promising results on the MNIST and CIFAR-10 benchmark datasets.
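    The core idea — learning a convolutional filter jointly with the SVM parameters — can be sketched on a toy 1-D problem: a single filter f produces a valid-mode feature map that feeds a linear SVM (w, b), and all three are updated by gradient descent on the hinge loss. This is an illustrative sketch of the joint-training idea, not the authors' L-CSVM solver, and the function names are assumptions:

    ```python
    import numpy as np

    def train_csvm_1d(X, y, flen=3, lr=0.05, lam=0.01, epochs=300, seed=0):
        """Toy 1-D 'convolutional SVM': one filter f, valid convolution,
        linear SVM (w, b) on the feature map; f and (w, b) learned jointly.
        X: (n, T) signals, y in {-1, +1}."""
        rng = np.random.default_rng(seed)
        n, T = X.shape
        m = T - flen + 1                   # length of the valid feature map
        f = rng.normal(0, 0.1, flen)
        w, b = np.zeros(m), 0.0
        # window index matrix so that conv(x, f) == X[:, idx] @ f
        idx = np.arange(m)[:, None] + np.arange(flen)[None, :]
        for _ in range(epochs):
            C = X[:, idx] @ f              # (n, m) feature maps
            s = C @ w + b                  # SVM scores
            ds = np.where(y * s < 1, -y, 0.0) / n   # hinge subgradient wrt s
            dw = C.T @ ds + lam * w
            dC = np.outer(ds, w)           # backprop through the linear layer
            df = np.einsum('nm,nmf->f', dC, X[:, idx])  # ...and the convolution
            db = ds.sum()
            w -= lr * dw; b -= lr * db; f -= lr * df
        return f, w, b, idx

    def csvm_predict(X, f, w, b, idx):
        return np.sign(X[:, idx] @ f @ w + b)
    ```

    Extending this to 2-D images, multiple filters, or the kernelised K-CSVM changes the forward pass and the optimisation (Multiple Kernel Learning in the paper's second method) but not the joint-training structure shown here.
    
    
    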