
    Regression depth and support vector machine

    The regression depth method (RDM) proposed by Rousseeuw and Hubert [RH99] plays an important role in the area of robust regression for a continuous response variable. Christmann and Rousseeuw [CR01] showed that RDM is also useful for the case of binary regression. Vapnik's convex risk minimization principle [Vap98] has a dominating role in statistical machine learning theory. Important special cases are the support vector machine (SVM), ε-support vector regression and kernel logistic regression. In this paper, connections between these methods from different disciplines are investigated for the case of pattern recognition. Some results concerning the robustness of the SVM and other kernel-based methods are given.
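As a rough, hedged illustration of the three special cases named above (not code from the paper), the snippet below fits an SVM, an ε-SVR and an approximate kernel logistic regression on a synthetic binary problem; the dataset and all parameters are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC, SVR
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic binary classification data (placeholder for a real pattern
# recognition task).
X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Support vector machine (hinge loss, RBF kernel).
svm = SVC(kernel="rbf", C=1.0).fit(X, y)

# epsilon-support vector regression (epsilon-insensitive loss); here it is
# fitted on the 0/1 labels purely for illustration.
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X, y)

# Kernel logistic regression, approximated by an explicit kernel feature map
# (Nystroem) followed by ordinary logistic regression.
klr = make_pipeline(Nystroem(kernel="rbf", n_components=100, random_state=0),
                    LogisticRegression(max_iter=1000)).fit(X, y)

print("SVM accuracy:", svm.score(X, y))
print("KLR accuracy:", klr.score(X, y))
print("SVR outputs (first 5):", svr.predict(X[:5]))
```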

    Supplier selection with support vector regression and twin support vector regression

    The supplier selection problem has attracted considerable research interest in recent years. Recent literature shows that artificial intelligence techniques achieve better performance than traditional statistical methods. Recently, the support vector machine has received much more attention from researchers, yet studies on supplier selection based on it are few. In this paper, we apply support vector regression (SVR) and twin support vector regression (TSVR) to predict the supplier credit index. In practice, samples of supplier data are very scarce, and SVR and TSVR are well suited to learning from small samples. The prediction accuracies of the SVR and TSVR methods are compared in order to choose appropriate suppliers. Real-world examples illustrate that TSVR is superior to SVR.
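TSVR has no widely used off-the-shelf implementation, so the following is only a sketch of the standard ε-SVR half of the comparison on a small synthetic "supplier" sample; the features, the credit-index target and the leave-one-out evaluation are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical small supplier sample: four attribute scores per supplier
# (e.g. quality, delivery, price, service) and a credit-index target.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
y = X @ np.array([0.5, 0.3, -0.2, 0.4]) + rng.normal(scale=0.1, size=20)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))

# Leave-one-out cross-validation is one common choice for very small samples.
scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
print("Leave-one-out MAE:", -scores.mean())
```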

    The Use of Support Vector Regression (SVR) for Predicting Sharia Stock Returns on the Indonesia Stock Exchange (BEI)

    In this article, the support vector regression (SVR) algorithm is used to obtain a prediction model for sharia stock returns on the Indonesia Stock Exchange. The sample consists of listed stocks with high liquidity during the 2012 period. In this study, the model is built from an equation relating PBV and ROE. The dependent variable of the model is the annual average proportion of the stock price over two consecutive years. The stock price data are obtained as the product of price-to-book value (PBV) and book value (BV). The independent variables consist of book value (BV), return on equity (ROE), and the proportion of dividends paid out to public investors (POR). The performance of the SVR-based prediction model is then compared with a multiple linear regression model based on ordinary least squares (RLB-OLS), using the mean squared error and the squared correlation as measures of model fit. The comparison of the two models shows that the prediction model obtained with SVR is better.
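A minimal sketch of the comparison described above, with synthetic stand-ins for the BV, ROE and POR predictors and for the return target (the actual data and tuning are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-ins: columns play the role of BV, ROE and POR.
rng = np.random.default_rng(42)
X = rng.normal(size=(150, 3))
y = 0.6 * X[:, 1] + 0.2 * X[:, 0] * X[:, 2] + rng.normal(scale=0.1, size=150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svr = make_pipeline(StandardScaler(),
                    SVR(kernel="rbf", C=10.0, epsilon=0.05)).fit(X_tr, y_tr)
ols = LinearRegression().fit(X_tr, y_tr)

# Compare by mean squared error and squared correlation (R^2), as in the abstract.
for name, m in [("SVR", svr), ("OLS", ols)]:
    pred = m.predict(X_te)
    print(name, "MSE:", mean_squared_error(y_te, pred),
          "R^2:", r2_score(y_te, pred))
```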

    e-Distance Weighted Support Vector Regression

    We propose a novel support vector regression approach called e-Distance Weighted Support Vector Regression (e-DWSVR). e-DWSVR specifically addresses two challenging issues in support vector regression: first, the handling of noisy data; second, how to deal with the situation in which the distribution of boundary data differs from that of the overall data. The proposed e-DWSVR optimizes the minimum margin and the mean of the functional margin simultaneously to tackle these two issues. In addition, we use both dual coordinate descent (CD) and averaged stochastic gradient descent (ASGD) strategies to make e-DWSVR scalable to large-scale problems. We report promising results obtained by e-DWSVR in comparison with existing methods on several benchmark datasets.
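The e-DWSVR objective itself is not reproduced here; as a hedged illustration of the ASGD strategy mentioned above, a linear baseline that minimises an ε-insensitive loss with averaged stochastic gradient descent could look as follows (data and hyperparameters are placeholders):

```python
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.datasets import make_regression

# Large synthetic regression problem, standing in for a large-scale dataset.
X, y = make_regression(n_samples=100_000, n_features=50, noise=5.0, random_state=0)

# Averaged SGD on the epsilon-insensitive loss: a linear SVR-style baseline,
# NOT the e-DWSVR objective from the paper.
asgd_svr = make_pipeline(
    StandardScaler(),
    SGDRegressor(loss="epsilon_insensitive", epsilon=0.1, penalty="l2",
                 alpha=1e-4, average=True, max_iter=20, random_state=0),
)
asgd_svr.fit(X, y)
print("R^2 on training data:", asgd_svr.score(X, y))
```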

    On a strategy to develop robust and simple tariffs from motor vehicle insurance data

    The goals of this paper are twofold: we describe common features in data sets from motor vehicle insurance companies and we investigate a general strategy which exploits the knowledge of such features. The results of the strategy are a basis to develop insurance tariffs. The strategy is applied to a data set from motor vehicle insurance companies. We use a nonparametric approach based on a combination of kernel logistic regression and support vector regression. Keywords: Classification, Data Mining, Insurance tariffs, Kernel logistic regression, Machine learning, Regression, Robustness, Simplicity, Support Vector Machine, Support Vector Regression
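One hedged way to sketch the two ingredients named above (kernel logistic regression for the probability of a claim, support vector regression for the claim size) is shown below; the data, the features and the way the two models are combined into an expected cost are assumptions, not the paper's tariff strategy.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical policy data: four rating features per policy, a claim
# indicator and a positive claim size for policies with a claim.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
had_claim = rng.random(2000) < 0.1
claim_size = np.exp(rng.normal(6.0, 0.5, size=2000))

# Kernel logistic regression, approximated by an RBF feature map followed by
# logistic regression, for the probability of a claim.
p_claim = make_pipeline(StandardScaler(),
                        Nystroem(kernel="rbf", n_components=200, random_state=1),
                        LogisticRegression(max_iter=1000)).fit(X, had_claim)

# SVR for the claim size, trained on policies with a claim only.
size_model = make_pipeline(StandardScaler(),
                           SVR(kernel="rbf", C=10.0)).fit(X[had_claim],
                                                          claim_size[had_claim])

# A simple pure-premium style estimate: P(claim) * predicted claim size.
expected_cost = p_claim.predict_proba(X)[:, 1] * size_model.predict(X)
print(expected_cost[:5])
```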

    Co-regularised support vector regression

    We consider a semi-supervised learning scenario for regression, where only a few labelled examples, many unlabelled instances and different data representations (multiple views) are available. For this setting, we extend support vector regression with a co-regularisation term and obtain co-regularised support vector regression (CoSVR). In addition to labelled data, co-regularisation includes information from unlabelled examples by ensuring that models trained on different views make similar predictions. Ligand affinity prediction is an important real-world problem that fits into this scenario. The characterisation of the strength of protein-ligand bonds is a crucial step in the process of drug discovery and design. We introduce variants of the base CoSVR algorithm and discuss their theoretical and computational properties. For the CoSVR function class we provide a theoretical bound on the Rademacher complexity. Finally, we demonstrate the usefulness of CoSVR for the affinity prediction task and evaluate its performance empirically on different protein-ligand datasets. We show that CoSVR outperforms co-regularised least squares regression as well as existing state-of-the-art approaches for affinity prediction.
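The CoSVR optimisation itself is not reproduced here; the sketch below is only a rough, co-training-style approximation of the co-regularisation idea, in which two view-specific SVR models are pushed towards agreement on unlabelled data via averaged pseudo-labels (the views, data and number of iterations are arbitrary).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.datasets import make_regression

# Synthetic data split into two hypothetical views; only a few examples
# are treated as labelled.
X, y = make_regression(n_samples=400, n_features=10, noise=2.0, random_state=0)
view1, view2 = X[:, :5], X[:, 5:]
labelled = np.arange(40)
unlabelled = np.arange(40, 400)

m1 = SVR(kernel="rbf", C=10.0).fit(view1[labelled], y[labelled])
m2 = SVR(kernel="rbf", C=10.0).fit(view2[labelled], y[labelled])

for _ in range(5):
    # Consensus pseudo-labels on the unlabelled pool encourage the two
    # view-specific models to make similar predictions.
    pseudo = 0.5 * (m1.predict(view1[unlabelled]) + m2.predict(view2[unlabelled]))
    X1 = np.vstack([view1[labelled], view1[unlabelled]])
    X2 = np.vstack([view2[labelled], view2[unlabelled]])
    t = np.concatenate([y[labelled], pseudo])
    m1 = SVR(kernel="rbf", C=10.0).fit(X1, t)
    m2 = SVR(kernel="rbf", C=10.0).fit(X2, t)

final = 0.5 * (m1.predict(view1) + m2.predict(view2))
print("MSE on all data:", np.mean((final - y) ** 2))
```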