    Application of the Support Vector Machine Method in a Real-time Intrusion Detection System

    An intrusion detection system detects attacks or intrusions in a network or computer system, generally by comparing network traffic patterns with known attack patterns or by finding abnormal patterns in network traffic. The rise in internet activity has increased the number of packets that must be analysed to build the attack and normal patterns, which raises the possibility that the system cannot detect intrusions that use new techniques, so a system is needed that can build a pattern or model automatically. This research aims to build an intrusion detection system that can create a model automatically and detect intrusions in a real-time environment, using the support vector machine method, a data mining technique, to classify network traffic audit data into three classes: normal, probe, and DoS. The audit data were produced by preprocessing network packet capture files obtained from Tshark. Based on the test results, the system can help a system administrator build a model or pattern automatically with high accuracy, a high attack detection rate, and a low false positive rate. The system can also run in a real-time environment.
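
    A minimal sketch of the classification step described above, assuming the preprocessed audit records are already available as numeric feature vectors with normal/probe/DoS labels. The file names, feature layout, and SVM settings are illustrative placeholders, and scikit-learn's SVC is used rather than the authors' own implementation.

    ```python
    # Sketch: multi-class SVM over preprocessed traffic audit records.
    # File names and parameters are hypothetical, not taken from the paper.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.metrics import classification_report

    # X: one row per connection record; y: "normal", "probe", or "dos".
    X = np.loadtxt("audit_features.csv", delimiter=",")            # hypothetical file
    y = np.loadtxt("audit_labels.csv", delimiter=",", dtype=str)   # hypothetical file

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    # RBF-kernel SVM; scaling matters because traffic features have very different ranges.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    model.fit(X_train, y_train)

    # Per-class detection rate and false-positive behaviour can be read off here.
    print(classification_report(y_test, model.predict(X_test)))
    ```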

    Support vector machine for functional data classification

    In many applications, input data are sampled functions taking their values in infinite-dimensional spaces rather than standard vectors. This fact has complex consequences for data analysis algorithms and motivates modifications of them. In fact, most of the traditional data analysis tools for regression, classification and clustering have been adapted to functional inputs under the general name of Functional Data Analysis (FDA). In this paper, we investigate the use of Support Vector Machines (SVMs) for functional data analysis and we focus on the problem of curve discrimination. SVMs are large-margin classifiers based on implicit non-linear mappings of the considered data into high-dimensional spaces thanks to kernels. We show how to define simple kernels that take into account the functional nature of the data and lead to consistent classification. Experiments conducted on real-world data emphasize the benefit of taking into account some functional aspects of the problems.
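
    One way to make the functional nature of the curves explicit, in the spirit of the kernels discussed above, is to apply a standard RBF kernel to numerically estimated derivatives of the sampled curves. The sketch below assumes curves sampled on a common grid and is an illustrative construction, not the authors' exact kernel.

    ```python
    # Sketch of a simple "functional" kernel: compare curves through their
    # finite-difference derivatives before applying a standard RBF kernel.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    def derivative_rbf_kernel(A, B, gamma=0.5):
        """A, B: arrays of shape (n_curves, n_points), curves sampled on one grid."""
        dA = np.gradient(A, axis=1)          # derivative estimate for each curve
        dB = np.gradient(B, axis=1)
        return rbf_kernel(dA, dB, gamma=gamma)

    # Toy curves on a common grid: two classes with different frequencies (illustrative data).
    t = np.linspace(0.0, 1.0, 100)
    shifts = np.linspace(0.0, 0.4, 40)
    curves = np.vstack([np.sin(2 * np.pi * t + s) for s in shifts]
                       + [np.sin(4 * np.pi * t + s) for s in shifts])
    labels = np.array([0] * 40 + [1] * 40)

    clf = SVC(kernel=derivative_rbf_kernel)   # SVC accepts a callable Gram-matrix kernel
    clf.fit(curves, labels)
    print(clf.score(curves, labels))
    ```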

    Support Vector Machine for Land Cover Classification Using Radarsat 2 Imagery with Dual HH-HV Polarisation

    Radarsat 2 is satellite imagery with new and better specifications than the older generation, including high spatial resolution. Radarsat 2 can be used for land cover classification with the Support Vector Machine algorithm, a machine learning algorithm that offers an alternative where maximum likelihood cannot be used for dual HH-HV polarisation, a case that SVM can handle. The Support Vector Machine algorithm can provide automatic land cover information, from which application methods for land cover monitoring can be developed for tropical countries such as Indonesia. Our method was to collect Regions of Interest and to run a validation test through a field survey. This research used an Android smartphone GPS with 120 sample sites and a calculated error of 6.5%. We then tested the accuracy with a confusion matrix. The land cover classification results show an overall accuracy of 66.67% and a kappa coefficient of 0.55821446, which indicates that the results do not agree with…
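
    A brief sketch of the accuracy-assessment step described above: a confusion matrix, overall accuracy and kappa coefficient computed from predicted and field-surveyed land cover labels. The class names and label arrays are placeholders, and scikit-learn's metrics are assumed rather than the authors' own tooling.

    ```python
    # Sketch: confusion matrix, overall accuracy and kappa from predicted vs.
    # field-surveyed land cover classes. All labels below are placeholders.
    import numpy as np
    from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

    classes = ["forest", "cropland", "built-up", "water"]        # hypothetical classes
    rng = np.random.default_rng(0)
    y_field = rng.choice(classes, size=120)                      # field-survey labels (placeholder)
    noise   = rng.choice(classes, size=120)
    y_pred  = np.where(rng.random(120) < 0.7, y_field, noise)    # classifier output (placeholder)

    print(confusion_matrix(y_field, y_pred, labels=classes))
    print("overall accuracy:", accuracy_score(y_field, y_pred))        # the paper reports 66.67%
    print("kappa coefficient:", cohen_kappa_score(y_field, y_pred))    # the paper reports 0.5582
    ```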

    Regression depth and support vector machine

    The regression depth method (RDM) proposed by Rousseeuw and Hubert [RH99] plays an important role in the area of robust regression for a continuous response variable. Christmann and Rousseeuw [CR01] showed that RDM is also useful for the case of binary regression. Vapnik's convex risk minimization principle [Vap98] has a dominating role in statistical machine learning theory. Important special cases are the support vector machine (SVM), ε-support vector regression and kernel logistic regression. In this paper, connections between these methods from different disciplines are investigated for the case of pattern recognition. Some results concerning the robustness of the SVM and other kernel based methods are given.
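
    The common thread in the abstract is convex risk minimization, in which the SVM and kernel logistic regression differ only in the convex loss applied to the margin y·f(x). The sketch below simply evaluates the two losses side by side for illustration; it is not taken from the paper.

    ```python
    # Sketch of the shared convex-risk-minimisation view: the SVM uses the hinge
    # loss and kernel logistic regression uses the logistic loss on the margin.
    import numpy as np

    def hinge_loss(margin):              # SVM loss: max(0, 1 - y*f(x))
        return np.maximum(0.0, 1.0 - margin)

    def logistic_loss(margin):           # kernel logistic regression loss: log(1 + e^(-y*f(x)))
        return np.log1p(np.exp(-margin))

    margins = np.linspace(-2.0, 3.0, 6)
    print(np.column_stack([margins, hinge_loss(margins), logistic_loss(margins)]))
    ```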

    MRI brain classification using support vector machine

    The field of medical imaging gains importance with the increasing need for automated and efficient diagnosis within a short period of time. In addition, a medical image retrieval system provides radiologists with a tool to retrieve images whose content is similar to a query image. Magnetic resonance imaging (MRI) is an imaging technique that has played an important role in neuroscience research for studying brain images. Classification is an important part of a retrieval system in order to distinguish between normal patients and those who may have abnormalities or a tumor. In this paper, we obtain features of MRI images using the discrete wavelet transformation. An advanced kernel-based technique, the Support Vector Machine (SVM), is deployed to classify volumes of MRI data as normal or abnormal.
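
    A minimal sketch of the pipeline outlined above: single-level 2-D discrete wavelet transform features from MR slices, followed by an RBF-kernel SVM separating normal from abnormal scans. PyWavelets and scikit-learn are assumed, and the random images and labels are placeholders for a real MRI data set.

    ```python
    # Sketch: 2-D DWT features from MR slices, then an SVM for normal vs. abnormal.
    import numpy as np
    import pywt                      # PyWavelets, assumed available
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def dwt_features(slice_2d, wavelet="haar"):
        """Approximation coefficients of a single-level 2-D DWT, flattened."""
        cA, (cH, cV, cD) = pywt.dwt2(slice_2d, wavelet)
        return cA.ravel()

    rng = np.random.default_rng(0)
    slices = rng.random((40, 64, 64))          # 40 fake 64x64 MR slices (placeholder)
    labels = rng.integers(0, 2, size=40)       # 0 = normal, 1 = abnormal (placeholder)

    X = np.array([dwt_features(s) for s in slices])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    print(clf.score(X, labels))
    ```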

    Model Selection for Support Vector Machine Classification

    We address the problem of model selection for Support Vector Machine (SVM) classification. For a fixed functional form of the kernel, model selection amounts to tuning kernel parameters and the slack penalty coefficient C. We begin by reviewing a recently developed probabilistic framework for SVM classification. An extension to the case of SVMs with quadratic slack penalties is given and a simple approximation for the evidence is derived, which can be used as a criterion for model selection. We also derive the exact gradients of the evidence in terms of posterior averages and describe how they can be estimated numerically using Hybrid Monte Carlo techniques. Though computationally demanding, the resulting gradient ascent algorithm is a useful baseline tool for probabilistic SVM model selection, since it can locate maxima of the exact (unapproximated) evidence. We then perform extensive experiments on several benchmark data sets. The aim of these experiments is to compare the performance of probabilistic model selection criteria with alternatives based on estimates of the test error, namely the so-called "span estimate" and Wahba's Generalized Approximate Cross-Validation (GACV) error. We find that all the "simple" model criteria (Laplace evidence approximations, and the Span and GACV error estimates) exhibit multiple local optima with respect to the hyperparameters. While some of these give performance that is competitive with results from other approaches in the literature, a significant fraction lead to rather higher test errors. The results for the evidence gradient ascent method show that the exact evidence also exhibits local optima, but these give test errors which are much less variable and also consistently lower than for the simpler model selection criteria.
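
    For contrast with the evidence-based criteria studied in the paper, the sketch below tunes the slack penalty C and the RBF kernel width gamma by plain cross-validated grid search, the kind of test-error-based procedure the simpler alternatives approximate. It is not the authors' evidence gradient-ascent method, and the synthetic data set is illustrative.

    ```python
    # Sketch: SVM model selection by cross-validated grid search over C and gamma.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    param_grid = {"C": np.logspace(-2, 3, 6), "gamma": np.logspace(-3, 1, 5)}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)

    # Inspecting the full grid of scores (search.cv_results_) makes the multiple
    # local optima discussed above visible as several near-tied parameter settings.
    print(search.best_params_, search.best_score_)
    ```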