Iris recognition system by using support vector machines
In recent years, with the increasing demand for security in our networked society, biometric systems for user verification have become more popular. Iris recognition is a relatively new technology for user verification. In this paper, the CASIA iris database is used for individual user verification with support vector machines (SVMs), based on analysis of the iris code as the extracted feature. This feature is used to recognize authentic users and to reject impostors, with SVMs performing the classification. The proposed method is evaluated by its False Rejection Rate (FRR) and False Acceptance Rate (FAR), and the experimental results show that the technique performs well
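The two evaluation metrics mentioned above have standard definitions, which can be sketched as follows; the threshold and match scores below are hypothetical illustrations, not values from the paper:

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """Compute False Acceptance Rate (FAR) and False Rejection Rate (FRR).

    A score >= threshold means the verifier accepts the claimed identity.
    """
    # FAR: fraction of impostor attempts that are wrongly accepted
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # FRR: fraction of genuine attempts that are wrongly rejected
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Hypothetical match scores for illustration
genuine = [0.9, 0.8, 0.85, 0.4]
impostor = [0.2, 0.3, 0.6, 0.1]
print(far_frr(genuine, impostor, threshold=0.5))  # → (0.25, 0.25)
```

Raising the threshold trades FAR for FRR, which is why both rates are reported together.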
Predicting Events Surrounding the Egyptian Revolution of 2011 Using Learning Algorithms on Micro Blog Data
We aim to predict activities of a political nature in Egypt that influence or reflect societal-scale behavior and beliefs by applying learning algorithms to Twitter data. We focus on capturing domestic events in Egypt from November 2009 to November 2013. To this end, we study the underlying communication patterns by evaluating content-based and meta-data information in classification tasks, without targeting specific keywords or users. Classification is done using Support Vector Machines (SVMs) and Support Distribution Machines (SDMs). Latent Dirichlet Allocation (LDA) is used to create content-based input patterns for the classifiers, while bags of Twitter meta-information are used with the SDM to classify meta-data features. The experiments reveal that user-centric approaches based on meta-data can outperform methods employing content-based input, despite the use of well-established natural language processing algorithms. The results show that distributions over user-centric meta-information provide an important signal when detecting and predicting events
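The content-based path described above (LDA topic proportions fed to an SVM) can be sketched with scikit-learn; the paper's exact tooling is not specified here, and the toy tweets and labels are hypothetical:

```python
# Sketch: bag-of-words -> LDA topic proportions -> SVM classifier,
# assuming scikit-learn; the tweets and labels are made-up toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

tweets = [
    "protest in tahrir square today",
    "great football match tonight",
    "march on the square downtown",
    "watching the game with friends",
]
labels = [1, 0, 1, 0]  # 1 = event-related, 0 = not (toy labels)

model = make_pipeline(
    CountVectorizer(),                                   # text -> term counts
    LatentDirichletAllocation(n_components=2, random_state=0),  # counts -> topics
    SVC(kernel="rbf"),                                   # topics -> class
)
model.fit(tweets, labels)
print(model.predict(["rally at the square"]))
```

A meta-data-based classifier would instead build its input from features such as posting times or user activity distributions, which the paper found more informative.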
SVM Classifier – a comprehensive java interface for support vector machine classification of microarray data
MOTIVATION: Graphical user interface (GUI) software promotes novelty by allowing users to extend its functionality. SVM Classifier is a cross-platform graphical application that handles very large datasets well. The purpose of this study is to create a GUI application that allows SVM users to perform SVM training, classification and prediction. RESULTS: The GUI provides user-friendly access to state-of-the-art SVM methods embodied in the LIBSVM implementation of Support Vector Machines. We implemented the Java interface using the standard Swing libraries. We used sample data from a breast cancer study to test classification accuracy, and achieved 100% accuracy in classification among the BRCA1–BRCA2 samples with the RBF kernel of the SVM. CONCLUSION: We have developed a Java GUI application that allows SVM users to perform SVM training, classification and prediction. We have demonstrated that support vector machines can accurately classify genes into functional categories based upon expression data from DNA microarray hybridization experiments. Among the different kernel functions that we examined, the SVM that uses a radial basis kernel function provides the best performance. The SVM Classifier is available at
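The RBF-kernel classification step described above can be sketched with scikit-learn's `SVC`, which wraps the same LIBSVM library the abstract mentions; the feature vectors below are hypothetical toy data, not the BRCA1/BRCA2 microarray samples:

```python
# Sketch of RBF-kernel SVM classification of expression-like data,
# using scikit-learn's SVC (a wrapper around LIBSVM). The "expression
# profiles" here are made-up two-feature toy samples.
from sklearn.svm import SVC

X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
y = ["BRCA1", "BRCA1", "BRCA2", "BRCA2"]

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # radial basis kernel
clf.fit(X, y)
print(clf.predict([[0.15, 0.85]]))  # a point near the BRCA1 cluster
```

Real microarray data would have thousands of features per sample, but the training and prediction calls are the same.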
Implementation and Analysis of Granular Support Vector Machines with Data Cleaning (GSVM-DC) for E-mail Spam Filtering
ABSTRACT: E-mail spam is the flooding of the Internet with many copies of the same message, forcing users to receive it even though it is unwanted. E-mail users need extra time to read each message and decide whether it is spam or not; consequently, many e-mail spam filters have been developed. In this final project, we built an e-mail spam-filtering system with a classification method based on a Support Vector Machine extended with Granular Computing and Data Cleaning. Adding these methods is expected to improve the accuracy of spam identification. Keywords: e-mail spam, classification, granular computing, support vector machines, data cleaning
HypeRvieW: an open source desktop application for hyperspectral remote-sensing data processing
In this article, we present a desktop application for the analysis, reference data generation, registration, and supervised spatial-spectral classification of hyperspectral remote-sensing images through a simple and intuitive interface. Regarding the classification ability, the different classification schemes are implemented using a chain structure as a base. It consists of five configurable stages that must be executed in a fixed order: preprocessing, spatial processing, pixel-wise classification, combination, and post-processing. The modular implementation makes extension easy by adding new algorithms for each stage or new classification chains. The tool has been designed as a platform that is open to the incorporation of algorithms by users interested in comparing classification schemes. As an example of use, a classification scheme based on the Quick Shift (QS) algorithm for segmentation and on Extreme Learning Machines (ELMs) or Support Vector Machines (SVMs) for classification is also proposed. The application is license-free, runs on the Linux operating system, and was developed in the C language using the GTK library, as well as other free libraries, to build the graphical user interfaces (GUIs). This work was supported by the Xunta de Galicia, Programme for Consolidation of Competitive Research Groups [2014/008], and by the Ministry of Science and Innovation, Government of Spain, co-funded by the FEDER funds of the European Union [TIN2013-41129-P]
Free-text keystroke dynamics authentication for Arabic language
This study introduces an approach for user authentication using free-text keystroke dynamics that incorporates text in the Arabic language. Arabic has characteristics that differ markedly from those of English. The approach followed in this study makes use of the keyboard's key layout: the method extracts timing features from specific key-pairs in the typed text. Decision trees were used to classify each user's data. In parallel, for comparison, support vector machines were also used for classification, in association with an ant colony optimisation feature-selection technique. The results obtained from this study are encouraging, as low false accept rates and false reject rates were achieved in the experimentation phase. This signifies that satisfactory overall system performance was achieved by using the typing attributes in the proposed approach while typing Arabic text
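The key-pair timing features described above can be sketched as digraph latencies extracted from a keystroke log; the event format and sample timings below are hypothetical, not taken from the study:

```python
# Sketch: extract key-pair (digraph) timing features from a keystroke
# log. Each event is a hypothetical (key, press_time_ms) pair for
# consecutively typed keys.
from collections import defaultdict

events = [("a", 0), ("l", 120), ("a", 260), ("m", 395)]

def digraph_latencies(events):
    """Map each consecutive key pair to its press-to-press latencies (ms)."""
    feats = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        feats[(k1, k2)].append(t2 - t1)
    return dict(feats)

print(digraph_latencies(events))
# → {('a', 'l'): [120], ('l', 'a'): [140], ('a', 'm'): [135]}
```

Per-user statistics over such latencies (means, variances per key pair) would then serve as input to the decision tree or SVM classifier.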
The Google Similarity Distance
Words and phrases acquire meaning from the way they are used in society, from their relative semantics to other words and phrases. For computers, the equivalent of 'society' is 'database', and the equivalent of 'use' is 'a way to search the database'. We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts, we use the world-wide-web as the database and Google as the search engine; the method is also applicable to other search engines and databases. This theory is then applied to construct a method to automatically extract the similarity, the Google similarity distance, of words and phrases from the world-wide-web using Google page counts. The world-wide-web is the largest database on earth, and the context information entered by millions of independent users averages out to provide automatic semantics of useful quality. We give applications in hierarchical clustering, classification, and language translation, with examples that distinguish between colors and numbers, cluster names of paintings by 17th-century Dutch masters and names of books by English novelists, demonstrate the ability to understand emergencies and primes, and perform a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in a mean agreement of 87% with the expert-crafted WordNet categories.
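The distance extracted from page counts in the abstract above is the normalized Google distance (NGD), which can be sketched as follows; the hit counts used here are hypothetical stand-ins for real search-engine results:

```python
# Sketch of the normalized Google distance (NGD) computed from page
# counts; the counts below are made up for illustration.
from math import log

def ngd(f_x, f_y, f_xy, n):
    """Normalized Google distance.

    f_x, f_y: hit counts for terms x and y alone
    f_xy:     hit count for pages containing both terms
    n:        total number of pages indexed by the search engine
    """
    lx, ly, lxy = log(f_x), log(f_y), log(f_xy)
    return (max(lx, ly) - lxy) / (log(n) - min(lx, ly))

# Terms that co-occur often relative to their individual frequencies
# get a small distance; rarely co-occurring terms get a large one.
print(round(ngd(10_000, 8_000, 6_000, 10**9), 3))
print(round(ngd(10_000, 8_000, 5, 10**9), 3))
```

Because only hit counts enter the formula, the same computation works against any search engine or database that reports result counts.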