
    Online Signature Verification using SVD Method

    Get PDF
    Online signature verification rests on the hypothesis that any writer shows similarity among signature samples, up to scale variability and small distortions. It is a dynamic method in which users sign and the biometric system then recognizes the signature by analyzing characteristics such as acceleration, pressure, and orientation. The proposed technique for online signature verification is based on the Singular Value Decomposition (SVD) technique and involves four stages: 1) data acquisition and preprocessing, 2) feature extraction, 3) matching (classification), and 4) decision making. The SVD is used to find the r singular vectors capturing the maximal energy of the signature data matrix A, called the principal subspace, which thus accounts for most of the variation in the original data. Having modeled the signature through its r-dimensional principal subspace, the authenticity of the test signature can be determined by calculating the average distance between its principal subspace and that of the template signature. The input device used for this signature verification system is the 5DT Data Glove 14 Ultra, which was originally designed for virtual reality applications. The output of the data glove, which captures the dynamic process of the signing action, is the data matrix A to be processed for feature extraction and matching. This work is divided into two parts. In part I, we investigate the performance of the SVD-based signature verification system using a new matching technique, namely calculating the average distance between the different subspaces. In part II, we investigate the performance of signature verification with a reduced-sensor data glove. To select the 7 most prominent sensors of the data glove, we calculate the F-value for each sensor and choose the 7 sensors that give the highest F-values.
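
    A minimal sketch of the subspace-matching idea described above, assuming glove recordings arrive as t x 14 matrices (t time samples, 14 sensors): each signature is summarized by the top-r right singular vectors of its data matrix, and two signatures are compared by the mean principal angle between their subspaces. The choice r = 3, the distance definition, and the acceptance threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def principal_subspace(A, r=3):
    """Top-r right singular vectors of a t x 14 glove data matrix A."""
    A = A - A.mean(axis=0)                 # center each sensor channel
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:r].T                        # 14 x r orthonormal basis

def subspace_distance(V1, V2):
    """Average distance between two r-dimensional subspaces via principal angles."""
    # Singular values of V1^T V2 are the cosines of the principal angles.
    cosines = np.clip(np.linalg.svd(V1.T @ V2, compute_uv=False), -1.0, 1.0)
    return float(np.mean(np.arccos(cosines)))   # mean principal angle, radians

# Hypothetical usage: template vs. test signature, both t x 14 sensor streams.
rng = np.random.default_rng(0)
template = rng.standard_normal((200, 14))
test = template + 0.05 * rng.standard_normal((200, 14))
d = subspace_distance(principal_subspace(template), principal_subspace(test))
accept = d < 0.5   # illustrative threshold; tuned on enrollment data in practice
```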

    Exact and approximate maximum inner product search with LEMP

    Full text link
    We study exact and approximate methods for maximum inner product search, a fundamental problem in a number of data mining and information retrieval tasks. We propose the LEMP framework, which supports both exact and approximate search with quality guarantees. At its heart, LEMP transforms a maximum inner product search problem over a large database of vectors into a number of smaller cosine similarity search problems. This transformation allows LEMP to prune large parts of the search space immediately and to select suitable search algorithms for each of the remaining problems individually. LEMP is able to leverage existing methods for cosine similarity search, but we also provide a number of novel search algorithms tailored to our setting. We conducted an extensive experimental study that provides insight into the performance of many state-of-the-art techniques, including LEMP, on multiple real-world datasets. We found that LEMP was often significantly faster or more accurate than alternative methods.
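
    A minimal sketch of the length/direction decomposition at the heart of this transformation, for the thresholded variant of the problem (retrieve all inner products at or above theta) and assuming nonzero vectors. The fixed bucket size and the brute-force cosine scan inside each bucket are simplifications; per the abstract, LEMP itself selects a suitable search algorithm for each subproblem individually.

```python
import numpy as np

def mips_above_theta(Q, P, theta, bucket_size=64):
    """For each query row of Q, list indices i with Q[q] . P[i] >= theta.

    Database rows are bucketed by decreasing vector length; a bucket is
    skipped once |q| * (max length in bucket) < theta, and each surviving
    bucket is scanned as a cosine similarity problem.
    """
    lengths = np.linalg.norm(P, axis=1)
    order = np.argsort(-lengths)                  # longest vectors first
    sorted_len = lengths[order]
    units = P[order] / sorted_len[:, None]        # unit-normalized database
    qnorms = np.linalg.norm(Q, axis=1)
    results = [[] for _ in range(len(Q))]
    for start in range(0, len(P), bucket_size):
        stop = min(start + bucket_size, len(P))
        lmax = sorted_len[start]                  # sorted, so first is the max
        for qi in range(len(Q)):
            if qnorms[qi] * lmax < theta:         # later buckets fail this too
                continue
            # q . p = |q| |p| cos(q, p): search on cosines within the bucket.
            cos = units[start:stop] @ (Q[qi] / qnorms[qi])
            ip = qnorms[qi] * sorted_len[start:stop] * cos
            results[qi].extend(order[start:stop][ip >= theta].tolist())
    return results
```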

    Speech Analysis using Relative Spectral Filtering (RASTA) and Dynamic Time Warping (DTW) methods

    Get PDF
    This work consists of an analysis of speech using the RASTA and DTW methods. The analysis is based on speech recognition, which converts identified words or speech in spoken language into a computer-readable format. The first speech recognition systems were developed in the 1950s. The variation in speech across individual speakers is the main challenge for speech recognition. Speech recognition has applications in many areas, such as customer call centers and as a medium for helping those with learning disabilities. This work presents an analysis of speech for Malay single words. There are three stages in speech recognition: analysis, feature extraction, and modeling. Relative Spectral Filtering (RASTA) is used as the method for feature extraction; RASTA is a method that suppresses undesirable convolutional and additive noise in speech recognition. The Dynamic Time Warping (DTW) method is used as the modeling technique.
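
    A minimal sketch of the DTW matching step, assuming two sequences of per-frame feature vectors (such as RASTA features) and a Euclidean local cost; the value returned is the length-normalized cost of the best monotonic alignment. The normalization and the toy usage below are illustrative assumptions, not the configuration used in this work.

```python
import numpy as np

def dtw_distance(X, Y):
    """Dynamic Time Warping cost between sequences X (n x d) and Y (m x d)."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(X[i - 1] - Y[j - 1])   # local frame distance
            # Best of match, insertion, deletion keeps the warp monotonic.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m] / (n + m)   # length-normalized alignment cost

# Hypothetical usage: compare an utterance against a stored word template.
rng = np.random.default_rng(1)
template = rng.standard_normal((40, 13))       # e.g., 13-dim feature frames
utterance = np.repeat(template, 2, axis=0)     # same word, spoken more slowly
print(dtw_distance(template, utterance))       # small despite the length change
```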

    Intra-modal Score level Fusion for Off-line Signature Verification

    Get PDF
    The signature is widely used as a means of personal verification, which emphasizes the need for a signature verification system. Often a single signature feature may produce unacceptable error rates. In this paper, Intra-modal Score level Fusion for Off-line Signature Verification (ISFOSV) is proposed. The scanned signature image is skeletonized and the exact signature area is obtained by preprocessing. In the first stage, 60 centers of signature are extracted by horizontal and vertical splitting. In the second stage, 168 features are extracted in two phases. Phase one consists of dividing the signature into 128 blocks using the center of signature, found by counting the number of black pixels, and the angular feature in each block is determined to generate 128 angular features. In phase two, the signature is divided into 40 blocks from the four corners of the signature to generate 40 angular features. In total, 168 angular features are extracted from phases one and two to verify the signature. The centers of signature are compared using correlation, and the distance between the angular features of the genuine and test signatures is computed. The correlation matching score and the distance matching score of the signature are fused to verify authenticity. A mathematical model is proposed to further optimize the results. It is observed that the proposed model has better FAR, FRR, and EER values compared to existing algorithms.
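
    A hypothetical sketch of two ingredients the abstract names: an angular feature per block and score-level fusion of a correlation score with a distance score. The centroid-based angle definition, the 1/(1+d) distance-to-score mapping, and the equal fusion weight are assumptions for illustration; the paper's exact formulations may differ.

```python
import numpy as np

def angular_feature(block, center):
    """Angle of a block's black-pixel centroid relative to a reference center."""
    ys, xs = np.nonzero(block)            # black pixels assumed to be nonzero
    if len(xs) == 0:
        return 0.0                        # empty block carries no angle
    return float(np.arctan2(ys.mean() - center[0], xs.mean() - center[1]))

def fused_score(ref_centers, test_centers, ref_angles, test_angles, w=0.5):
    """Fuse a correlation score on centers with a distance score on angles."""
    corr = np.corrcoef(np.ravel(ref_centers), np.ravel(test_centers))[0, 1]
    dist = np.linalg.norm(np.asarray(ref_angles) - np.asarray(test_angles))
    dist_score = 1.0 / (1.0 + dist)       # map a distance into (0, 1]
    return w * corr + (1.0 - w) * dist_score   # higher means more similar
```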

    A Machine Learning Approach for Plagiarism Detection

    Get PDF
    Plagiarism detection is gaining increasing importance due to requirements for integrity in education. The existing research has investigated the problem of plagiarism detection with varying degrees of success. The literature revealed that there are two main methods for detecting plagiarism, namely extrinsic and intrinsic. This thesis has developed two novel approaches to address both of these methods. Firstly, a novel extrinsic method for detecting plagiarism is proposed. The method is based on four well-known techniques, namely Bag of Words (BOW), Latent Semantic Analysis (LSA), Stylometry, and Support Vector Machines (SVM). The LSA application was fine-tuned to take in the stylometric features (most common words) in order to characterise document authorship, as described in chapter 4. The results revealed that LSA-based stylometry outperformed the traditional LSA application. Support vector machine based algorithms were used to perform the classification procedure in order to predict which author had written a particular book being tested. The proposed method has successfully addressed the limitations of semantic characteristics and identified the document source by assigning the book being tested to the right author in most cases. Secondly, the intrinsic detection method relied on the use of the statistical properties of the most common words. LSA was applied in this method to a group of most common words (MCWs) to extract their usage patterns based on the transitivity property of LSA. The feature sets of the intrinsic model were based on the frequency of the most common words, their relative frequencies in series, and the deviation of these frequencies across all books for a particular author. The intrinsic method aims to generate a model of author “style” by revealing a set of certain features of authorship. The model’s generation procedure focuses on just one author as an attempt to summarise aspects of an author’s style in a definitive and clear-cut manner. The thesis has also proposed a novel experimental methodology for testing the performance of both extrinsic and intrinsic methods for plagiarism detection. This methodology relies upon the CEN (Corpus of English Novels) dataset, but divides it into training and test datasets in a novel manner. Both approaches have been evaluated using the well-known leave-one-out cross-validation method. Results indicated that by integrating deep analysis (LSA) and stylometric analysis, hidden changes can be identified whether or not a reference collection exists.
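
    A minimal sketch of the extrinsic pipeline's main ingredients, combining most-common-word counts, LSA, and an SVM classifier with scikit-learn. The toy corpus, the 20-word vocabulary cap standing in for the MCW list, and the 2-component LSA are illustrative assumptions rather than the thesis's settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus: two "authors" with different function-word habits.
docs = ["the cat and the dog sat on the mat and slept",
        "a dog of a friend of a friend sat by a tree",
        "the rain and the wind and the sea were loud",
        "a walk of a mile by a river of a valley"]
authors = ["A", "B", "A", "B"]

# Keep only the most frequent words (a stand-in for the MCW list),
# apply LSA to their count matrix, then classify authorship with an SVM.
mcw = CountVectorizer(max_features=20, token_pattern=r"(?u)\b\w+\b")
lsa = TruncatedSVD(n_components=2, random_state=0)
clf = make_pipeline(mcw, lsa, LinearSVC())
clf.fit(docs, authors)
print(clf.predict(["the moon and the stars and the night"]))
# likely "A": heavy use of "the"/"and" matches author A's profile
```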

    An integrated system for quantitatively characterizing different handgrips and identifying their cortical substrates

    Get PDF
    Motor recovery of hand function in stroke patients requires months of regular rehabilitation therapy, and is often not measured in a quantitative manner. The first goal of this project was to design a system that can quantitatively track hand movements and, in practice, related changes in hand movements over time. The second goal of this project was to acquire hand and finger movement data during functional imaging (in our case, magnetoencephalography (MEG)) to be used for characterizing cortical plasticity associated with training. To achieve these goals, for each hand, finger flexion and extension were measured with a data glove, and wrist rotation was calculated using an accelerometer. To accomplish the first goal of the project, we designed and implemented Matlab algorithms for the acquisition of behavioral data on different handgrips, specifically power and precision grips. We compiled a set of 52 objects (26 man-made and 26 natural), displayed one at a time on a computer screen, and the subject was asked to form the appropriate handgrip for picking up the object image presented. To accomplish the second goal, we used the setup described above during an MEG scanning session. The timescales for the signals from the glove, accelerometer, and MEG were synchronized and the data analyzed using Brainstorm. We validated the proper functionality of the system by demonstrating that the glove and accelerometer data during handgrip formation correspond to the appropriate neural responses.
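
    One concrete piece of such a setup, sketched under the common assumption that the wrist is roughly still so gravity dominates the accelerometer reading: static roll and pitch recovered from a single three-axis sample. The axis convention is hypothetical, and yaw is not observable from an accelerometer alone.

```python
import math

def wrist_orientation(ax, ay, az):
    """Roll and pitch (degrees) from a static 3-axis accelerometer sample.

    Assumes the sensor is roughly still, so the measured acceleration is
    dominated by gravity; yaw cannot be recovered from gravity alone.
    """
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return math.degrees(roll), math.degrees(pitch)

# Hypothetical sample: device tilted slightly about its x-axis.
print(wrist_orientation(0.0, 0.17, 0.98))   # roll ~ 9.8 deg, pitch ~ 0 deg
```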