
    Signature Verification Approach using Fusion of Hybrid Texture Features

    In this paper, a writer-dependent signature verification method is proposed. Two different types of texture features, namely Wavelet and Local Quantized Patterns (LQP) features, are employed to extract transform-based and statistical information, respectively, from signature images. For each writer, two separate one-class support vector machines (SVMs), one per feature set, are trained to obtain two authenticity scores for a given signature. Finally, a score-level classifier fusion method integrates the scores from the two one-class SVMs into a single verification score. Only genuine signatures are used to train the one-class SVMs. The method has been tested on four different publicly available datasets; the results demonstrate its generality, and the proposed system outperforms existing systems in the literature. Comment: Neural Computing and Applications
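
    A minimal sketch of the score-level fusion idea described above, assuming Python with scikit-learn's OneClassSVM; the feature extractors extract_wavelet_features and extract_lqp_features and the fusion weight are illustrative placeholders rather than the paper's actual implementation.

        import numpy as np
        from sklearn.svm import OneClassSVM

        def extract_wavelet_features(images):
            # Placeholder for the paper's wavelet features: coarse intensity statistics only.
            return np.array([[img.mean(), img.std(),
                              np.abs(np.diff(img, axis=0)).mean(),
                              np.abs(np.diff(img, axis=1)).mean()] for img in images])

        def extract_lqp_features(images):
            # Placeholder for Local Quantized Pattern histograms: a plain intensity histogram.
            return np.array([np.histogram(img, bins=16, range=(0, 255), density=True)[0]
                             for img in images])

        class FusedSignatureVerifier:
            """Writer-dependent verifier: one one-class SVM per feature type, fused at score level."""

            def __init__(self, weight=0.5):
                self.weight = weight                       # illustrative fusion weight
                self.svm_wavelet = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
                self.svm_lqp = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)

            def fit(self, genuine_images):
                # Only genuine signatures of this writer are used for training.
                self.svm_wavelet.fit(extract_wavelet_features(genuine_images))
                self.svm_lqp.fit(extract_lqp_features(genuine_images))
                return self

            def score(self, images):
                # decision_function: signed distance of each sample to the one-class boundary.
                s1 = self.svm_wavelet.decision_function(extract_wavelet_features(images))
                s2 = self.svm_lqp.decision_function(extract_lqp_features(images))
                return self.weight * s1 + (1.0 - self.weight) * s2

            def verify(self, images, threshold=0.0):
                return self.score(images) >= threshold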

    Freeman chain code as representation in offline signature verification system

    Over recent years, there has been explosive growth of interest in pattern recognition. For example, the handwritten signature is a human biometric that can be used in many access control and security applications. However, a handwritten signature is not as uniform a characteristic as a fingerprint, iris or vein; it may change due to several factors such as mood, environment and age. A Signature Verification System (SVS), a part of pattern recognition, is a solution for such situations. The system can be decomposed into three stages: data acquisition and preprocessing, feature extraction, and verification. This paper presents techniques for an SVS that uses the Freeman chain code (FCC) as its data representation. In the first part of the feature extraction stage, the FCC was extracted using a boundary-based style on the largest contiguous part of the signature images. The extracted FCC was divided into four, eight or sixteen equal parts. In the second part of feature extraction, six global features were calculated. Finally, verification used k-Nearest Neighbour (k-NN) to test the performance. The MCYT bimodal database was used in every stage of the system. The best result achieved was a False Rejection Rate (FRR) of 14.67%, a False Acceptance Rate (FAR) of 15.83% and an Equal Error Rate (EER) of 0.43%, with the shortest computation time of 7.53 seconds and 47 features.
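
    A minimal sketch of Freeman chain code extraction, assuming Python/NumPy and a standard 8-connected boundary-following rule; it is not the paper's exact boundary-based routine, and the six global features are not reproduced here.

        import numpy as np

        # 8-connected Freeman directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
        DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

        def freeman_chain_code(binary):
            """Trace the outer boundary of a binary image and return its Freeman chain code.
            The paper applies this to the largest contiguous part of the signature image;
            the stopping rule here is simplified to 'back at the starting pixel'."""
            ys, xs = np.nonzero(binary)
            if len(ys) == 0:
                return []
            r0 = ys.min()
            start = (r0, xs[ys == r0].min())      # upper-left-most foreground pixel
            code, current, direction = [], start, 7
            while True:
                # begin the anti-clockwise neighbourhood search relative to the last move
                d = (direction + 7) % 8 if direction % 2 == 0 else (direction + 6) % 8
                found = False
                for _ in range(8):
                    ny, nx = current[0] + DIRECTIONS[d][0], current[1] + DIRECTIONS[d][1]
                    if 0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1] and binary[ny, nx]:
                        code.append(d)
                        current, direction, found = (ny, nx), d, True
                        break
                    d = (d + 1) % 8
                if not found or current == start:
                    break
            return code

        def split_equal_parts(code, n_parts):
            """Divide the chain code into n roughly equal parts (the paper uses 4, 8 or 16)."""
            size = max(1, len(code) // n_parts)
            return [code[i * size:(i + 1) * size] for i in range(n_parts)]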

    Symbolic Exact Inference for Discrete Probabilistic Programs

    The computational burden of probabilistic inference remains a hurdle for applying probabilistic programming languages to practical problems of interest. In this work, we provide a semantic and algorithmic foundation for efficient exact inference on discrete-valued finite-domain imperative probabilistic programs. We leverage and generalize efficient inference procedures for Bayesian networks, which exploit the structure of the network to decompose the inference task, thereby avoiding full path enumeration. To do this, we first compile probabilistic programs to a symbolic representation. Then we adapt techniques from the probabilistic logic programming and artificial intelligence communities in order to perform inference on the symbolic representation. We formalize our approach, prove it sound, and experimentally validate it against existing exact and approximate inference techniques. We show that our inference approach is competitive with inference procedures specialized for Bayesian networks, thereby expanding the class of probabilistic programs that can be practically analyzed.
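
    For concreteness, the sketch below (Python, hypothetical two-variable program, not from the paper) shows the baseline exact-inference semantics for a discrete program: full path enumeration with exact rational weights and conditioning on an observation. The paper's symbolic, factored compilation is designed precisely to avoid this kind of enumeration while returning the same exact answers.

        from fractions import Fraction
        from itertools import product

        # Toy discrete program:  rain ~ Bernoulli(1/5);  sprinkler ~ Bernoulli(3/10);
        # wet = rain or sprinkler, observed True; query P(rain | wet).
        def enumerate_posterior():
            weights = {True: Fraction(0), False: Fraction(0)}
            for rain, sprinkler in product([True, False], repeat=2):
                p = (Fraction(1, 5) if rain else Fraction(4, 5)) * \
                    (Fraction(3, 10) if sprinkler else Fraction(7, 10))
                if rain or sprinkler:              # condition on the observation wet = True
                    weights[rain] += p
            total = weights[True] + weights[False]
            return {value: w / total for value, w in weights.items()}

        print(enumerate_posterior())               # exact result: P(rain=True | wet) = 5/11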

    Biometric signature verification system based on freeman chain code and k-nearest neighbor

    A signature is a human biometric that may change due to factors such as age, mood and environment, which means two signatures from the same person never match each other perfectly. A Signature Verification System (SVS) is a solution for such situations. The system can be decomposed into three stages: data acquisition and preprocessing, feature extraction, and verification. This paper presents techniques for an SVS that uses the Freeman chain code (FCC) as its data representation. Before feature extraction, the raw images undergo a preprocessing stage: binarization, noise removal, cropping and thinning. In the first part of the feature extraction stage, the FCC was extracted using a boundary-based style on the largest contiguous part of the signature images. The extracted FCC was divided into four, eight or sixteen equal parts. In the second part of feature extraction, six global features were calculated on the split images to test their efficiency. Finally, verification used Euclidean distance as the matching measure with k-Nearest Neighbours (k-NN). The MCYT bimodal database was used in every stage of the system. Based on the experimental results, the lowest FRR and FAR were 6.67% and 12.44%, respectively, with an AER of 9.85%, which is better in terms of performance than other works using the same database.
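
    A minimal sketch of the verification stage, assuming Python and hypothetical six-dimensional global feature vectors; Euclidean distance is the matching measure and the decision is a k-NN majority vote, but the reference-set protocol and the value of k are illustrative rather than taken from the paper's experiments.

        import numpy as np

        def euclidean(a, b):
            return float(np.sqrt(np.sum((np.asarray(a) - np.asarray(b)) ** 2)))

        def knn_verify(query, references, labels, k=3):
            """Accept the query as genuine if the majority of its k nearest reference
            feature vectors (by Euclidean distance) are labelled genuine."""
            ranked = sorted(zip((euclidean(query, r) for r in references), labels))
            nearest = [label for _, label in ranked[:k]]
            return nearest.count("genuine") > k // 2

        # Hypothetical six-dimensional global feature vectors (not the paper's actual values).
        refs = [[0.8, 1.2, 0.4, 3.1, 0.9, 2.2],
                [0.7, 1.1, 0.5, 3.0, 1.0, 2.1],
                [2.5, 0.2, 1.9, 0.4, 2.8, 0.3]]
        labels = ["genuine", "genuine", "forged"]
        print(knn_verify([0.75, 1.15, 0.45, 3.05, 0.95, 2.15], refs, labels, k=3))  # True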

    Classifying sequences by the optimized dissimilarity space embedding approach: a case study on the solubility analysis of the E. coli proteome

    We evaluate a version of the recently proposed classification system named Optimized Dissimilarity Space Embedding (ODSE) that operates in the input space of sequences of generic objects. The ODSE system was originally presented as a classification system for patterns represented as labeled graphs. However, since ODSE is founded on the dissimilarity space representation of the input data, the classifier can easily be adapted to any input domain in which a meaningful dissimilarity measure can be defined. Here we demonstrate the effectiveness of the ODSE classifier for sequences on an application dealing with the recognition of the solubility degree of the Escherichia coli proteome. Solubility, or equivalently aggregation propensity, is an important property of protein molecules, intimately related to the mechanisms underlying the chemico-physical process of folding. Each protein in our dataset is associated with a solubility degree and is represented as a sequence of symbols denoting the 20 amino acid residues. The computational results obtained here, which we stress were achieved with no context-dependent tuning of the ODSE system, confirm the validity and generality of the ODSE-based approach for structured data classification. Comment: 10 pages, 49 references
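
    A minimal sketch of the dissimilarity space representation that ODSE builds on, assuming Python/NumPy and plain edit distance as the sequence dissimilarity; prototype selection and the optimization that give ODSE its name are not shown.

        import numpy as np

        def levenshtein(a, b):
            """Plain edit distance between two symbol sequences (illustrative dissimilarity)."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                curr = [i]
                for j, cb in enumerate(b, 1):
                    curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
                prev = curr
            return prev[-1]

        def dissimilarity_embedding(sequences, prototypes, dissimilarity=levenshtein):
            """Represent each sequence by its vector of dissimilarities to the prototypes;
            the resulting fixed-length vectors can be fed to any standard classifier."""
            return np.array([[dissimilarity(s, p) for p in prototypes] for s in sequences],
                            dtype=float)

        # Toy amino-acid-like sequences (hypothetical, not the E. coli dataset).
        prototypes = ["MKV", "GGA"]
        X = dissimilarity_embedding(["MKVL", "GGAA", "MKV"], prototypes)
        print(X)    # each row: [edit distance to "MKV", edit distance to "GGA"]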

    SynSig2Vec: Learning Representations from Synthetic Dynamic Signatures for Real-world Verification

    An open research problem in automatic signature verification is the handling of skilled forgery attacks; however, skilled forgeries are very difficult to acquire for representation learning. To tackle this issue, this paper proposes to learn dynamic signature representations by ranking synthesized signatures. First, a neuromotor-inspired signature synthesis method is proposed to synthesize signatures with different distortion levels for any template signature. Then, given the templates, we construct a lightweight one-dimensional convolutional network that learns to rank the synthesized samples, directly optimizing the average precision of the ranking to exploit relative and fine-grained signature similarities. Finally, after training, fixed-length representations can be extracted from dynamic signatures of variable lengths for verification. One highlight of our method is that it requires neither skilled nor random forgeries for training, yet it surpasses the state of the art by a large margin on two public benchmarks. Comment: To appear in AAAI 2020
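
    A minimal sketch of the representation-learning setup, assuming PyTorch; a lightweight 1D convolutional encoder with global average pooling maps variable-length dynamic signatures to fixed-length embeddings. The paper's average-precision ranking objective is replaced here by a simple margin ranking loss as a stand-in, and the channel layout and sizes are illustrative.

        import torch
        import torch.nn as nn

        class SigEncoder(nn.Module):
            """Lightweight 1D CNN: (batch, channels, time) input -> fixed-length embedding."""
            def __init__(self, in_channels=3, embed_dim=64):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                    nn.Conv1d(64, embed_dim, kernel_size=3, padding=1), nn.ReLU(),
                )

            def forward(self, x):
                h = self.conv(x)          # (batch, embed_dim, time)
                return h.mean(dim=-1)     # global average pooling handles variable lengths

        encoder = SigEncoder(in_channels=3)
        template = torch.randn(1, 3, 220)         # e.g. x, y, pressure over 220 timesteps
        low_distortion = torch.randn(1, 3, 180)   # synthesized, mildly distorted
        high_distortion = torch.randn(1, 3, 260)  # synthesized, heavily distorted

        sim = nn.CosineSimilarity(dim=1)
        s_low = sim(encoder(template), encoder(low_distortion))
        s_high = sim(encoder(template), encoder(high_distortion))
        # Stand-in ranking objective: the less distorted sample should score higher than the
        # more distorted one (the paper instead optimizes average precision over many samples).
        loss = nn.MarginRankingLoss(margin=0.1)(s_low, s_high, torch.ones(1))
        loss.backward()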

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Applications of interval analysis to selected topics in statistical computing

    In interval analysis, an interval is treated not only as a set of numbers but as a number in and of itself. The development of interval analysis is closely connected to the development of electronic digital computers. Conventional electronic computation is typically performed using a fixed-precision, floating-point processor. This approach is a finite approximation to calculations with real numbers of infinite precision, and the finite approximation leads to errors of various types. While the fundamental operations of addition, subtraction, multiplication and division are typically accurate to one-half unit in the last place in floating-point computations, the effect of cumulative error in repeated calculations is usually unknown and too frequently ignored. Using interval analysis, an interval is constructed which, after each computation, is guaranteed to contain the true value. By seeking ways to keep the interval narrow, it is possible to obtain results of guaranteed accuracy. This dissertation applies interval analysis to selected topics in statistical computing. Two major topics are addressed: bounding computational errors and global optimization.

    For bounding computational errors, series are used which yield a bound on the truncation error that results from a finite series approximation to an infinite series. By evaluating the series with intervals to bound rounding errors and by using the bound on the truncation error, an interval is obtained which is guaranteed to contain the true value. For some series, interval numerical quadrature rules are also employed. These ideas are applied to the computation of tail probabilities and critical points of several statistical distributions, such as the Bivariate Chi-Square and Bivariate F distributions.

    Regarding global optimization, the EM algorithm is one tool frequently used for optimization in statistics and probability. The EM algorithm is fairly flexible and is able to handle missing data. However, as with most optimization algorithms, there is no guarantee of finding a global optimum. Interval analysis can be used to compute an enclosure of the range of a function over a specified domain. By enclosing the range of the gradient of the log-likelihood, those parts of the parameter space where the gradient is nonzero can be eliminated as not containing stationary points. The algorithm proceeds by repeatedly bisecting an initial region into smaller regions, each of which is checked for the possibility that the gradient is zero there; regions where the gradient is provably nonzero are discarded. Upon termination, all stationary points of the log-likelihood are contained in the remaining regions.
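
    A minimal sketch of the interval bisection idea described above for locating stationary points, written in Python with a toy Interval class (no outward/directed rounding, unlike a real interval package) and a hypothetical objective f(x) = x^3 - 3x; boxes whose gradient enclosure excludes zero are discarded.

        class Interval:
            """Toy closed interval [lo, hi]; real interval arithmetic would round outward."""
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def _coerce(self, other):
                return other if isinstance(other, Interval) else Interval(other, other)
            def __add__(self, other):
                o = self._coerce(other)
                return Interval(self.lo + o.lo, self.hi + o.hi)
            def __sub__(self, other):
                o = self._coerce(other)
                return Interval(self.lo - o.hi, self.hi - o.lo)
            def __mul__(self, other):
                o = self._coerce(other)
                p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
                return Interval(min(p), max(p))
            def contains_zero(self):
                return self.lo <= 0.0 <= self.hi
            def width(self):
                return self.hi - self.lo

        def gradient(x):            # interval extension of f'(x) = 3x^2 - 3 for f(x) = x^3 - 3x
            return x * x * 3 - 3    # stationary points at x = -1 and x = +1

        def stationary_regions(domain, tol=1e-3):
            """Repeatedly bisect; keep only boxes whose gradient enclosure contains zero."""
            work, keep = [domain], []
            while work:
                box = work.pop()
                if not gradient(box).contains_zero():
                    continue                      # gradient provably nonzero: discard this box
                if box.width() < tol:
                    keep.append((box.lo, box.hi))
                else:
                    mid = (box.lo + box.hi) / 2.0
                    work.extend([Interval(box.lo, mid), Interval(mid, box.hi)])
            return keep

        print(stationary_regions(Interval(-2.0, 2.0)))   # small boxes clustered around -1 and +1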