487 research outputs found

    An Efficient Hidden Markov Model for Offline Handwritten Numeral Recognition

    Full text link
    Traditionally, the performance of OCR algorithms and systems is measured on the recognition of isolated characters. When a system classifies an individual character, its output is typically a character label or a reject marker corresponding to an unrecognized character. By comparing output labels with the correct labels, the numbers of correct recognitions, substitution errors (misrecognized characters), and rejects (unrecognized characters) are determined. Nowadays, although recognition of printed isolated characters is performed with high accuracy, recognition of handwritten characters still remains an open research problem. The ability to identify machine-printed characters in an automated or semi-automated manner has obvious applications in numerous fields. Since creating an algorithm with a one-hundred-percent correct recognition rate is quite probably impossible in a world of noise and varied font styles, it is important to design character recognition algorithms with these failures in mind, so that when mistakes are inevitably made they will at least be understandable and predictable to the person working with the system. Comment: 6 pages, 5 figures
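The evaluation described above can be sketched in a few lines. This is an illustrative stand-in, not code from the paper: the reject marker and the label format are assumptions.

```python
REJECT = "?"  # assumed marker emitted for an unrecognized character

def score_recognition(predicted, truth):
    """Return (correct, substitutions, rejects) counts by comparing
    the recognizer's output labels with the ground-truth labels."""
    correct = substitutions = rejects = 0
    for p, t in zip(predicted, truth):
        if p == REJECT:
            rejects += 1          # character was not recognized
        elif p == t:
            correct += 1          # correct recognition
        else:
            substitutions += 1    # misrecognized character
    return correct, substitutions, rejects

counts = score_recognition(list("8O?13"), list("80213"))
print(counts)  # → (3, 1, 1): 3 correct, 1 substitution, 1 reject
```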

    An experimental study on the deformation behaviour and fracture mode of recycled aluminium alloy AA6061-reinforced alumina oxide undergoing high-velocity impact

    Get PDF
    The anisotropic behaviour and the damage evolution of recycled aluminium alloy AA6061 reinforced with alumina oxide are investigated in this paper using the Taylor impact test. Tests are performed at impact velocities ranging from 190 to 360 m/s by firing a cylindrical projectile at an anvil target. The deformation behaviour and the fracture modes are analysed using the digitized footprints of the deformed specimens. Damage initiation and progression are observed around the impact surface, and at a surface 0.5 cm from the impact area, using a scanning electron microscope. The deformed specimens showed several ductile fracture modes: mushrooming, tensile splitting and petalling. The critical impact velocity is determined to be below 280 m/s. The specimens showed a strong strain-rate dependency due to damage evolution driven by severe localized plastic-strain deformation. The scanning electron microscope analysis showed that the damage mechanism progresses via void initiation, growth and coalescence in the material. The micrograph within the footprint surface shows the presence of alumina oxide particles within the specimen. The microstructure analysis shows a significant refinement of the specimen particles at the surface located 0.5 cm above the impact area. ImageJ software is used to measure the average size of voids within this surface. Non-symmetrical (ellipse-shaped) footprints indicated plastic anisotropic behaviour. The results in this paper provide a better understanding of the deformation behaviour of recycled materials subjected to dynamic loading. This information on mechanical response is crucial before any potential application can be established to substitute for primary material sources

    TEXT CONTENT DEPENDENT WRITER IDENTIFICATION

    Get PDF
    A text-content-based personal identification system is vital in resolving the problem of identifying an unknown document's writer using a set of handwritten samples from alleged writers. Text written on a paper document is usually captured as an image by a scanner or camera for computer processing. The most challenging problem encountered in text image processing is the extraction of a robust feature vector from a set of inconstant handwritten text images obtained from the same writer at different times. In this work, a new feature extraction method is employed to produce effective text features for developing a personal identification system. The extracted features form a feature vector, which is fed as input data into a classification algorithm based on the Support Vector Machine (SVM). Experiments were conducted to identify the writers of query handwritten texts. Results show satisfactory performance of the proposed system: it was able to identify the writers of query handwritten texts
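The classification stage described above can be illustrated with a toy linear SVM trained by subgradient descent on the hinge loss. This is a minimal sketch, not the authors' system: real feature vectors would be extracted from the handwritten-text images, whereas the 2-D points below are synthetic stand-ins for two writers.

```python
def train_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Train a linear SVM (hinge loss + L2 regularizer) by
    subgradient descent. Labels y are in {-1, +1}; returns (w, b)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:
                # Sample violates the margin: step along hinge subgradient.
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:
                # Only the regularizer contributes.
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    """Sign of the decision function: which writer produced x."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Writer A's feature vectors cluster near (1, 1); writer B's near (-1, -1).
X = [[1.0, 1.2], [0.8, 1.1], [1.1, 0.9],
     [-1.0, -0.8], [-0.9, -1.1], [-1.2, -1.0]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_svm(X, y)
print(predict(w, b, [0.9, 1.0]))  # query sample identified as writer A
```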

    Picasso, Matisse, or a Fake? Automated Analysis of Drawings at the Stroke Level for Attribution and Authentication

    Full text link
    This paper proposes a computational approach for the analysis of strokes in line drawings by artists. We aim at developing an AI methodology that facilitates attribution of drawings of unknown authors in a way that is not easily deceived by forged art. The methodology is based on quantifying the characteristics of individual strokes in drawings. We propose a novel algorithm for segmenting individual strokes. We designed and compared different hand-crafted and learned features for the task of quantifying stroke characteristics. We also propose and compare different classification methods at the drawing level. We experimented with a dataset of 300 digitized drawings with over 80 thousand strokes. The collection mainly consisted of drawings by Pablo Picasso, Henri Matisse, and Egon Schiele, besides a small number of representative works of other artists. The experiments show that the proposed methodology can classify individual strokes with 70%-90% accuracy and aggregate over drawings with accuracy above 80%, while being robust against fakes (with 100% accuracy for detecting fakes in most settings)

    Multibiometric security in wireless communication systems

    Get PDF
    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 05/08/2010. This thesis has aimed to explore an application of multibiometrics to secured wireless communications. The media of study for this purpose included Wi-Fi, 3G, and WiMAX, over which simulations and experimental studies were carried out to assess performance. Specifically, restriction of access to authorized users only is provided by a technique referred to hereafter as a multibiometric cryptosystem. In brief, the system is built upon a complete challenge/response methodology in order to obtain a high level of security, on the basis of user identification by fingerprint and further confirmation by verification of the user through text-dependent speaker recognition. First is the enrolment phase, in which the database of watermarked fingerprints with memorable texts, along with the voice features based on the same texts, is created by sending them to the server over the wireless channel. Later is the verification stage, at which claimant users (those who claim to be genuine) are verified against the database; it consists of five steps. Initially, at the identification level, one is asked to present one's fingerprint and a memorable word, with the word watermarked into the fingerprint image, in order for the system to authenticate the fingerprint and verify its validity by retrieving the challenge for an accepted user. The following three steps then involve speaker recognition: the user responding to the challenge by text-dependent voice, the server authenticating the response, and finally the server accepting/rejecting the user. In order to implement fingerprint watermarking, i.e. incorporating the memorable word as a watermark message into the fingerprint image, an algorithm of five steps has been developed.
The first three novel steps, which concern fingerprint image enhancement (CLAHE with a clip limit, standard deviation analysis and sliding-neighbourhood processing), are followed by two further steps for embedding and extracting the watermark in the enhanced fingerprint image using the Discrete Wavelet Transform (DWT). In the speaker recognition stage, the limitations of this technique in wireless communication have been addressed by sending voice features (cepstral coefficients) instead of raw samples. This scheme reaps the advantages of reduced transmission time and reduced dependency of the data on the communication channel, together with no packet loss. Finally, the obtained results have verified the claims
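The DWT embedding/extraction idea above can be sketched in pure Python with a single-level 2-D Haar transform standing in for the thesis's DWT stage. This is a hedged illustration, not the thesis's algorithm: the additive, non-blind scheme (bits recovered by comparison with the original coefficients) and the choice of the diagonal-detail subband are simplifications.

```python
def haar2d(img):
    """One-level 2-D Haar transform of a square image with even side;
    returns the four subbands (LL, HL, LH, HH)."""
    n = len(img)
    # Transform rows: pairwise averages (low-pass) then differences (high-pass).
    rows = [[(r[2*i] + r[2*i+1]) / 2 for i in range(n // 2)] +
            [(r[2*i] - r[2*i+1]) / 2 for i in range(n // 2)] for r in img]
    # Transform columns the same way.
    cols = list(zip(*rows))
    out = [[(c[2*i] + c[2*i+1]) / 2 for i in range(n // 2)] +
           [(c[2*i] - c[2*i+1]) / 2 for i in range(n // 2)] for c in cols]
    out = [list(r) for r in zip(*out)]
    h = n // 2
    LL = [r[:h] for r in out[:h]]
    HL = [r[h:] for r in out[:h]]
    LH = [r[:h] for r in out[h:]]
    HH = [r[h:] for r in out[h:]]
    return LL, HL, LH, HH

def embed(HH, bits, alpha=4.0):
    """Add +alpha for bit 1, -alpha for bit 0 to HH coefficients."""
    flat = [v for row in HH for v in row]
    for k, b in enumerate(bits):
        flat[k] += alpha if b else -alpha
    h = len(HH)
    return [flat[i*h:(i+1)*h] for i in range(h)]

def extract(HH_marked, HH_orig, n_bits):
    """Recover bits by comparing marked and original coefficients."""
    flat_m = [v for row in HH_marked for v in row]
    flat_o = [v for row in HH_orig for v in row]
    return [1 if m > o else 0 for m, o in zip(flat_m[:n_bits], flat_o[:n_bits])]

img = [[(i * 7 + j * 3) % 17 for j in range(8)] for i in range(8)]
LL, HL, LH, HH = haar2d(img)
bits = [1, 0, 1, 1, 0, 1, 0, 0]           # e.g. bits of a memorable word
HH_marked = embed(HH, bits)
print(extract(HH_marked, HH, len(bits)))  # recovers the embedded bits
```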

    Biometric Person Identification Using Near-infrared Hand-dorsa Vein Images

    Get PDF
    Biometric recognition is becoming more and more important with the increasing demand for security, and more usable with improvements in computer vision and pattern recognition technologies. Hand vein patterns have been recognised as a good biometric measure for personal identification due to many excellent characteristics, such as uniqueness and stability, as well as difficulty to copy or forge. This thesis covers all the research and development aspects of a biometric person identification system based on near-infrared hand-dorsa vein images. Firstly, the design and realisation of an optimised vein image capture device is presented. In order to maximise the quality of the captured images at relatively low cost, infrared illumination and imaging theory are discussed. Then a database containing 2040 images from 102 individuals, captured with this device, is introduced. Secondly, image analysis and customised image pre-processing methods are discussed. The consistency of the database images is evaluated using mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Geometrical pre-processing, including shearing correction and region of interest (ROI) extraction, is introduced to improve image consistency. Image noise is evaluated using total variance (TV) values. Grey-level pre-processing, including grey-level normalisation, filtering and adaptive histogram equalisation, is applied to enhance vein patterns. Thirdly, a gradient-based image segmentation algorithm is compared with popular algorithms from the literature, such as the Niblack and threshold-image algorithms, to demonstrate its effectiveness in vein pattern extraction. Post-processing methods including morphological filtering and thinning are also presented. Fourthly, feature extraction and recognition methods are investigated, with several new approaches based on keypoints and local binary patterns (LBP) proposed. 
Through comprehensive comparison with other approaches based on structure and texture features as well as performance evaluation using the database created with 2040 images, the proposed approach based on multi-scale partition LBP is shown to provide the best recognition performance with an identification rate of nearly 99%. Finally, the whole hand-dorsa vein identification system is presented with a user interface for administration of user information and for person identification
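The basic LBP operator underlying the features above can be sketched as follows. The thesis's multi-scale partition variant builds on this idea; the fixed 3x3 neighbourhood and the pixel values below are illustrative, not taken from the thesis.

```python
def lbp_code(img, y, x):
    """8-bit LBP code: compare the 8 neighbours with the centre pixel,
    setting a bit wherever the neighbour is >= the centre."""
    c = img[y][x]
    # Neighbours in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels --
    a simple texture feature vector for matching vein patterns."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

img = [[10, 10, 10, 10],
       [10, 50, 20, 10],
       [10, 10, 10, 10]]
print(lbp_code(img, 1, 1))  # → 0: every neighbour is darker than the centre
```

A multi-scale partition approach would divide the image into blocks at several scales, compute one such histogram per block, and concatenate them into the final feature vector.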

    Automated framework for robust content-based verification of print-scan degraded text documents

    Get PDF
    Fraudulent documents frequently cause severe financial damage and impose security breaches on civil and government organizations. The rapid advance of technology and the widespread availability of personal computers have not reduced the use of printed documents. While digital documents can be verified by many robust and secure methods, such as digital signatures and digital watermarks, verification of printed documents still relies on manual inspection of embedded physical security mechanisms. The objective of this thesis is to propose an efficient automated framework for robust content-based verification of printed documents. The principal issue is to achieve robustness with respect to the degradations and increased levels of noise that occur over multiple cycles of printing and scanning. It is shown that classic OCR systems fail under such conditions; moreover, OCR systems typically rely heavily on high-level linguistic structures to improve recognition rates. However, inferring knowledge about the contents of the document image from a-priori statistics is contrary to the nature of document verification. Instead, a system is proposed that utilizes specific knowledge of the document to perform highly accurate content verification based on a Print-Scan degradation model and character shape recognition. Such specific knowledge of the document is a reasonable choice for the verification domain, since the document contents are already known in order to verify them. The system analyses digital multi-font PDF documents to generate a descriptive summary of the document, referred to as a "Document Description Map" (DDM). The DDM is later used for verifying the content of printed and scanned copies of the original documents. The system utilizes 2-D Discrete Cosine Transform based features and an adaptive hierarchical classifier trained with synthetic data generated by a Print-Scan degradation model. 
The system is tested with varying degrees of Print-Scan Channel corruption on a variety of documents with corruption produced by repetitive printing and scanning of the test documents. Results show the approach achieves excellent accuracy and robustness despite the high level of noise
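The 2-D DCT feature extraction mentioned above can be sketched as follows. This is an illustrative stand-in for the thesis's feature stage: an orthonormal DCT-II applied to a character-sized block, keeping the low-frequency coefficients as a compact shape descriptor. The 8x8 block size and the 3x3 set of retained coefficients are assumed choices.

```python
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of an n x n block (direct evaluation,
    fine for small blocks; real systems use fast transforms)."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for y in range(n):
                for x in range(n):
                    s += (block[y][x]
                          * math.cos(math.pi * (2*y + 1) * u / (2*n))
                          * math.cos(math.pi * (2*x + 1) * v / (2*n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def dct_features(block, k=3):
    """Keep the top-left k x k low-frequency coefficients as the
    character-shape feature vector (zig-zag scans are common too)."""
    coeffs = dct2(block)
    return [coeffs[u][v] for u in range(k) for v in range(k)]

block = [[(x + y) % 4 for x in range(8)] for y in range(8)]
feats = dct_features(block)
print(len(feats))  # → 9
```

Low-frequency DCT coefficients capture the coarse shape of a character while discarding fine detail, which is what makes them comparatively stable under print-scan noise.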