
    Recognition of off-line Arabic handwritten dates and numeral strings

    In this thesis, we present an automatic recognition system for CENPARMI off-line Arabic handwritten dates collected from writers of Arabic nationalities. This system consists of modules that segment and recognize an Arabic handwritten date image. First, in the segmentation module, the system explicitly segments a date image into a sequence of basic constituents or segments. As part of this module, a special sub-module was developed to over-segment any constituent that is a candidate for a touching pair. The proposed touching-pair segmentation sub-module has been tested on three different datasets of handwritten numeral touching pairs: the CENPARMI Arabic [6], Urdu, and Dari [24] datasets. Final recognition rates of 92.22%, 90.43%, and 86.10% were achieved for Arabic, Urdu, and Dari, respectively. Afterwards, the segments are preprocessed and sent to the classification module. In this stage, feature vectors are extracted and then recognized by an isolated numeral classifier. This recognition system has been tested on five different isolated numeral databases: the CENPARMI Arabic [6], Urdu, Dari [24], Farsi, and Pashto databases, with overall recognition rates of 97.29%, 97.75%, 97.75%, 97.95%, and 98.36%, respectively. Finally, a date post-processing module was developed to improve the recognition results. This post-processing module is used in two different stages: first, in the date stage, to verify that the segmentation/recognition output represents a valid date image and to choose the best date format to assign to the image; second, in the sub-field stage, to evaluate the values of the date's three parts: day, month, and year. Experiments on two different databases of Arabic handwritten dates, the CENPARMI Arabic database [6] and the CENPARMI Arabic Bank Cheques database [7], show encouraging results with overall recognition rates of 85.05% and 66.49%, respectively.
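
    The sub-field validation idea described above can be pictured with a minimal Python sketch. The function names are hypothetical illustrations, not the thesis's actual module: the sketch simply rejects day/month/year combinations that do not form a real calendar date, which is the kind of check the post-processing stage performs when choosing among date interpretations.

```python
from datetime import date

def is_valid_date(day: int, month: int, year: int) -> bool:
    """Return True if the day/month/year sub-fields form a real calendar date."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

def plausible_interpretations(candidates):
    """Keep only the (day, month, year) readings of a recognized digit string
    that pass the sub-field check, e.g. to choose among candidate date formats."""
    return [c for c in candidates if is_valid_date(*c)]

# Example: a string read as 31/02/2019 is rejected, 13/02/2019 is kept.
print(plausible_interpretations([(31, 2, 2019), (13, 2, 2019)]))
```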

    Novel word recognition and word spotting systems for offline Urdu handwriting

    Word recognition for offline Arabic, Farsi, and Urdu handwriting is a subject that has attracted much attention in the OCR field. This thesis presents the implementation of an offline Urdu Handwritten Word Recognition (HWR) system and an Urdu word spotting technique. The thesis first introduces the creation of several offline CENPARMI Urdu databases, which were necessary for the offline Urdu HWR experiments. A holistic recognition approach was followed for the Urdu HWR system. In this system, basic pre-processing of the images was performed. In the feature extraction phase, gradient and structural features were extracted from the greyscale and binary word images, respectively. The recognition system extracted 592 features, which helped improve the recognition results. The system was trained and tested on 57 words. Overall, we achieved a 97% accuracy rate for handwritten word recognition using the SVM classifier. Our word spotting technique used the holistic HWR system for recognition purposes. The word spotting system consisted of two processes: the segmentation of handwritten connected components and diacritics from Urdu text lines, and the word spotting algorithm. A small database of handwritten text pages was created for testing the word spotting system; it consisted of texts from ten native Urdu speakers. A rule-based segmentation system was applied to segment (or extract) handwritten Urdu subwords or connected components from the text lines. We achieved a 92% correct segmentation rate for 372 text lines. In the word spotting algorithm, candidate words were generated from the segmented connected components. These candidate words were sent to the holistic HWR system, which extracted the features and tried to recognize each image as one of the 57 words. After classification, each image was sent to the verification/rejection phase, which helped in rejecting the maximum number of unseen (raw data) images. Overall, we achieved a 50% word spotting precision at a 70% recall rate.
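
    The reported spotting figures are standard retrieval metrics. The short, self-contained Python sketch below (toy data, not the thesis's evaluation code) shows how precision and recall would be computed from the set of spotted word images and the ground-truth occurrences:

```python
def precision_recall(retrieved, relevant):
    """Compute word-spotting precision and recall from collections of image ids."""
    retrieved, relevant = set(retrieved), set(relevant)
    true_pos = len(retrieved & relevant)
    precision = true_pos / len(retrieved) if retrieved else 0.0
    recall = true_pos / len(relevant) if relevant else 0.0
    return precision, recall

# Toy example: 10 spotted images, 5 of them correct, out of 7 true occurrences.
spotted = [f"img{i}" for i in range(10)]
truth = [f"img{i}" for i in range(5)] + ["img20", "img21"]
print(precision_recall(spotted, truth))  # (0.5, ~0.71)
```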

    A discrete hidden Markov model for the recognition of handwritten Farsi words

    Handwriting recognition systems (HRS) have been researched for more than 50 years. Designing a system to recognize specific words in a clean handwritten document is still a difficult task, and the challenge is to achieve a high recognition rate. Previously, most research in the handwriting recognition domain was conducted on Chinese and Latin scripts, while recently more people have shown an interest in Indo-Iranian script recognition systems. In this thesis, we present an automatic handwriting recognition system for Farsi words. The system was trained, validated, and tested on the CENPARMI Farsi Dataset, which was gathered during this research. CENPARMI's Farsi Dataset is unique in terms of its huge number of images (432,357 combined grayscale and binary), its inclusion of all possible handwriting types (dates, words, isolated characters, isolated digits, numeral strings, special symbols, documents), the variety of cursive styles, the number of writers (400), and the exclusive participation of native Farsi speakers in the gathering of the data. The words were first preprocessed. Concavity and distribution features were extracted, and the codebook was calculated by the vector quantization method. A discrete Hidden Markov Model was chosen as the classifier because of the cursive nature of the Farsi script. Finally, encouraging recognition rates of 98.76% and 96.02% were obtained for the training and testing sets, respectively.
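
    To make the vector quantization plus discrete HMM pipeline concrete, here is a minimal Python sketch under assumed settings (the codebook size and the per-word model parameters pi, A, B are placeholders, not the thesis's values): frames of concavity/distribution features are quantized into codebook symbols, and each word model scores the resulting symbol sequence with the scaled forward algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(feature_vectors, codebook_size=64, seed=0):
    """Vector quantization: learn a codebook over all training feature frames."""
    return KMeans(n_clusters=codebook_size, n_init=10, random_state=seed).fit(feature_vectors)

def quantize(codebook, word_frames):
    """Map each feature frame of a word image to its nearest codebook symbol."""
    return codebook.predict(word_frames)

def forward_loglik(pi, A, B, obs):
    """Scaled forward algorithm: log-likelihood of a discrete symbol sequence
    under one word's HMM (initial probs pi, transitions A, emission matrix B)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Classification: score the symbol sequence against every word's HMM and
# return the word whose model gives the highest log-likelihood.
```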

    Advanced document data extraction techniques to improve supply chain performance

    In this thesis, a novel machine learning technique to extract text-based information from scanned images has been developed. This information extraction is performed in the context of scanned invoices and bills used in financial transactions. These financial transactions contain a considerable amount of data that must be extracted, refined, and stored digitally before it can be used for analysis. Converting this data into a digital format is often a time-consuming process. Automation and data optimisation show promise as methods for reducing the time required and the cost of Supply Chain Management (SCM) processes, especially Supplier Invoice Management (SIM), Financial Supply Chain Management (FSCM), and Supply Chain procurement processes. This thesis uses a cross-disciplinary approach involving Computer Science and Operational Management to explore the benefit of automated invoice data extraction in business and its impact on SCM. The study adopts a multimethod approach based on empirical research, surveys, and interviews performed on selected companies. The expert system developed in this thesis focuses on two distinct areas of research: text/object detection and text extraction. For text/object detection, the Faster R-CNN model was analysed. While this model yields outstanding results in terms of object detection, it is limited by poor performance when image quality is low. A Generative Adversarial Network (GAN) model is proposed in response to this limitation. The GAN model consists of a generator network implemented with the help of the Faster R-CNN model and a discriminator based on PatchGAN. The output of the GAN model is text data with bounding boxes. For text extraction from the bounding boxes, a novel data extraction framework was designed, consisting of various processes including XML processing (when an existing OCR engine is used), bounding-box pre-processing, text clean-up, OCR error correction, spell checking, type checking, pattern-based matching, and finally a learning mechanism for automating future data extraction. Whichever fields the system can extract successfully are provided in key-value format. The efficiency of the proposed system was validated using existing datasets such as SROIE and VATI. Real-time data was validated using invoices collected by two companies that provide invoice automation services in various countries. Currently, these scanned invoices are sent to an OCR system such as OmniPage, Tesseract, or ABBYY FRE to extract text blocks, and later a rule-based engine is used to extract relevant data. While the system's methodology is robust, the companies surveyed were not satisfied with its accuracy, so they sought out new, optimised solutions. To confirm the results, the engines were used to return XML-based files with the identified text and metadata. The output XML data was then fed into this new system for information extraction. This system uses the existing OCR engine and a novel, self-adaptive, learning-based OCR engine. This new engine is based on the GAN model for better text identification. Experiments were conducted on various invoice formats to further test and refine its extraction capabilities. For cost optimisation and the analysis of spend classification, additional data were provided by another company in London that holds expertise in reducing its clients' procurement costs. This data was fed into our system to get a deeper level of spend classification and categorisation.
This helped the company reduce its reliance on human effort and allowed for greater efficiency compared with performing similar tasks manually using Excel sheets and Business Intelligence (BI) tools. The intention behind the development of this novel methodology was twofold: first, to develop and test a novel solution that does not depend on any specific OCR technology; second, to increase the information extraction accuracy over that of existing methodologies. Finally, the thesis evaluates the real-world need for the system and the impact it would have on SCM. The newly developed method is generic and can extract text from any given invoice, making it a valuable tool for optimising SCM. In addition, the system uses a template-matching approach to ensure the quality of the extracted information.
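
    The pattern-based matching step can be illustrated with a small Python sketch. The field names and regular expressions below are hypothetical examples, not the patterns used in the thesis; the point is simply how fields matched in OCR text end up in key-value form.

```python
import re

# Hypothetical patterns for a few common invoice fields; a real system would
# learn or configure these per template, language, and locale.
FIELD_PATTERNS = {
    "invoice_number": re.compile(r"invoice\s*(?:no\.?|number|#)\s*[:\-]?\s*(\S+)", re.I),
    "invoice_date":   re.compile(r"date\s*[:\-]?\s*(\d{1,2}[/\-.]\d{1,2}[/\-.]\d{2,4})", re.I),
    "total_amount":   re.compile(r"total\s*(?:due|amount)?\s*[:\-]?\s*\$?([\d,]+\.\d{2})", re.I),
}

def extract_fields(ocr_text: str) -> dict:
    """Return whichever fields match, in key-value format."""
    results = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(ocr_text)
        if match:
            results[field] = match.group(1)
    return results

sample = "INVOICE No: INV-2041\nDate: 12/03/2021\nTotal due: $1,254.00"
print(extract_fields(sample))
# {'invoice_number': 'INV-2041', 'invoice_date': '12/03/2021', 'total_amount': '1,254.00'}
```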

    Spoken content retrieval: A survey of techniques and technologies

    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio, and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, and signature verification, as well as other miscellaneous topics covering management policies for biometrics, reliability measures, pressure-based typing and signature verification, bio-chemical systems, and behavioural characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Visible, near infrared and thermal hand-based image biometric recognition

    Biometric recognition refers to the automatic identification of a person based on an anatomical characteristic or modality (e.g., fingerprint, palmprint, face) or a behavioural characteristic (e.g., signature). It is a key issue in any process concerned with security, shared resources, or network transactions, among many others. It arises as the widely known problem of recognition and becomes a necessary step before permission is granted: it is supposed to protect key resources by allowing them to be used only by users who have been granted authority to use or access them. Biometric systems can operate in verification mode, where the question to be solved is "Am I who I claim to be?", or in identification mode, where the question is "Who am I?". The scientific community has increased its efforts to improve the performance of biometric systems. Depending on the application, many solutions work with several modalities or combine different classification methods. Since additional modalities cause some user inconvenience, many of these approaches will never reach the market; for example, working with iris, face, and fingerprints requires some user effort to help acquisition. This thesis addresses hand-based biometric systems in a thorough way. The main contributions are in the direction of a new multi-spectral hand-based image database and methods for performance improvement: A) The first multi-spectral hand-based image database covering both hand faces, palmar and dorsal. Biometric databases are a precious commodity for research, especially when they offer something new, in this case visible (VIS), near-infrared (NIR), and thermographic (TIR) images acquired at the same time. This database, with 100 users and 10 samples per user, constitutes a good starting point to check algorithms and the suitability of the hand for recognition. B) In order to deal correctly with raw hand data, some image preprocessing steps are necessary. Three different segmentation phases are deployed to deal specifically with VIS, NIR, and TIR images. Some of the tough issues to address are overexposed images, rings and cuffs, cold fingers, and image noise. Once the image is segmented, two different approaches are prepared to deal with the segmented data. These two approaches, called holistic and geometric, define the main focus for extracting the feature vector. These feature vectors can be used alone or combined in some way. Many questions can be stated, e.g.: Which approach is better for recognition? Can fingers alone obtain better performance than the whole hand? Is thermographic hand information suitable for recognition, given its thermoregulation properties? A complete set of data ready to analyse, coming from the holistic and geometric approaches, has been designed and saved for testing. An innovative geometric approach related to curvature is also demonstrated. C) Finally, the Biometric Dispersion Matcher (BDM) is used to explore how it works under different fusion schemes as well as with different classification methods. It is the intention of this research to contrast what happens when using other methods close to BDM, such as Linear Discriminant Analysis (LDA). At this point, some interesting questions will be solved, e.g., by taking advantage of the finger segmentation (treating the fingers as five different modalities) to determine whether they can outperform what the whole-hand data can teach us.
Biometric recognition refers to the automatic identification of persons using an anatomical characteristic or modality (fingerprint) or a behavioural characteristic (signature). It is a fundamental aspect of any process related to security, the sharing of resources, or electronic transactions, among others. It becomes an indispensable step before authorization is granted; this authorization is understood to protect key resources by allowing them to be used only by users who have been authorized to use or access them. Biometric systems can operate in verification, where the question answered is "Am I who I claim to be?", or in identification, where the question is "Who am I?". The scientific community has increased its efforts to improve the performance of biometric systems. Depending on the application, several solutions work with multiple modalities or combine different classification methods. Since increasing the number of modalities also creates problems for users, many of these approaches never reach the market. The thesis contributes mainly to three major areas, all of them with the following common denominator: biometric recognition through the hands. i) The first of these constitutes the basis of any study, the data. In order to interpret and establish a sufficiently robust biometric recognition system with a clear focus on multiple sources of information, but with minimal effort on the part of the user, this multi-spectral hand database was built. Biometric databases are a highly valued resource for research, above all if they offer something new, as is the case here: images of hands in different electromagnetic spectra, visible (VIS), near infrared (NIR), and thermal (TIR). With a total of 100 users and 10 samples per user, it constitutes a good starting point to study and test hand-focused multi-biometric systems. ii) The second block addresses the two approaches existing in the literature for processing the raw data. These two approaches, called holistic (treating the image as a whole) and geometric (using geometric computations), define the focus when extracting the feature vector. Before addressing either of these two approaches, however, different digital image preprocessing techniques must be applied to obtain the desired regions of interest. Several problems present in the images had to be solved in an original way for each of the image types: VIS (overexposed images, rings, sleeves, bracelets), NIR (painted nails, noise-like distortion in the images), and TIR (cold fingers). This second area presents innovative aspects since, apart from segmenting the hand image, each and every finger is segmented (a feature-based approach). In this way we can contrast their recognition capability against the complete hand. Additionally, a set of geometric procedures is presented with the idea of comparing them with those obtained from the holistic extraction. The third and last area contrasts the classification procedure called the Biometric Dispersion Matcher (BDM) in different situations.
The first concerns its effectiveness with respect to other recognition methods, such as Linear Discriminant Analysis (LDA) or methods such as KNN and logistic regression. The other situations analysed involve multiple sources of information, when normalization techniques and/or combination (fusion) strategies are applied to improve the results. The results obtained leave no room for confusion and are certainly promising, in the sense that they highlight the importance of combining complementary information to obtain superior performance.
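
    As a rough illustration of the score normalization and fusion strategies mentioned above, the following Python sketch performs generic sum-rule fusion with min-max normalization over per-modality matcher scores (e.g. individual fingers and the whole hand). The function names and toy numbers are assumptions for illustration; this is not the BDM itself.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so different modalities become comparable."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def sum_rule_fusion(score_lists, weights=None):
    """Weighted sum-rule fusion of per-modality scores against the same gallery."""
    normalized = np.vstack([min_max_normalize(s) for s in score_lists])
    if weights is None:
        weights = np.ones(len(score_lists)) / len(score_lists)
    return weights @ normalized

# Toy example: three modalities scored against four enrolled identities.
fused = sum_rule_fusion([[0.2, 0.9, 0.4, 0.1],
                         [10.0, 42.0, 18.0, 5.0],
                         [0.55, 0.80, 0.61, 0.50]])
print(fused.argmax())  # index of the best-matching identity (here 1)
```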

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.