
    On-line recognition of English and numerical characters.

    by Cheung Wai-Hung Wellis. Thesis (M.Sc.)--Chinese University of Hong Kong, 1992. Includes bibliographical references (leaves 52-54).
    Contents: Acknowledgements; Abstract; 1. Introduction (1.1 Classification of character recognition; 1.2 Historical development; 1.3 Recognition methodology); 2. Organization of this report; 3. Data sampling (3.1 General consideration; 3.2 Implementation); 4. Preprocessing (4.1 General consideration; 4.2 Implementation: stroke connection, rotation, scaling, de-skewing); 5. Stroke segmentation (5.1 Consideration; 5.2 Implementation); 6. Learning; 7. Prototype management; 8. Recognition (8.1 Consideration: delayed stroke tagging, bi-gram, character scoring, ligature handling, word scoring; 8.2 Implementation: simple matching, best-first search matching, multiple track method; 8.3 System performance tuning); 9. Post-processing (9.1 Probability model; 9.2 Word dictionary approach); 10. System implementation and performance; 11. Discussion; 12. Epilog; Appendix I: Problems encountered and suggested enhancements on the system; Appendix II: Glossaries; References.

    Advances in Character Recognition

    This book presents advances in character recognition. It consists of 12 chapters covering a wide range of topics on different aspects of character recognition. Hopefully, this book will serve as a reference source for academic research, for professionals working in the character recognition field, and for anyone interested in the subject.

    Partial shape matching using CCP map and weighted graph transformation matching

    Matching images and detecting the similarity or dissimilarity between them is a fundamental problem in image processing. Different matching algorithms have been proposed in the literature to solve it. Despite their novelty, these algorithms are mostly inefficient and cannot perform properly in noisy situations. In this thesis, we solve most of the problems of previous methods by using a reliable algorithm for segmenting the image contour map, called the CCP Map, and a new matching method. In our algorithm, we use a local shape descriptor that is fast to compute, invariant to affine transforms, and robust to non-rigid objects and occlusion. After finding the best match for each contour, we need to verify whether the contours are correctly matched. For this, we use the Weighted Graph Transformation Matching (WGTM) approach, which is capable of removing outliers based on their adjacency and geometrical relationships. WGTM works properly for both rigid and non-rigid objects and is robust to high-order distortions. To evaluate our method, the ETHZ dataset, comprising five diverse classes of objects (bottles, swans, mugs, giraffes, Apple logos), is used. Finally, our method is compared to several well-known methods proposed by other researchers in the literature. While our method shows results comparable to these benchmarks in terms of recall and the precision of boundary localization, it significantly improves the average precision for all categories of the ETHZ dataset.
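
    The abstract does not spell out the WGTM algorithm itself; below is a minimal sketch of the underlying idea of graph-based outlier removal: a correspondence survives only if its k-nearest-neighbour structure is largely the same in both images. This is a simplified, unweighted variant (the actual WGTM additionally weights graph edges by rotation angles), and all names are hypothetical.

```python
import numpy as np

def knn_graph(points, k):
    """Boolean adjacency: edge i->j if j is among the k nearest neighbours of i."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    adj = np.zeros(d.shape, dtype=bool)
    for i in range(len(points)):
        adj[i, np.argsort(d[i])[:k]] = True
    return adj

def filter_matches(src, dst, k=4, min_agreement=0.5):
    """Iteratively drop the correspondence whose neighbourhood structure
    differs most between the two point sets, until all survivors agree."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    keep = np.arange(len(src))
    while len(keep) > k + 1:
        a, b = knn_graph(src[keep], k), knn_graph(dst[keep], k)
        agreement = (a & b).sum(axis=1) / k   # fraction of shared neighbours
        worst = agreement.argmin()
        if agreement[worst] >= min_agreement:
            break
        keep = np.delete(keep, worst)
    return keep  # indices of correspondences judged geometrically consistent
```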

    An Approach to Pattern Recognition by Evolutionary Computation

    Evolutionary Computation (EC) is inspired by the natural phenomenon of evolution. It provides a quite general heuristic exploiting a few basic concepts: reproduction of individuals, variation phenomena that affect the likelihood of survival of individuals, and inheritance of parents' features by offspring. EC has been widely used in recent years to effectively solve hard, nonlinear, and very complex problems. Among other uses, EC-based algorithms have been applied to classification problems. Classification is a process by which an object is attributed to one of a finite set of classes or, in other words, is recognized as belonging to a set of equal or similar entities identified by a label. Arguably, the central aspect of classification is the generation of prototypes to be used to recognize unknown patterns. The role of prototypes is to represent the patterns belonging to the different classes defined within a given problem. For most problems of practical interest, generating such prototypes is very hard: a prototype must be able to represent patterns belonging to the same class, which may be significantly dissimilar to each other, while also discriminating against patterns belonging to other classes. Moreover, a prototype should contain the minimum amount of information required to satisfy these requirements. The research presented in this thesis has led to the definition of an EC-based framework for prototype generation. The framework is not tied to any particular kind of prototype: it can generate any kind once an encoding scheme for the prototypes has been defined. This generality can be exploited to develop many applications. The framework has been employed to implement two specific applications for prototype generation, which have been tested on several data sets, and the results compared with those obtained by other approaches previously presented in the literature.
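
    As a rough illustration of how an EC loop can generate prototypes once an encoding is fixed, here is a minimal sketch (not the thesis's framework; all names are hypothetical) that encodes prototypes as one feature vector per class and evolves them to maximize nearest-prototype accuracy, exercising exactly the three concepts named above: reproduction, variation, and inheritance.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(prototypes, X, y):
    """Accuracy of a nearest-prototype classifier; prototypes[c] represents class c."""
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return (d.argmin(axis=1) == y).mean()

def evolve_prototypes(X, y, n_classes, pop=30, gens=200, sigma=0.1):
    dim = X.shape[1]
    # Each individual encodes a full set of class prototypes (n_classes x dim).
    population = rng.normal(size=(pop, n_classes, dim))
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2 :]]  # fitter individuals survive
        children = parents + rng.normal(scale=sigma, size=parents.shape)  # variation
        population = np.concatenate([parents, children])       # inheritance
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[scores.argmax()]
```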

    Automatic discourse structure generation using rhetorical structure theory

    This thesis addresses a difficult problem in text processing: creating a system that automatically derives the rhetorical structure of text. Although rhetorical structure has proven useful in many fields of text processing, such as text summarisation and information extraction, systems that automatically generate rhetorical structures with high accuracy are hard to find. This is because discourse is one of the largest and yet least well-defined areas in linguistics, and researchers have not reached agreement on the best method for analysing the rhetorical structure of text. This thesis investigates a method for generating the rhetorical structures of text. It proposes recognising rhetorical relations between spans by checking for the appearance of various cohesive devices: cue phrases, noun-phrase cues, verb-phrase cues, reference words, time references, substitution words, ellipses, and syntactic information. The discourse analyser is divided into two levels: sentence-level and text-level. The former uses syntactic information and cue phrases to segment sentences into elementary discourse units and to generate a rhetorical structure for each sentence. The latter derives rhetorical relations between large spans and then replaces each sentence by its corresponding rhetorical structure to produce the rhetorical structure of the text. The text-level rhetorical structure is derived by selecting rhetorical relations that connect adjacent, non-overlapping spans into a discourse structure covering the entire text; constraints of textual organisation and textual adjacency are used in a beam search to reduce the search space, as sketched below. Experiments carried out in this research achieved an F-score of 89.4% for discourse segmentation, 52.4% for the sentence-level discourse analyser, and 38.1% for the final output of the system, showing that this approach compares well with current research in discourse analysis.
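
    The beam search is only named in the abstract; the sketch below shows one plausible shape for it, repeatedly joining adjacent spans with a candidate relation and keeping only the best partial parses. `score_relation` stands in for the cue-phrase and cohesive-device checks described above; it and the other names are hypothetical.

```python
import heapq
from itertools import count

def beam_parse(units, relations, score_relation, beam_width=5):
    """Build a rhetorical tree over elementary discourse units with a beam search.

    score_relation(left, right, rel) rates how plausible it is to join two
    adjacent, non-overlapping spans with relation rel (e.g. from cue phrases).
    """
    tie = count()  # tie-breaker so heapq never compares the trees themselves
    beam = [(0.0, next(tie), tuple(units))]
    while len(beam[0][2]) > 1:
        candidates = []
        for cost, _, spans in beam:
            for i in range(len(spans) - 1):          # adjacency constraint
                for rel in relations:
                    s = score_relation(spans[i], spans[i + 1], rel)
                    merged = spans[:i] + ((rel, spans[i], spans[i + 1]),) + spans[i + 2:]
                    candidates.append((cost - s, next(tie), merged))
        beam = heapq.nsmallest(beam_width, candidates)  # prune to the best parses
    return beam[0][2][0]  # a single tree covering the entire text
```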

    Privacy-preserving efficient searchable encryption

    Data storage and computation outsourcing to third-party managed data centers, in environments such as Cloud Computing, is increasingly being adopted by individuals, organizations, and governments. However, as cloud-based outsourcing models expand to society-critical data and services, the lack of effective and independent control over security and privacy conditions in such settings presents significant challenges. An interesting solution to these issues is to perform computations on encrypted data, directly in the outsourcing servers. Such an approach benefits from not requiring major data transfers and decryptions, increasing the performance and scalability of operations. Searching operations, an important application case as cloud-backed repositories grow in number and size, are a good example where security, efficiency, and precision are all relevant requirements. Yet existing proposals for searching encrypted data remain limited from multiple perspectives, including usability, query expressiveness, and client-side performance and scalability. This thesis focuses on the design and evaluation of mechanisms for searching encrypted data with improved efficiency, scalability, and usability. Two particular concerns are addressed: on one hand, the thesis aims at supporting multiple media formats, especially text, images, and multimodal data (i.e., data combining multiple media formats); on the other hand, it addresses client-side overhead and how it can be minimized to support client applications executing both on high-performance desktop devices and on resource-constrained mobile devices. From the research performed to address these issues, three core contributions were developed and are presented in the thesis: (i) CloudCryptoSearch, a middleware system for storing and searching text documents with privacy guarantees, supporting multiple modes of deployment (user device, local proxy, or computational cloud) and exploring different tradeoffs between security, usability, and performance; (ii) a novel framework for efficiently searching encrypted images based on IES-CBIR, an Image Encryption Scheme with Content-Based Image Retrieval properties that we also propose and evaluate; (iii) MIE, a Multimodal Indexable Encryption distributed middleware that allows storing, sharing, and searching encrypted multimodal data while minimizing client-side overhead and supporting both desktop and mobile devices.
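
    As background on what searching encrypted data server-side involves, here is a toy single-keyword searchable index in the spirit of such systems (not CloudCryptoSearch, IES-CBIR, or MIE themselves; all names are hypothetical). The server matches deterministic HMAC trapdoors without ever holding the keys or plaintext; like most simple schemes of this kind, it still leaks access patterns.

```python
import hmac, hashlib
from cryptography.fernet import Fernet  # pip install cryptography

class EncryptedIndex:
    """Server side: stores only ciphertexts and opaque keyword tokens."""
    def __init__(self):
        self.docs = {}    # doc_id -> ciphertext
        self.index = {}   # keyword token -> set of doc_ids

class Client:
    def __init__(self):
        self.enc_key = Fernet.generate_key()
        self.search_key = b"independent-prf-key"  # in practice: random and secret

    def token(self, word):
        # Deterministic keyword trapdoor: the server can match it, not invert it.
        return hmac.new(self.search_key, word.lower().encode(), hashlib.sha256).hexdigest()

    def upload(self, server, doc_id, text):
        server.docs[doc_id] = Fernet(self.enc_key).encrypt(text.encode())
        for w in set(text.lower().split()):
            server.index.setdefault(self.token(w), set()).add(doc_id)

    def search(self, server, word):
        ids = server.index.get(self.token(word), set())
        return [Fernet(self.enc_key).decrypt(server.docs[i]).decode() for i in ids]

# client, server = Client(), EncryptedIndex()
# client.upload(server, "d1", "the quick brown fox")
# client.search(server, "fox")   # -> ["the quick brown fox"]
```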

    Information Preserving Processing of Noisy Handwritten Document Images

    Many pre-processing techniques that normalize artifacts and clean noise induce anomalies due to discretization of the document image, and important information that could be used at later stages may be lost. A proposed composite-model framework takes into account pre-printed information, user-added data, and digitization characteristics. Its benefits are demonstrated by experiments with statistically significant results. Separating pre-printed ruling lines from user-added handwriting shows how ruling lines affect people's handwriting and how they can be exploited for identifying writers. Ruling line detection based on multi-line linear regression reduces the mean error of counting lines from 0.10 to 0.03, 6.70 to 0.06, and 0.13 to 0.02 compared to an HMM-based approach on three standard test datasets, thereby reducing human correction time by 50%, 83%, and 72% on average. On 61 page images from 16 rule-form templates, the precision and recall of form cell recognition are increased by 2.7% and 3.7% compared to a cross-matrix approach. Compensating for and exploiting ruling lines during feature extraction, rather than during pre-processing, raises writer identification accuracy from 61.2% to 67.7% on a 61-writer noisy Arabic dataset. Similarly, counteracting page-wise skew by subtracting it, or by transforming contours in a continuous coordinate system, during feature extraction improves writer identification accuracy. An implementation study of contour-hinge features reveals that utilizing the full probability distribution function matrix improves writer identification accuracy from 74.9% to 79.5%.
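
    The abstract does not detail the multi-line linear regression; one plausible reading, sketched below under that assumption, alternates between assigning dark pixels to their nearest candidate ruling line and refitting each line by least squares (a k-means-style "k-lines" regression, with handwriting pixels acting as noise in practice). Function and parameter names are hypothetical.

```python
import numpy as np

def fit_ruling_lines(xs, ys, n_lines, iters=20):
    """Jointly fit n_lines near-horizontal lines y = a*x + b to dark-pixel
    coordinates (xs, ys) by alternating assignment and least-squares refit."""
    # Initialise intercepts from evenly spaced horizontal bands.
    a = np.zeros(n_lines)
    b = np.linspace(ys.min(), ys.max(), n_lines)
    for _ in range(iters):
        resid = np.abs(ys[:, None] - (a[None, :] * xs[:, None] + b[None, :]))
        label = resid.argmin(axis=1)                      # nearest line per pixel
        for k in range(n_lines):
            m = label == k
            if m.sum() >= 2:
                a[k], b[k] = np.polyfit(xs[m], ys[m], 1)  # refit slope, intercept
    return a, b, label
```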

    Disaster Data Management in Cloud Environments

    Facilitating decision-making in a vital discipline such as disaster management requires information gathering, sharing, and integration on a global scale and across governments, industries, communities, and academia. A large quantity of immensely heterogeneous disaster-related data is available; however, current data management solutions offer few or no integration capabilities and limited potential for collaboration. Moreover, recent advances in cloud computing, Big Data, and NoSQL have opened the door for new solutions in disaster data management. In this thesis, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM), with the objectives of 1) facilitating information gathering and sharing, 2) storing large amounts of disaster-related data from diverse sources, and 3) facilitating search and supporting interoperability and integration. Data are stored in a cloud environment, taking advantage of NoSQL data stores. The proposed framework is generic, but this thesis focuses on the disaster management domain and the data formats commonly present in that domain, i.e., file-style formats such as PDF, text, MS Office files, and images. The framework component responsible for addressing simulation models is SimOnto. SimOnto, as proposed in this work, transforms domain simulation models into an ontology-based representation with the goal of facilitating integration with other data sources, supporting simulation model querying, and enabling rule and constraint validation. Two case studies presented in this thesis illustrate the use of Disaster-CDM on data collected during the Disaster Response Network Enabled Platform (DR-NEP) project. The first case study demonstrates Disaster-CDM's integration capabilities through full-text search and querying services. In contrast to direct full-text search, Disaster-CDM full-text search also covers simulation model files as well as text contained in image files. Moreover, Disaster-CDM provides querying capabilities, and this case study demonstrates how file-style data can be queried by taking advantage of a NoSQL document data store, as sketched below. The second case study focuses on simulation models and uses SimOnto to transform proprietary simulation models into ontology-based models, which are then stored in a graph database. This case study demonstrates Disaster-CDM's benefits by showing how simulation models can be queried and how model compliance with rules and constraints can be validated.
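
    To make the document-store side concrete, here is a minimal sketch of the kind of NoSQL full-text storage and querying the first case study describes, using MongoDB via pymongo as a stand-in. The record fields and database names are hypothetical, and Disaster-CDM's actual components (e.g. SimOnto and the graph database) are not reproduced here.

```python
from pymongo import MongoClient, TEXT  # pip install pymongo

# Hypothetical local store standing in for the cloud-hosted NoSQL layer.
client = MongoClient("mongodb://localhost:27017")
docs = client["disaster_cdm"]["documents"]

# One schema-light record per source file; the extracted text is what gets
# indexed, whether it came from a PDF, an Office file, or OCR over an image.
docs.insert_one({
    "source": "situation_report_12.pdf",
    "format": "pdf",
    "text": "Bridge on highway 7 closed due to flooding; shelters at capacity.",
})

docs.create_index([("text", TEXT)])  # full-text index over extracted content

# Full-text search across every ingested format at once.
for hit in docs.find({"$text": {"$search": "flooding shelters"}}):
    print(hit["source"])
```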

    Template Based Recognition of On-Line Handwriting

    Software for recognition of handwriting has been available for several decades, and research on the subject has produced several different strategies achieving competitive recognition accuracies, especially in the case of isolated single characters. The problem of recognizing samples of handwriting with arbitrary connections between constituent characters ("unconstrained handwriting") adds considerable complexity in the form of the segmentation problem: a recognition system not constrained to the isolated single-character case needs to be able to recognize where in the sample one letter ends and another begins. In the research community, and probably also in commercial systems, the most common technique for recognizing unconstrained handwriting combines neural networks for partial character matching with hidden Markov modeling for combining partial results into string hypotheses. Neural networks are often favored by the research community since the recognition functions are more or less automatically inferred from a training set of handwritten samples. From a commercial perspective, a downside of this property is the lack of control, since there is no explicit information on the types of samples that the system can correctly recognize. In a template-based system, each style of writing a particular character is explicitly modeled, which provides some intuition regarding the types of errors (confusions) the system is prone to make. Most template-based recognition methods today only work for the isolated single-character recognition problem, and extensions to unconstrained recognition are usually not straightforward. This thesis presents a step-by-step recipe for producing a template-based recognition system which extends naturally to unconstrained handwriting recognition through simple graph techniques. A system based on this construction has been implemented and tested on the difficult case of unconstrained online Arabic handwriting recognition, with good results.
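
    For the isolated single-character case that most template-based methods handle, matching typically means an elastic distance between the input trajectory and each explicitly stored writing style. Below is a minimal dynamic-time-warping sketch of that idea; the thesis's specific matching scheme and its graph-based extension to unconstrained writing are not shown, and all names are hypothetical.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 2-D point sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognise(sample, templates):
    """templates: label -> list of normalised point sequences (numpy arrays),
    one per explicitly modeled writing style. Returns the closest label."""
    return min(
        ((label, dtw(sample, t)) for label, ts in templates.items() for t in ts),
        key=lambda pair: pair[1],
    )[0]
```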