
    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222 ; Date of degree conferral: 2000-03-29 ; Degree category: Doctorate by coursework ; Degree: Doctor of Engineering ; Diploma number: 博工第4717号 ; Graduate school / department: Graduate School of Engineering, Information Engineering

    Standardizing, Segmenting and Tenderizing Letters and Improving the Quality of Envelope Images to Extract Postal Addresses

    In most mechanized postal systems, envelopes are scanned with mechanical instruments according to the postal standard. In the standard format, the envelope image has no tilt, lines run along the horizontal axis, and words are placed in a correct, non-oblique manner. This article presents a new algorithm for rotating, segmenting, and tenderizing letters to standardize an envelope image and improve its quality; it can be used in any text identification system as three successful pre-processing algorithms. In the proposed algorithm, letters scanned with any form or tilt are rotated and standardized by a simple two-step procedure based on what is written on the envelope, without requiring calculation of the tilt angle. After standardization, the main regions of the image are identified using histogram information. Then, in a simple algorithm, candidate points are selected from the pixels belonging to the text on the envelope, and quality improvement and tenderization are performed on the main regions of the image. The advantages of the proposed algorithm include no need for additional mechanical equipment, less computation, simplicity, and consideration of the structure of words on the envelope in all preprocessing phases. DOI: http://dx.doi.org/10.11591/ijece.v2i3.34
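
    The histogram step described in this abstract can be pictured with a short projection-profile sketch. The Python below is illustrative only, not the authors' implementation; the function name, thresholds, and the assumption of a 0/1 binarized image are ours.

        import numpy as np

        def find_text_regions(binary_img, min_run=5, threshold_ratio=0.02):
            """Locate candidate text regions in a binarized envelope image
            using row/column projection histograms (illustrative sketch)."""
            h, w = binary_img.shape
            # Count foreground (text) pixels per row and per column.
            row_hist = binary_img.sum(axis=1)
            col_hist = binary_img.sum(axis=0)

            def runs_above(hist, threshold, min_len):
                # Return (start, end) index pairs where the histogram stays above threshold.
                active = hist > threshold
                runs, start = [], None
                for i, flag in enumerate(active):
                    if flag and start is None:
                        start = i
                    elif not flag and start is not None:
                        if i - start >= min_len:
                            runs.append((start, i))
                        start = None
                if start is not None and len(active) - start >= min_len:
                    runs.append((start, len(active)))
                return runs

            row_bands = runs_above(row_hist, threshold_ratio * w, min_run)
            col_bands = runs_above(col_hist, threshold_ratio * h, min_run)
            # Each (row band, column band) pair is a candidate region holding address text.
            return [(r0, r1, c0, c1) for r0, r1 in row_bands for c0, c1 in col_bands]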

    A visual approach to sketched symbol recognition

    There is increasing interest in building systems that can automatically interpret hand-drawn sketches. However, many challenges remain in terms of recognition accuracy, robustness to different drawing styles, and ability to generalize across multiple domains. To address these challenges, we propose a new approach to sketched symbol recognition that focuses on the visual appearance of the symbols. This allows us to better handle the range of visual and stroke-level variations found in freehand drawings. We also present a new symbol classifier that is computationally efficient and invariant to rotation and local deformations. We show that our method exceeds state-of-the-art performance in all three domains we evaluated: handwritten digits, PowerPoint shapes, and electrical circuit symbols.
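
    To make the idea of a rotation-invariant appearance descriptor concrete, here is a minimal sketch; it is not the paper's classifier, and the radial-histogram descriptor, function names, and nearest-neighbour matching are our own illustrative choices.

        import numpy as np

        def radial_ink_histogram(points, n_bins=16):
            """Descriptor for a sketched symbol given as (x, y) ink points.
            Radial distances about the centroid are unchanged by rotation."""
            pts = np.asarray(points, dtype=float)
            centroid = pts.mean(axis=0)
            radii = np.linalg.norm(pts - centroid, axis=1)
            scale = radii.max() if radii.max() > 0 else 1.0
            hist, _ = np.histogram(radii / scale, bins=n_bins, range=(0.0, 1.0))
            return hist / hist.sum()   # normalize so the total point count drops out

        def classify(query_pts, labelled_examples):
            """Nearest-neighbour classification against labelled prototype symbols."""
            q = radial_ink_histogram(query_pts)
            best_label, best_dist = None, float("inf")
            for label, pts in labelled_examples:
                d = np.linalg.norm(q - radial_ink_histogram(pts))
                if d < best_dist:
                    best_label, best_dist = label, d
            return best_label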

    2D Grammar Extension of the CMP Mathematical Formulae On-line Recognition System

    Project carried out in collaboration with the Czech Technical University in Prague. In recent years, the recognition of handwritten mathematical formulae has received an increasing amount of attention in pattern recognition research. However, the diversity of approaches to the problem and the lack of a commercially viable system indicate that there is still much research to be done in this area. In this thesis, I describe previous work on a system for on-line handwritten mathematical formulae recognition based on the structural construction paradigm and two-dimensional grammars. In general, this approach can be used successfully in the analysis of inputs composed of objects that exhibit rich structural relations. An important benefit of the structural construction is that it does not treat symbol segmentation and structural analysis as two separate processes, which allows the system to perform segmentation in the context of the whole formula structure and helps resolve ambiguities more reliably. We exploit the opening provided by the polynomial-complexity parsing algorithm and extend the grammar with many new production rules that make the system useful for formulae encountered in the real world. We propose several grammar extensions to support a wide range of real mathematical formulae, as well as new features implemented in the application. Our current approach can recognize functions, limits, derivatives, binomial coefficients, complex numbers and more.
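
    The flavour of a two-dimensional production rule can be shown with a small sketch. The encoding below is ours, not the CMP system's API; the region representation, the "above" relation, and the fraction rule are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class Region:
            label: str
            x0: float; y0: float; x1: float; y1: float   # bounding box, y grows downward

        def above(a: Region, b: Region) -> bool:
            """True when region a sits vertically above region b with horizontal overlap."""
            return a.y1 <= b.y0 and min(a.x1, b.x1) > max(a.x0, b.x0)

        # A production rule: FRACTION -> EXPR over FRACTION_BAR over EXPR
        def match_fraction(numerator: Region, bar: Region, denominator: Region):
            if above(numerator, bar) and above(bar, denominator):
                return Region("FRACTION",
                              min(numerator.x0, bar.x0, denominator.x0),
                              numerator.y0,
                              max(numerator.x1, bar.x1, denominator.x1),
                              denominator.y1)
            return None

        # Example: a handwritten "1/2" laid out vertically.
        num = Region("1", 10, 0, 20, 10)
        bar = Region("bar", 5, 12, 25, 13)
        den = Region("2", 10, 15, 20, 25)
        print(match_fraction(num, bar, den))   # -> a new FRACTION region spanning all three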

    Pen-based Methods For Recognition and Animation of Handwritten Physics Solutions

    There has been considerable interest in constructing pen-based intelligent tutoring systems due to the natural interaction metaphor and low cognitive load afforded by pen-based interaction. We believe that pen-based intelligent tutoring systems can be further enhanced by integrating animation techniques. In this work, we explore methods for recognizing and animating sketched physics diagrams. Our methodologies enable an Intelligent Tutoring System (ITS) to understand the scenario and requirements posed by a given problem statement and to couple this knowledge with a computational model of the student's handwritten solution. These pieces of information are used to construct meaningful animations and feedback mechanisms that can highlight errors in student solutions. We have constructed a prototype ITS that can recognize mathematics and diagrams in a handwritten solution and infer implicit relationships among diagram elements, mathematics and annotations such as arrows and dotted lines. We use natural language processing to identify the domain of a given problem, and use this information to select one or more of four domain-specific physics simulators to animate the user's sketched diagram. We enable students to use their answers to guide animation behavior and also describe a novel algorithm for checking recognized student solutions. We provide examples of scenarios that can be modeled using our prototype system and discuss the strengths and weaknesses of our current prototype. Additionally, we present the findings of a user study that aimed to identify animation requirements for physics tutoring systems. We describe a taxonomy for categorizing different types of animations for physics problems and highlight how the taxonomy can be used to define requirements for 50 physics problems chosen from a university textbook. We also present a discussion of 56 handwritten solutions acquired from physics students and describe how suitable animations could be constructed for each of them.
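
    The dispatch from problem domain to simulator can be pictured with a minimal sketch. The keyword lists, domain names, and simulator stubs below are placeholders of ours; the thesis itself uses natural language processing over the full problem statement.

        # Illustrative domain identification and simulator dispatch (assumed names).
        DOMAIN_KEYWORDS = {
            "kinematics":  ["velocity", "acceleration", "projectile", "falls"],
            "statics":     ["beam", "equilibrium", "tension", "ladder"],
            "electricity": ["circuit", "resistor", "voltage", "current"],
            "energy":      ["work", "energy", "spring", "friction"],
        }

        def identify_domain(problem_statement: str) -> str:
            """Pick the domain whose keywords occur most often in the statement."""
            text = problem_statement.lower()
            scores = {domain: sum(text.count(word) for word in words)
                      for domain, words in DOMAIN_KEYWORDS.items()}
            return max(scores, key=scores.get)

        SIMULATORS = {domain: (lambda sketch, d=domain: f"running {d} simulation")
                      for domain in DOMAIN_KEYWORDS}

        def animate(problem_statement: str, sketch):
            domain = identify_domain(problem_statement)
            simulator = SIMULATORS[domain]      # one of four domain-specific simulators
            return simulator(sketch)

        print(identify_domain("A ball falls from rest with constant acceleration"))  # kinematics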

    Bayesian hierarchical modeling for the forensic evaluation of handwritten documents

    The analysis of handwritten evidence has been used widely in courts in the United States since the 1930s (Osborn, 1946). Traditional evaluations are conducted by trained forensic examiners. More recently, there has been a movement toward objective and probability-based evaluation of evidence, and a variety of governing bodies have made explicit calls for research to support the scientific underpinnings of the field (National Research Council, 2009; President's Council of Advisors on Science and Technology (US), 2016; National Institutes of Standards and Technology). This body of work makes contributions to help satisfy those needs for the evaluation of handwritten documents. We develop a framework to evaluate a questioned writing sample against a finite set of genuine writing samples from known sources. Our approach is fully automated, reducing the opportunity for cognitive biases to enter the analysis pipeline through regular examiner intervention. Our methods are able to handle all writing styles together, and result in estimated probabilities of writership based on parametric modeling. We contribute open-source datasets, code, and algorithms. A document is prepared for evaluation by first being scanned and stored as an image file. The image is processed and the text within is decomposed into a sequence of disjoint graphical structures. The graphs serve as the smallest unit of writing we will consider, and features extracted from them are used as data for modeling. Chapter 2 describes the image processing steps and introduces a distance measure for the graphs. The distance measure is used in a K-means clustering algorithm (Forgy, 1965; Lloyd, 1982; Gan and Ng, 2017), which results in a clustering template with 40 exemplar structures. The primary feature we extract from each graph is a cluster assignment. We do so by comparing each graph to the template and making assignments based on the exemplar to which each graph is most similar in structure. The cluster assignment feature is used for a writer identification exercise using a Bayesian hierarchical model on a small set of 27 writers. In Chapter 3 we incorporate new data sources and a larger number of writers in the clustering algorithm to produce an updated template. A mixture component is added to the hierarchical model and we explore the relationship between a writer's estimated mixing parameter and their writing style. In Chapter 4 we expand the hierarchical model to include other graph-based features, in addition to cluster assignments. We incorporate an angular feature with support on the polar coordinate system into the hierarchical modeling framework using a circular probability density function. The new model is applied and tested in three applications.
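
    The cluster-assignment feature described here can be sketched in a few lines. This is not the authors' code: the real distance is a graph distance, which the sketch replaces with Euclidean distance on placeholder feature vectors, and the function names are assumptions.

        import numpy as np

        def assign_to_template(graph_features, template_exemplars):
            """Assign each writing graph to the nearest template exemplar
            (40 exemplars in the paper)."""
            graphs = np.asarray(graph_features, dtype=float)        # (n_graphs, n_features)
            exemplars = np.asarray(template_exemplars, dtype=float) # (n_clusters, n_features)
            # Pairwise distances: (n_graphs, n_clusters)
            dists = np.linalg.norm(graphs[:, None, :] - exemplars[None, :, :], axis=2)
            return dists.argmin(axis=1)   # cluster assignment feature for each graph

        def cluster_counts(assignments, n_clusters=40):
            """Per-document counts over clusters; data of this shape feeds the
            Bayesian hierarchical model of writership."""
            return np.bincount(assignments, minlength=n_clusters)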

    A Novel Method to Detect Segmentation points of Arabic Words using Peaks and Neural Network

    Many segmentation methods detect segmentation points, or predict their locations before the segmentation process, and then verify the validity of those points using ANNs. This paper applies a novel method that correctly detects the locations of segmentation points in Arabic words by detecting peaks with neural networks. The method employs baseline and peak identification and segments the text in two steps: a peak identification function is applied at the subword segment level to frame the minimum and maximum peaks, and the baseline is detected. These two steps led to the best results through a model that depends on minimum peaks obtained by applying a stroke operator to extract potential segmentation points, together with a baseline-detection procedure developed to approximate the parameters. The method yielded highly accurate results for Arabic character segmentation on four handwritten datasets: AHDB, IFN-ENIT, AHDB-FTR and ACDAR. Earlier results showed that using EDMS features with an MLP_ANN gives better results than GLCM and MOMENT features across different groups, and that EDMS features on an MNN classifier reach an accuracy level of 95.09% on the IFN-ENIT dataset.
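
    A simple way to picture the peak and baseline steps is via projection profiles of a binarized word image. The sketch below is illustrative only; the paper's baseline procedure is more involved, and the candidate points it produces are verified with a neural network rather than accepted directly.

        import numpy as np

        def baseline_row(binary_word):
            """Estimate the baseline as the row with the most ink (a common simple heuristic)."""
            return int(binary_word.sum(axis=1).argmax())

        def candidate_segmentation_points(binary_word, min_gap=3):
            """Candidate cut columns at local minima of the vertical projection profile."""
            profile = binary_word.sum(axis=0).astype(float)
            candidates = []
            for x in range(1, len(profile) - 1):
                if profile[x] <= profile[x - 1] and profile[x] < profile[x + 1]:
                    if not candidates or x - candidates[-1] >= min_gap:
                        candidates.append(x)
            return candidates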

    Adaptive Algorithms for Automated Processing of Document Images

    Large scale document digitization projects continue to motivate interesting document understanding technologies such as script and language identification, page classification, segmentation and enhancement. Typically, however, solutions are still limited to narrow domains or regular formats such as books, forms, articles or letters and operate best on clean documents scanned in a controlled environment. More general collections of heterogeneous documents challenge the basic assumptions of state-of-the-art technology regarding quality, script, content and layout. Our work explores the use of adaptive algorithms for the automated analysis of noisy and complex document collections. We first propose, implement and evaluate an adaptive clutter detection and removal technique for complex binary documents. Our distance transform based technique aims to remove irregular and independent unwanted foreground content while leaving text content untouched. The novelty of this approach is in its determination of the best approximation to the clutter-content boundary with text-like structures. Second, we describe a page segmentation technique called Voronoi++ for complex layouts which builds upon the state-of-the-art method proposed by Kise [Kise1999]. Our approach does not assume structured text zones and is designed to handle multi-lingual text in both handwritten and printed form. Voronoi++ is a dynamically adaptive and contextually aware approach that considers components' separation features combined with Docstrum [O'Gorman1993] based angular and neighborhood features to form provisional zone hypotheses. These provisional zones are then verified based on the context built from local separation and high-level content features. Finally, our research proposes a generic model to segment and recognize characters for any complex syllabic or non-syllabic script, using font models. This concept is based on the fact that font files contain all the information necessary to render text and thus provide a model for how to decompose it. Instead of script-specific routines, this work is a step towards a generic character segmentation and recognition scheme for both Latin and non-Latin scripts.
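
    The distance-transform idea behind the clutter removal step can be sketched briefly. This is not the thesis's adaptive technique: it assumes SciPy is available, uses a fixed thickness factor where the thesis adapts the clutter/content boundary, and all names are ours.

        import numpy as np
        from scipy import ndimage

        def remove_thick_clutter(binary_img, thickness_factor=3.0):
            """Drop connected components whose maximum stroke half-width (from the
            Euclidean distance transform) is far above the typical text stroke width."""
            # Distance from each foreground pixel to the nearest background pixel.
            dist = ndimage.distance_transform_edt(binary_img)
            labels, n = ndimage.label(binary_img)
            # Typical half stroke width, estimated over all foreground pixels.
            typical = np.median(dist[binary_img > 0]) if binary_img.any() else 0.0
            cleaned = binary_img.copy()
            for comp in range(1, n + 1):
                mask = labels == comp
                if dist[mask].max() > thickness_factor * typical:
                    cleaned[mask] = 0      # treat the whole component as clutter
            return cleaned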

    Orchestrating Cultural Heritage: Exploring the Automated Analysis and Organization of Charles S. Peirce's PAP Manuscript

    This preliminary study introduces an innovative approach to the analysis and organization of cultural heritage materials, focusing on the archive of Charles S. Peirce. Given the diverse range of artifacts, objects, and documents comprising cultural heritage, it is essential to efficiently organize and provide access to these materials for the wider public. However, Peirce's manuscripts pose a particular challenge due to their extensive quantity, which makes comprehensive organization through manual classification practically impossible. In response to this challenge, our paper proposes a methodology for the automated analysis and organization of Peirce's manuscripts. We have specifically tested this approach on the renowned 115-page manuscript known as PAP. This study represents a significant step forward in establishing a research direction for the development of a larger project. By incorporating novel computational methods, this larger project has the potential to greatly enhance the field of cultural heritage organization.