
    Hilbert's Program Then and Now

    Hilbert's program was an ambitious and wide-ranging project in the philosophy and foundations of mathematics. In order to "dispose of the foundational questions in mathematics once and for all," Hilbert proposed a two-pronged approach in 1921: first, classical mathematics should be formalized in axiomatic systems; second, using only restricted, "finitary" means, one should give proofs of the consistency of these axiomatic systems. Although Gödel's incompleteness theorems show that the program as originally conceived cannot be carried out, it had many partial successes and generated important advances in logical theory and meta-theory, both at the time and since. The article discusses the historical background and development of Hilbert's program, its philosophical underpinnings and consequences, and its subsequent development and influences since the 1930s.

    Updating the “abstract–concrete” distinction in Ancient Near Eastern numbers

    The characterization of early token-based accounting as using a concrete concept of number, and of later numerical notations as using an abstract one, has become well entrenched in the literature. After reviewing its history and assumptions, this article challenges the abstract–concrete distinction, presenting an alternative view of change in Ancient Near Eastern number concepts, wherein numbers are abstract from their inception and materially bound when most elaborated. The alternative draws on the chronological sequence of material counting technologies used in the Ancient Near East—fingers, tallies, tokens, and numerical notations—as reconstructed through archaeological and textual evidence and as interpreted through Material Engagement Theory, an extended-mind framework in which materiality plays an active role (Malafouris 2013).

    Spotting Keywords in Offline Handwritten Documents Using Hausdorff Edit Distance

    Keyword spotting has become a crucial topic in handwritten document recognition, as it enables content-based retrieval of scanned documents using search terms. With a query keyword, one can search and index digitized handwriting, which in turn facilitates understanding of manuscripts. Common automated techniques address the keyword spotting problem through statistical representations. Structural representations such as graphs capture the complex structure of handwriting. However, they are rarely used, particularly in keyword spotting techniques, due to high computational costs. The graph edit distance, a powerful and versatile method for matching any type of labeled graph, has exponential time complexity, so its use is constrained to small graphs. The recently developed Hausdorff edit distance algorithm approximates the graph edit distance with quadratic time complexity by efficiently matching local substructures. This dissertation posits that the Hausdorff edit distance could be a promising alternative to other template-based keyword spotting approaches in terms of computational time and accuracy. Accordingly, the core contribution of this thesis is the investigation and development of a graph-based keyword spotting technique based on the Hausdorff edit distance algorithm. The high representational power of graphs combined with the efficiency of the Hausdorff edit distance for graph matching achieves remarkable speedup as well as accuracy. In a comprehensive experimental evaluation, we demonstrate the solid performance of the proposed graph-based method when compared with the state of the art, concerning both precision and speed. The second contribution of this thesis is a keyword spotting technique which incorporates dynamic time warping and Hausdorff edit distance approaches. The structural representation of the graph-based approach and the statistical representation of geometric features complement each other, providing a more accurate system. The proposed system has been extensively evaluated with four types of handwriting graphs and geometric feature vectors on benchmark datasets. The experiments demonstrate a performance boost in which the combined system outperforms the individual ones.
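The core matching step described above can be illustrated with a minimal sketch of the Hausdorff edit distance idea. This is not the dissertation's implementation: graphs are reduced to bare node coordinate lists (edges and node labels are omitted), and all function names and costs are illustrative assumptions.

```python
# Minimal sketch of the Hausdorff edit distance (HED) idea, assuming
# graphs are given as lists of 2-D node coordinates (edges omitted).
import math

def node_cost(a, b):
    """Substitution cost between two nodes: Euclidean distance."""
    return math.dist(a, b)

def hausdorff_edit_distance(g1, g2, node_del_cost=1.0):
    """Lower-bound approximation of graph edit distance in O(|g1|*|g2|).

    Each node is matched independently to its cheapest counterpart in
    the other graph; each local match is capped by the deletion cost,
    mirroring how HED avoids the exponential optimal assignment.
    """
    def directed(src, dst):
        total = 0.0
        for a in src:
            # best local choice: substitute (half cost per direction) or delete
            best = min((node_cost(a, b) / 2 for b in dst), default=node_del_cost)
            total += min(best, node_del_cost)
        return total
    return directed(g1, g2) + directed(g2, g1)
```

Because every node is matched independently, the quadratic cost replaces the exponential optimal node assignment of the exact graph edit distance, at the price of returning a lower bound rather than the true distance.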

    Off-line Arabic Handwriting Recognition System Using Fast Wavelet Transform

    In this research, an off-line handwriting recognition system for the Arabic alphabet is introduced. The system contains three main stages: preprocessing, segmentation and recognition. In the preprocessing stage, the Radon transform was used in the design of algorithms for page, line and word skew correction as well as for word slant correction. In the segmentation stage, a Hough transform approach was used for line extraction. For line-to-words and word-to-characters segmentation, a statistical method using a mathematical representation of the binary images of the lines and words was used. Unlike most current handwriting recognition systems, our system simulates the human mechanism for image recognition, where images are encoded and saved in memory as groups according to their similarity to each other. Characters are decomposed into coefficient vectors using the fast wavelet transform; then, vectors that represent a character in its different possible shapes are saved as groups, with one representative for each group. Recognition is achieved by comparing the vector of the character to be recognized with the group representatives. Experiments showed that the proposed system is able to achieve the recognition task with 90.26% accuracy. The system needs at most 3.41 seconds to recognize a single character in a text of 15 lines, where each line has 10 words on average.
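The wavelet-based matching described above can be sketched in simplified form. A single-level Haar transform stands in for the fast wavelet transform, and the group-representative comparison is reduced to a nearest-neighbour lookup; all names here are illustrative, not taken from the paper.

```python
# Toy sketch: characters become wavelet coefficient vectors and are
# classified by the nearest group representative.
def haar_1d(signal):
    """One level of the Haar wavelet transform (averages then details)."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg + det

def nearest_group(vector, representatives):
    """Return the label of the closest group representative.

    representatives: {label: coefficient_vector}.
    """
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    return min(representatives, key=lambda label: dist(vector, representatives[label]))
```

Storing one representative per shape group, rather than every training sample, is what keeps the comparison step cheap at recognition time.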

    Music Encoding Conference Proceedings

    UIDB/00693/2020 UIDP/00693/2020

    An intelligent framework for pre-processing ancient Thai manuscripts on palm leaves

    In Thailand’s early history, prior to the availability of paper and printing technologies, palm leaves were used to record information written by hand. These ancient documents contain invaluable knowledge. By digitising the manuscripts, the content can be preserved and made widely available to the interested community via electronic media. However, the content is difficult to access or retrieve. In order to extract relevant information from the document images efficiently, each step of the process requires the reduction of irrelevant data such as noise or interference on the images. The pre-processing techniques serve the purpose of extracting regions of interest, reducing noise in the image and suppressing the irrelevant background. The image can then be directly and efficiently processed for feature selection and extraction prior to the subsequent phase of character recognition. It is therefore the main objective of this study to develop an efficient and intelligent image pre-processing system that could be used to extract components from ancient manuscripts for information extraction and retrieval purposes. The main contributions of this thesis are the provision and enhancement of the region of interest by using an intelligent approach to the pre-processing of ancient Thai manuscripts on palm leaves, and a detailed examination of pre-processing techniques for palm leaf manuscripts. As noise reduction and binarisation are involved in the first step of pre-processing, to eliminate noise and background from document images, it is necessary for this step to produce good-quality output; otherwise, the accuracy of the subsequent stages will be affected. In this work, an intelligent approach to eliminating the background was proposed, carried out by selecting the appropriate binarisation technique using an SVM. 
As there could be multiple suitable binarisation techniques, a further approach was proposed in this study to eliminate the background and generate an optimal binarised image. This proposal is an ensemble architecture based on a majority vote scheme utilising local neighbouring information around a pixel of interest. To extract text from the binarised image, line segmentation was then applied based on the partial projection method, as this method provides good results with slanted texts and connected components. To improve the quality of the partial projection method, an Adaptive Partial Projection (APP) method was proposed. This technique adjusts the size of a character strip automatically, adapting the width of the strip to separate the connected components of consecutive lines through divide and conquer, and analysing the upper and lower vowels of the text line. Finally, character segmentation was proposed using a hierarchical segmentation technique based on a contour-tracing algorithm. Touching components identified in the previous step were then separated by tracing the background skeletons, together with a combined segmentation method. The key datasets used in this study are images provided by the Project for Palm Leaf Preservation, Northeastern Thailand Division; benchmark datasets from the Document Image Binarisation Contest (DIBCO) series are used to compare the results of this work against other binarisation techniques. The experimental results have shown that the proposed methods provide superior performance and will be used to support subsequent processing of ancient Thai palm leaf documents. It is expected that the contributions from this study will also benefit research work on ancient manuscripts in other languages.
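The projection-based line segmentation that the APP method builds on can be sketched as follows. This is the plain, non-adaptive version only: the page is a binarised 2-D list of 0/1 pixels (1 = ink), and the function names and threshold parameter are illustrative assumptions.

```python
# Minimal sketch of projection-profile line segmentation on a
# binarised page, the starting point that APP refines.
def horizontal_projection(image):
    """Ink count per row of a binarised image (list of 0/1 rows)."""
    return [sum(row) for row in image]

def segment_lines(image, threshold=0):
    """Return (start, end) row ranges whose ink count exceeds threshold."""
    profile = horizontal_projection(image)
    lines, start = [], None
    for y, ink in enumerate(profile):
        if ink > threshold and start is None:
            start = y                     # entering a text line
        elif ink <= threshold and start is not None:
            lines.append((start, y))      # leaving a text line
            start = None
    if start is not None:
        lines.append((start, len(profile)))
    return lines
```

The adaptive variant described in the abstract additionally splits the page into vertical strips and adjusts strip widths, so that slanted lines and connected components between consecutive lines do not merge whole rows of the global profile.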

    Feature design and lexicon reduction for efficient offline handwriting recognition

    This thesis establishes a pattern recognition framework for offline word recognition systems. It focuses on image-level features because they greatly influence recognition performance. In particular, we consider two complementary aspects of the impact of prominent features: lexicon reduction and the actual recognition. The first aspect, lexicon reduction, consists of the design of a weak classifier which outputs a set of candidate word hypotheses given a word image. Its main purpose is to reduce the recognition computational time while maintaining (or even improving) the recognition rate. The second aspect is the actual recognition system itself. In fact, several features exist in the literature based on different fields of research, but no consensus exists concerning the most promising ones. The goal of the proposed framework is to improve our understanding of relevant features in order to build better recognition systems. For this purpose, we addressed two specific problems: 1) feature design for lexicon reduction (application to Arabic script), and 2) feature evaluation for cursive handwriting recognition (application to Latin and Arabic scripts). Few methods exist for lexicon reduction in Arabic script, unlike Latin script. Existing methods use salient features of Arabic words such as the number of subwords and diacritics, but totally ignore the shape of the subwords. Therefore, our first goal is to perform lexicon reduction based on subword shape. Our approach is based on shape indexing, where the shape of a query subword is compared to a labeled database of sample subwords. For efficient comparison with a low computational overhead, we proposed the weighted topological signature vector (W-TSV) framework, where the subword shape is modeled as a weighted directed acyclic graph (DAG) from which the W-TSV vector is extracted for efficient indexing. 
The main contributions of this work are to extend the existing TSV framework to weighted DAGs and to propose a shape indexing approach for lexicon reduction. Good performance for lexicon reduction is achieved for Arabic subwords. Nevertheless, the performance remains modest for Arabic words. Considering the results of our first work on Arabic lexicon reduction, we propose to build a new index for better performance at the word level. The subword shape and the number of subwords and diacritics are all important components of Arabic word shape. We therefore propose the Arabic word descriptor (AWD), which integrates all the aforementioned components. It is built in two steps. First, a structural descriptor (SD) is computed for each connected component (CC) of the word image. It describes the CC shape using the bag-of-words model, where each visual word represents a different local shape structure. Then, the AWD is formed by concatenating the SDs using an efficient heuristic, implicitly discriminating between subwords and diacritics. In the context of lexicon reduction, the AWD is used to index a reference database. The main contribution of this work is the design of the AWD, which integrates low-level cues (subword shape structure) and symbolic information (subword counts and diacritics) into a single descriptor. The proposed method has a low computational overhead, it is simple to implement, and it provides state-of-the-art performance for lexicon reduction on two Arabic databases, namely the Ibn Sina database of subwords and the IFN/ENIT database of words. The last part of this thesis focuses on features for word recognition. A large body of features exists in the literature, each of them motivated by a different field, such as pattern recognition, computer vision or machine learning. Identifying the most promising approaches would improve the design of the next generation of features. 
Nevertheless, because they are based on different concepts, it is difficult to compare them on theoretical grounds, and efficient empirical tools are needed. Therefore, the last objective of the thesis is to provide a method for feature evaluation that assesses the strength and complementarity of existing features. A combination scheme has been designed for this purpose, in which each feature is evaluated through a reference recognition system based on recurrent neural networks. More precisely, each feature is represented by an agent, which is an instance of the recognition system trained with that feature. The decisions of all the agents are combined using a weighted vote. The weights are jointly optimized during a training phase in order to increase the weighted vote of the true word label. Therefore, they reflect the strength and complementarity of the agents and their features for the given task. Finally, they are converted into a numerical score assigned to each feature, which is easy to interpret under this combination model. To the best of our knowledge, this is the first feature evaluation method able to quantify the importance of each feature, instead of providing a ranking based on the recognition rate. Five state-of-the-art features have been tested, and our results provide interesting insights for future feature design.
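The weighted-vote combination described above can be sketched in simplified form. Here the per-agent weights are fixed inputs rather than being jointly optimized during training as in the thesis, and the agent and label names are illustrative.

```python
# Simplified sketch of the evaluation scheme: each feature-specific
# "agent" outputs per-label scores, and a weighted vote combines them.
# The weight assigned to an agent stands in for its feature's importance.
def weighted_vote(agent_scores, weights):
    """Combine per-agent label scores with per-agent weights.

    agent_scores: {agent: {label: score}}, weights: {agent: weight}.
    Returns the label with the highest weighted total.
    """
    totals = {}
    for agent, scores in agent_scores.items():
        for label, score in scores.items():
            totals[label] = totals.get(label, 0.0) + weights[agent] * score
    return max(totals, key=totals.get)
```

In the thesis the weights themselves are trained to favour the true word label, so the final weight vector doubles as an interpretable per-feature importance score.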

    A novel approach to handwritten character recognition

    A number of new techniques and approaches for off-line handwritten character recognition are presented which individually make significant advancements in the field. First, an outline-based vectorization algorithm is described which gives improved accuracy in producing vector representations of the pen strokes used to draw characters. Later, vectorization and other types of preprocessing are criticized, and an approach to recognition is suggested which avoids separate preprocessing stages by incorporating them into later stages. Apart from the increased speed of this approach, it allows more effective alteration of the character images, since more is known about them at the later stages. It also allows the possibility of alterations being corrected if they are initially detrimental to recognition. A new feature measurement, the Radial Distance/Sector Area feature, is presented which is highly robust, tolerant to noise, distortion and style variation, and gives high-accuracy results when used for training and testing in a statistical or neural classifier. A very powerful classifier is therefore obtained for recognizing correctly segmented characters. The segmentation task is explored in a simple system of integrated over-segmentation, character classification and approximate dictionary checking. This can be extended to a full system for handprinted word recognition. In addition to the advancements made by these methods, a powerful new approach to handwritten character recognition is proposed as a direction for future research. This proposal combines the ideas and techniques developed in this thesis in a hierarchical network of classifier modules to achieve context-sensitive, off-line recognition of handwritten text. A new type of "intelligent" feedback is used to direct the search to contextually sensible classifications. A powerful adaptive segmentation system is proposed which, when used as the bottom layer in the hierarchical network, allows initially incorrect segmentations to be adjusted according to the hypotheses of the higher-level context modules.
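The radial, sector-based flavour of the feature named above can be illustrated with a small sketch; the exact formulation in the thesis may differ, and the function name and sector count are assumptions for illustration. The input is a binarised character image given as a list of rows (1 = ink).

```python
# Illustrative sketch of a radial, sector-based character feature:
# partition ink pixels into angular sectors around the centroid and
# record each sector's mean radial distance and ink area.
import math

def radial_sector_feature(image, n_sectors=8):
    """Per-sector mean radial distance and ink area around the centroid."""
    ink = [(x, y) for y, row in enumerate(image)
           for x, v in enumerate(row) if v]
    cx = sum(x for x, _ in ink) / len(ink)
    cy = sum(y for _, y in ink) / len(ink)
    dists = [[] for _ in range(n_sectors)]
    for x, y in ink:
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        s = min(int(angle / (2 * math.pi / n_sectors)), n_sectors - 1)
        dists[s].append(math.hypot(x - cx, y - cy))
    mean_radius = [sum(d) / len(d) if d else 0.0 for d in dists]
    sector_area = [len(d) for d in dists]   # ink pixels per sector
    return mean_radius + sector_area        # concatenated feature vector
```

Features of this family are attractive for handwriting because rotating around the centroid only shifts sectors and moderate style variation changes each sector's statistics smoothly, which is consistent with the robustness claimed in the abstract.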

    Geometric correction of historical Arabic documents

    Geometric deformations in historical documents significantly influence both the success of Optical Character Recognition (OCR) techniques and human readability. They may have been introduced at any time during the life cycle of a document, from when it was first printed to the time it was digitised by an imaging device. This thesis focuses on the challenging domain of geometric correction of Arabic historical documents, where background research has highlighted that existing approaches for geometric correction of Latin-script historical documents are not sensitive to the characteristics of text in Arabic documents and therefore cannot be applied successfully. Text line segmentation and baseline detection algorithms have been investigated in order to propose a new one, more suitable for warped Arabic historical document images. Advanced ideas for performing dewarping and geometric restoration on historical Arabic documents, as dictated by the specific characteristics of the problem, have been implemented. In addition to developing an algorithm to detect accurate baselines in historical printed Arabic documents, the research also contributes a new dataset consisting of historical Arabic documents with different degrees of warping severity. Overall, a new dewarping system, the first for historical Arabic documents, has been developed, taking into account both global and local features of the text image and the patterns of the smooth distortion between text lines. By using the results of the proposed line segmentation and baseline detection methods, it can cope with a variety of distortions, such as page curl, arbitrary warping and folds.