
    A Font Search Engine for Large Font Databases

    A search engine for font recognition is presented and evaluated. The intended usage is search in very large font databases. The input to the search engine is an image of a text line, and the output is the name of the font used to render the text. After pre-processing and segmentation of the input image, a local approach is used, where features are calculated for individual characters. The method is based on eigenimages calculated from edge-filtered character images, which enables compact feature vectors that can be computed rapidly. In this study the database contains 2763 different fonts for the English alphabet. To resemble a real-life situation, the proposed method is evaluated with printed and scanned text lines and character images. Our evaluation shows that for 99.1% of the queries, the correct font name is found within the five best matches.
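
    The abstract does not include code, but the eigenimage pipeline it describes can be sketched in a few lines of Python. The sketch below is illustrative only: the Sobel edge filter and the 50-dimensional eigenspace are assumptions, not details taken from the paper.

```python
# Minimal sketch of an eigenimage feature pipeline, assuming Sobel edge
# filtering and a 50-dimensional eigenspace (both assumptions, not taken
# from the paper).
import numpy as np
from scipy.ndimage import sobel

def edge_filter(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge image of a grayscale character crop."""
    f = img.astype(float)
    return np.hypot(sobel(f, axis=1), sobel(f, axis=0))

def fit_eigenimages(chars: np.ndarray, k: int = 50):
    """chars: (n, h, w) stack of character crops -> (mean, k eigenimages)."""
    X = np.stack([edge_filter(c).ravel() for c in chars])   # (n, h*w)
    mean = X.mean(axis=0)
    # Eigenimages are the top right-singular vectors of the centred data.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]                                     # (k, h*w)

def project(img: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Compact feature vector: coordinates of the edge image in eigenspace."""
    return basis @ (edge_filter(img).ravel() - mean)
```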

    Optical Recognition of English Fonts in Document Images Using Eigenfaces

    Introduction: In this paper, a system for recognizing fonts is designed and implemented. The system is based on the Eigenfaces method. Because font recognition works in conjunction with other methods such as Optical Character Recognition (OCR), we used the Decapod and OCRopus software as a framework to present the method. Materials and Methods: In our experiments, text typeset in three English fonts (Comic Sans MS, DejaVu Sans Condensed, Times New Roman) was used. Results and Discussion: The system is tested thoroughly using synthetic and degraded data. The experimental results show that the Eigenfaces algorithm is very good at recognizing fonts in clean synthetic data as well as degraded data. The correct recognition rate for synthetic data is 99% based on Euclidean distance. The overall accuracy of Eigenfaces is 97% on 6144 degraded samples, again with Euclidean distance as the performance criterion. Conclusions: It is concluded from the experimental results that the Eigenfaces method is suitable for font recognition in degraded documents. The three percent misclassification can be mitigated by relying on intra-word font information.
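
    To make the matching step concrete, here is a minimal sketch of Euclidean-distance classification in an eigenface space, plus the intra-word majority vote suggested in the conclusion. The function names and array shapes are assumptions for illustration; this is not the Decapod/OCRopus interface.

```python
# Illustrative nearest-neighbour matching in an eigenface space, assuming
# characters are already projected to k-dimensional feature vectors.
from collections import Counter
import numpy as np

def classify_font(query: np.ndarray, gallery: np.ndarray, labels: list[str]) -> str:
    """query: (k,) feature vector; gallery: (n, k) training projections."""
    dists = np.linalg.norm(gallery - query, axis=1)  # Euclidean distance
    return labels[int(np.argmin(dists))]

def classify_word(char_features: np.ndarray, gallery: np.ndarray,
                  labels: list[str]) -> str:
    """Majority vote over a word's characters (intra-word smoothing)."""
    votes = [classify_font(f, gallery, labels) for f in char_features]
    return Counter(votes).most_common(1)[0][0]
```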

    Sparse Radial Sampling LBP for Writer Identification

    In this paper we present the use of Sparse Radial Sampling Local Binary Patterns, a variant of Local Binary Patterns (LBP), for text-as-texture classification. By adapting and extending the standard LBP operator to the particularities of text, we obtain a generic text-as-texture classification scheme and apply it to writer identification. In experiments on the CVL and ICDAR 2013 datasets, the proposed feature set demonstrates state-of-the-art (SOA) performance. Among the SOA methods, the proposed one is the only one based on dense extraction of a single local feature descriptor. This makes it fast and applicable at the earliest stages of a document image analysis (DIA) pipeline without the need for segmentation, binarization, or extraction of multiple features. Comment: Submitted to the 13th International Conference on Document Analysis and Recognition (ICDAR 2015).
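
    For readers unfamiliar with LBP, the sketch below implements the standard operator (radius R, P sampling points) and the code histogram used as a texture descriptor. It illustrates the general text-as-texture idea only: it is not the paper's Sparse Radial Sampling variant, and it uses nearest-pixel sampling for brevity.

```python
# Standard LBP operator sketch (not the paper's SR-LBP variant); circle
# offsets are rounded to the nearest pixel for brevity.
import numpy as np

def lbp(img: np.ndarray, radius: int = 1, points: int = 8) -> np.ndarray:
    """Per-pixel LBP codes for a grayscale image (borders left as zero)."""
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint32)
    angles = 2 * np.pi * np.arange(points) / points
    offsets = [(int(round(radius * np.sin(a))), int(round(radius * np.cos(a))))
               for a in angles]
    centre = img[radius:h - radius, radius:w - radius]
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image: each neighbour compared to the centre.
        neigh = img[radius + dy:h - radius + dy, radius + dx:w - radius + dx]
        codes[radius:h - radius, radius:w - radius] |= \
            (neigh >= centre).astype(np.uint32) << bit
    return codes

def lbp_histogram(img: np.ndarray, points: int = 8) -> np.ndarray:
    """Normalised code histogram used as the texture descriptor."""
    hist = np.bincount(lbp(img, points=points).ravel(),
                       minlength=2 ** points).astype(float)
    return hist / hist.sum()
```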

    Classification of Typed Characters Using Backpropagation Neural Network

    This thesis concentrates on the classification of typed characters using a neural network. Recognition of typed or printed characters using intelligent methods such as neural networks has found wide application in recent decades. The ability of moment invariants to represent characters independently of position, size, and orientation has led to their use as pattern-sensitive features for the classification and recognition of such characters. In this research, uppercase English characters are represented by invariant features derived from functions of regular moments, namely the Hu invariants. Moments up to the third order are used for the recognition of these typed characters. A single-layer perceptron artificial neural network trained with the backpropagation algorithm classifies the characters into their respective categories. An experimental study conducted with three different fonts commonly used in word-processing applications shows good classification results. Some suggestions for further work in this area are also presented.
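
    Hu's moment invariants are straightforward to compute directly. The NumPy sketch below derives central and normalized central moments from a character crop and evaluates the first four of Hu's seven invariants; it illustrates the standard formulas, not the thesis's own implementation.

```python
# Hu moment invariants from a grayscale/binary character crop, using the
# standard formulas (first four of seven shown for brevity).
import numpy as np

def hu_moments(img: np.ndarray) -> np.ndarray:
    img = img.astype(float)
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):   # central moments: translation invariance
        return (((x - cx) ** p) * ((y - cy) ** q) * img).sum()

    def eta(p, q):  # normalised central moments: scale invariance
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    # Hu's combinations add rotation invariance.
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
    ])
```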

    COMPARATIVE STUDY OF FONT RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS AND TWO FEATURE EXTRACTION METHODS WITH SUPPORT VECTOR MACHINE

    Font recognition is one of the essential issues in document recognition and analysis, and is frequently a complex and time-consuming process. Many optical character recognition (OCR) techniques have been suggested and some have been marketed; however, few of these techniques consider font recognition. The issue with OCR is that it saves copies of documents to make them searchable, but the documents no longer retain their original appearance. To address this problem, this paper presents a system for recognizing three and six English fonts from character images using a Convolutional Neural Network (CNN), and then compares the results of the proposed system with two earlier studies. The first study used NCM features with an SVM classifier, and the second used DP features with an SVM classifier. The data were taken from the Al-Khaffaf dataset [21]. Two datasets were used: the first comprises about 27,620 samples for the three-font classification and the second about 72,983 samples for the six-font classification; both consist of English character images in 8-bit grayscale format. The results showed that the CNN in the proposed system achieved the highest recognition rates compared with the two studies, reaching 99.75% and 98.329% for three- and six-font recognition, respectively. In addition, the CNN required the least time for model creation: about 6 minutes for three-font and 23-24 minutes for six-font recognition. Based on these results, we conclude that the CNN is the best and most accurate model for recognizing fonts.
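
    As an illustration of the kind of model involved, here is a minimal CNN for 8-bit grayscale character crops in PyTorch. The 32x32 input size, layer widths, and depth are assumptions made for the sketch; the paper's actual architecture may differ.

```python
# Minimal CNN sketch for grayscale character crops; the 32x32 input and
# layer sizes are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class FontCNN(nn.Module):
    def __init__(self, num_fonts: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_fonts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: x is a batch of normalised grayscale crops, shape (N, 1, 32, 32).
model = FontCNN(num_fonts=6)
logits = model(torch.randn(8, 1, 32, 32))   # (8, 6) class scores
```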

    Rotation-invariant features for multi-oriented text detection in natural images.

    Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human-computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms for detecting texts of varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of varying orientations in complex natural scenes.
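
    The paper's feature sets are not detailed in the abstract, but the general trick behind rotation invariance can be illustrated: build a gradient-orientation histogram for a candidate region and circularly shift it to a canonical starting bin. The sketch below shows this standard technique only; it is not the paper's proposed descriptor.

```python
# Generic rotation-invariant descriptor: a gradient-orientation histogram
# rotated so its dominant bin comes first (a standard normalisation trick,
# not the feature set proposed in the paper).
import numpy as np

def rotation_invariant_descriptor(patch: np.ndarray, bins: int = 16) -> np.ndarray:
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)           # orientations in [0, 2*pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    shift = int(np.argmax(hist))                     # dominant orientation bin
    hist = np.roll(hist, -shift)                     # canonical starting point
    return hist / (hist.sum() + 1e-12)               # normalise
```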