
    Error-correcting codes and applications to large scale classification systems

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 37-39).

    In this thesis, we study the performance of distributed output coding (DOC) and error-correcting output coding (ECOC) as potential methods for expanding the class of tractable machine-learning problems. Using distributed output coding, we were able to scale a neural-network-based algorithm to handle nearly 10,000 output classes. In particular, we built a prototype OCR engine for Devanagari and Korean texts based upon distributed output coding. We found that the resulting classifiers performed better than existing algorithms while maintaining small size. Error correction, however, was found to be ineffective at increasing the accuracy of the ensemble. For each language, we also tested the feasibility of automatically finding a good codebook. Unfortunately, the results in this direction were primarily negative.

    by Jeremy Scott Hurwitz. M.Eng.
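    As a hedged illustration of the output-coding idea described above (the thesis's actual codebooks and networks are not reproduced here), the ECOC decision rule can be sketched with a random binary codebook and minimum-Hamming-distance decoding; the sizes, seed, and function names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: one 64-bit codeword per output class.
n_classes, n_bits = 10_000, 64
codebook = rng.integers(0, 2, size=(n_classes, n_bits))

def decode(bit_outputs):
    """Map per-bit network outputs (values in [0, 1]) to the class
    whose codeword is nearest in Hamming distance."""
    hard_bits = (np.asarray(bit_outputs) > 0.5).astype(int)
    distances = np.count_nonzero(codebook != hard_bits, axis=1)
    return int(np.argmin(distances))

# A noisy version of class 42's codeword still decodes correctly as
# long as fewer bits are flipped than half the codebook's minimum distance.
noisy = codebook[42].astype(float)
noisy[:3] = 1.0 - noisy[:3]  # flip three bits
assert decode(noisy) == 42
```

    The error-correction margin comes entirely from the pairwise distances between codewords, which is why the thesis's codebook-search question matters.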

    Named Entity Recognition in multilingual handwritten texts

    In our work we present a single deep-learning-based model for the automatic transcription and named entity recognition of handwritten texts. The model leverages the generalization capabilities of recognition systems, combining artificial neural networks and character n-gram models. The evaluation of this system is discussed and, as a consequence, a new evaluation metric is proposed. To improve the results with respect to this metric, different error-correction strategies are assessed.

    Villanova Aparisi, D. (2021). Named Entity Recognition in multilingual handwritten texts. Universitat Politècnica de València. http://hdl.handle.net/10251/174942 (TFG)
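    The character n-gram models that the abstract combines with neural recognizers can be sketched minimally as follows; the tiny corpus, the vocabulary size, and the function names are hypothetical stand-ins, not taken from the thesis:

```python
from collections import Counter

def train_char_ngrams(corpus, n=3):
    """Count character n-grams and their (n-1)-character contexts."""
    counts, contexts = Counter(), Counter()
    for line in corpus:
        padded = "^" * (n - 1) + line + "$"  # boundary padding
        for i in range(len(padded) - n + 1):
            gram = padded[i:i + n]
            counts[gram] += 1
            contexts[gram[:-1]] += 1
    return counts, contexts

def prob(counts, contexts, context, char, vocab_size=128):
    """P(char | context) with add-one (Laplace) smoothing, so unseen
    n-grams keep nonzero probability mass."""
    return (counts[context + char] + 1) / (contexts[context] + vocab_size)

counts, contexts = train_char_ngrams(["named entity", "named model"])
# "nam" occurs twice out of two "na" contexts: (2 + 1) / (2 + 128)
assert prob(counts, contexts, "na", "m") == 3 / 130
```

    In a combined system, such character-level probabilities would typically rescore or constrain the neural network's hypotheses rather than act alone.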

    Beyond One-hot Encoding: lower dimensional target embedding

    Target encoding plays a central role when learning convolutional neural networks. In this realm, one-hot encoding is the most prevalent strategy due to its simplicity. However, this widespread encoding scheme assumes a flat label space, thus ignoring rich relationships among labels that could be exploited during training. In large-scale datasets, data does not span the full label space, but instead lies on a low-dimensional output manifold. Following this observation, we embed the targets into a low-dimensional space, drastically improving convergence speed while preserving accuracy. Our contribution is twofold: (i) we show that random projections of the label space are a valid tool for finding such lower-dimensional embeddings, dramatically boosting convergence rates at zero computational cost; and (ii) we propose a normalized eigenrepresentation of the class manifold that encodes the targets with minimal information loss, improving the accuracy of random-projection encoding while enjoying the same convergence rates. Experiments on CIFAR-100, CUB200-2011, ImageNet, and MIT Places demonstrate that the proposed approach drastically improves convergence speed while reaching very competitive accuracy rates.

    Comment: Published at Image and Vision Computing
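    Contribution (i), random projections of the label space, can be sketched as below; the Gaussian projection matrix, the dimensions, and the nearest-target (cosine) decoding rule are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: embed 1000 classes into 64 target dimensions.
n_classes, d = 1000, 64
P = rng.normal(size=(n_classes, d)) / np.sqrt(d)  # random projection rows

def embed(class_index):
    """Low-dimensional training target for a class: the projection of
    its one-hot vector, i.e. simply row `class_index` of P."""
    return P[class_index]

def predict(output_vector):
    """Decode a d-dimensional network output to the class whose
    embedding has the highest cosine similarity with it."""
    scores = P @ output_vector
    norms = np.linalg.norm(P, axis=1) * np.linalg.norm(output_vector)
    return int(np.argmax(scores / norms))
```

    Because random Gaussian rows are nearly orthogonal in high dimensions, distinct class embeddings stay well separated, which is what makes this decoding reliable at zero training-time cost.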

    Application of Recognition Input Squinting and Error-Correcting Output Coding to Convolutional Neural Networks

    The convolutional neural network (CNN) is a type of artificial neural network that has been successful in addressing many computer-vision classification problems. This thesis considers problems related to optical character recognition by CNNs when few training samples are available. Two techniques are proposed that can improve the application of CNNs to such problems, and their benefits are demonstrated experimentally on subsets of two labelled databases: MNIST (handwritten digits) and CENPARMI-MPC (machine-printed characters).

    The first technique is novel and is called "Recognition Input Squinting". It involves taking the input image to be recognized and applying a set of geometric transformations to it to produce a set of squinted images. The trained CNN classifier then recognizes each of these generated input images and computes an overall recognition confidence score. It is shown that this technique yields superior recognition precision compared to the case where a single input image is recognized without squinting.

    The second technique is an application of error-correcting output coding (ECOC) to the CNN. Each class to be recognized is assigned a codeword from an appropriately chosen error-correcting code's codebook, and the CNN is trained using these codeword labels. At recognition time, the output class is selected according to a minimum code-distance criterion. It is shown that this technique provides better recognition precision than classic place (one-hot) output coding.
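    The squinting idea, scoring several geometrically transformed copies of the input and combining their confidences, can be sketched as follows. The translation-only transform set and the `classifier` callable are hypothetical simplifications; the thesis uses a richer set of geometric transformations:

```python
import numpy as np

def shift(image, dx, dy):
    """Translate a 2-D array by (dx, dy) with zero padding at the border."""
    h, w = image.shape
    out = np.zeros_like(image)
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        image[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def squinted_confidence(classifier, image,
                        shifts=((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))):
    """Average the class-probability vectors over the squinted variants
    and return the winning label with its mean confidence."""
    probs = np.mean([classifier(shift(image, dx, dy)) for dx, dy in shifts],
                    axis=0)
    return int(np.argmax(probs)), float(np.max(probs))
```

    Averaging over perturbed inputs rewards classes whose score is stable under small geometric changes, which is the intuition behind the improved precision reported above.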

    Image speech combination for interactive computer assisted transcription of handwritten documents

    Handwritten document transcription aims to obtain the contents of a document in order to provide efficient access to, among others, digitised historical documents. The increasing number of historical documents published by libraries and archives makes this an important task. In this context, the use of image processing and understanding techniques, in conjunction with assistive technologies, reduces the time and human effort required to obtain the final perfect transcription. The assistive transcription system proposes a hypothesis, usually derived from a recognition process on the handwritten text image. The professional transcriber's feedback can then be used to obtain an improved hypothesis and speed up the final transcription. In this framework, a speech signal corresponding to the dictation of the handwritten text can be used as an additional source of information. This multimodal approach, which combines the image of the handwritten text with the speech of the dictation of its contents, could improve the hypotheses (initial and improved) offered to the transcriber. In this paper we study the feasibility of a multimodal interactive transcription system for an assistive paradigm known as Computer Assisted Transcription of Text Images. Different techniques are tested for obtaining the multimodal combination in this framework. The proposed multimodal approach yields a significant reduction of transcription effort with some multimodal combination techniques, allowing for a faster transcription process.

    Work partially supported by projects READ-674943 (European Union's H2020), SmartWays-RTC-2014-1466-4 (MINECO, Spain), and CoMUN-HaT-TIN2015-70924-C2-1-R (MINECO/FEDER), and by Generalitat Valenciana (GVA), Spain, under reference PROMETEOII/2014/030.

    Granell, E.; Romero, V.; Martínez-Hinarejos, C. (2019). Image speech combination for interactive computer assisted transcription of handwritten documents. Computer Vision and Image Understanding, 180:74-83. https://doi.org/10.1016/j.cviu.2019.01.009
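    One simple multimodal combination strategy, log-linear interpolation of per-hypothesis scores from the handwriting and speech recognizers, can be sketched as follows; the weight `alpha` and the score values are illustrative and not taken from the paper:

```python
import numpy as np

def combine(image_logprobs, speech_logprobs, alpha=0.6):
    """Score each transcription hypothesis by a weighted sum of the two
    modalities' log-probabilities and return the best hypothesis index."""
    combined = (alpha * np.asarray(image_logprobs)
                + (1 - alpha) * np.asarray(speech_logprobs))
    return int(np.argmax(combined))

# The image model slightly prefers hypothesis 0, the speech model
# strongly prefers hypothesis 1; the combination picks hypothesis 1.
assert combine([-1.0, -1.2], [-3.0, -0.5]) == 1
```

    In an interactive setting, the same combination would be re-run after each transcriber correction, with the corrected prefix constraining both recognizers.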

PUBLIC OCR SIGNAGE RECOGNITION WITH SKEW & SLANT CORRECTION FOR VISUALLY IMPAIRED PEOPLE

    This paper presents a hybrid OCR recognition model for visually impaired people (VIP). The VIP often encounter problems navigating independently because they are blind or have poor vision, and the discrimination they face due to this limitation can lead to depression. They therefore require efficient technological assistance in their daily activities. The objective of this paper is to propose a hybrid model for optical character recognition (OCR) that detects and corrects skewed and slanted characters on public signage. The proposed hybrid model should be able to integrate with a speech synthesizer for VIP signage recognition: it captures an image of a public signage and converts it into machine-readable text in a text file, and the text is then read by a speech synthesizer and translated to voice as the output. In this paper, the hybrid model, which consists of the Canny method, the Hough transformation, and the shearing transformation, is used to detect and correct skewed and slanted images. An experiment was conducted to test the hybrid model's performance with 5 blindfolded subjects. The OCR hybrid recognition model achieved a Recognition Rate (RR) of 82.7%. This concept of public signage recognition is proven by the proposed hybrid model, which integrates OCR and a speech synthesizer.
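    The shearing-transformation step used for slant correction can be sketched in isolation. The Canny and Hough skew-detection stages are omitted here, and the row-wise shear below is an illustrative assumption about how the correction is applied, not the paper's exact implementation:

```python
import numpy as np

def unshear(image, theta_degrees):
    """Shear the image horizontally to undo a right slant of
    `theta_degrees`: each row shifts left in proportion to its
    distance above the baseline (bottom row), with zero fill."""
    h, w = image.shape
    out = np.zeros_like(image)
    slope = np.tan(np.radians(theta_degrees))
    for y in range(h):
        dx = int(round((h - 1 - y) * slope))  # rows nearer the top move further
        if dx >= 0:
            out[y, :w - dx] = image[y, dx:]
        else:
            out[y, -dx:] = image[y, :w + dx]
    return out

# A 45-degree diagonal stroke straightens into a vertical one.
stroke = np.zeros((4, 4))
for y in range(4):
    stroke[y, 3 - y] = 1.0
assert (unshear(stroke, 45)[:, 0] == 1.0).all()
```

    In the full pipeline, `theta_degrees` would come from the Hough-based line-angle estimate rather than being known in advance.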