
    A limited-size ensemble of homogeneous CNN/LSTMs for high-performance word classification

    The strength of long short-term memory neural networks (LSTMs) lies more in handling sequences of variable length than in handling the geometric variability of image patterns. In this paper, an end-to-end convolutional LSTM neural network is used to handle both geometric variation and sequence variability. The best results for LSTMs are often based on large-scale training of an ensemble of network instances. We show that high performance can be reached on a common benchmark set with just five such networks, given proper data augmentation, a proper coding scheme, and a proper voting scheme. The networks have similar architectures (convolutional neural network (CNN): five layers; bidirectional LSTM (BiLSTM): three layers, followed by a connectionist temporal classification (CTC) processing step). The approach uses differently scaled input images and different feature map sizes. Three datasets are used: the standard benchmark RIMES dataset (French), a historical handwritten dataset, KdK (Dutch), and the standard benchmark George Washington (GW) dataset (English). The final performance obtained on the word-recognition test of RIMES was 96.6%, a clear improvement over other state-of-the-art approaches that did not use a pre-trained network. On the KdK and GW datasets, our approach also shows good results. The proposed approach is deployed in the Monk search engine for historical-handwriting collections.
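
    As a hedged illustration of the architecture described above, the sketch below wires a five-layer CNN into a three-layer bidirectional LSTM whose per-timestep log-probabilities would be trained with torch.nn.CTCLoss. Layer widths, the hidden size, and the vocabulary size are illustrative assumptions, not the paper's settings.

```python
# Minimal PyTorch sketch of a CNN (5 layers) -> BiLSTM (3 layers) -> CTC
# word recognizer. All sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class CNNBiLSTMCTC(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in (16, 32, 48, 64, 64):            # five conv layers
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d((2, 1))]           # shrink height only
            in_ch = out_ch
        self.cnn = nn.Sequential(*layers)
        self.rnn = nn.LSTM(input_size=64, hidden_size=128, num_layers=3,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, n_classes + 1)    # +1 for the CTC blank

    def forward(self, images):                         # (B, 1, H, W)
        f = self.cnn(images)                           # (B, 64, H', W)
        f = f.mean(dim=2).permute(0, 2, 1)             # width as time axis
        out, _ = self.rnn(f)
        return self.fc(out).log_softmax(-1)            # CTC-ready log-probs
```

    An ensemble of five such networks, each trained on differently scaled inputs, would then vote on the decoded word.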

    Regime identification and sequence clustering for financial market prediction

    Regime switching analysis is extensively advocated to capture the complex behaviors underlying financial time series for market prediction. Two main disadvantages of current regime-identification approaches are raised in the literature: 1) the lack of a mechanism for identifying regimes dynamically, restricting models to switching among a fixed set of regimes with a static transition probability matrix; 2) the failure to exploit cross-sectional regime dependencies among time series, since not all time series are synchronized to the same regime. As numerical time series can be symbolized into categorical sequences, a third issue arises: 3) the lack of a meaningful and effective measure of similarity between chronologically dependent categorical values, needed to identify sequence clusters that could serve as regimes for market forecasting. In this thesis, we propose a dynamic regime identification model that identifies regimes dynamically with a time-varying transition probability, addressing the first issue. For the second issue, we propose a cluster-based regime identification model that accounts for the cross-sectional regime dependencies underlying financial time series. For the last issue, we develop a dynamic order Markov model that uses the information underlying frequent consecutive patterns and sparse patterns to identify clusters that can serve as regimes on categorized financial time series. Experiments on synthetic and real-world datasets show that our two regime models perform well on both regime identification and forecasting, and that our dynamic order Markov clustering model also performs well at identifying clusters from categorical sequences.
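
    The thesis's dynamic order Markov clustering is beyond a short sketch, but the following hedged baseline illustrates the underlying setup: each symbolized (categorical) series gets a first-order transition matrix, and the sequences are clustered on those matrices. The symbol alphabet, the synthetic data, and the use of plain k-means are invented for illustration and are not the thesis's method.

```python
# Hedged baseline: cluster categorical sequences on their first-order
# Markov transition matrices (not the thesis's dynamic order model).
import numpy as np
from sklearn.cluster import KMeans

def transition_matrix(seq, n_symbols):
    """Row-normalised first-order transition counts, add-one smoothed."""
    counts = np.ones((n_symbols, n_symbols))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n_symbols = 4
sequences = [rng.integers(0, n_symbols, size=200) for _ in range(30)]
features = np.stack([transition_matrix(s, n_symbols).ravel()
                     for s in sequences])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(labels)
```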

    Two bigrams based language model for auto correction of Arabic OCR errors

    In optical character recognition (OCR), the characteristics of Arabic text cause more errors than in English text. In this paper, a two-bigram language model that uses Wikipedia's database is presented. The method can perform automatic detection and correction of non-word errors in Arabic OCR text, and automatic detection of real-word errors. It consists of two parts: extracting context information from Wikipedia's database, and implementing the automatic detection and correction of incorrect words. The method can be applied to any language with few modifications. The experimental results show successful extraction of context information from Wikipedia's articles, and that using this method can reduce the error rate of Arabic OCR text.
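
    A minimal sketch of the two-bigram idea, under the assumption that a non-word token is re-scored by the frequency of its bigram with the left neighbor plus its bigram with the right neighbor; the toy lexicon and counts below stand in for statistics extracted from Wikipedia.

```python
# Toy two-bigram correction: pick the lexicon word best supported by the
# left and right bigram counts (falls back arbitrarily with no evidence).
from collections import Counter

lexicon = {"the", "cat", "sat", "on", "mat"}
bigram_counts = Counter({("the", "cat"): 9, ("cat", "sat"): 7,
                         ("sat", "on"): 8, ("the", "mat"): 5})

def correct(tokens):
    out = list(tokens)
    for i, tok in enumerate(tokens):
        if tok in lexicon:                     # only non-word errors here
            continue
        left = tokens[i - 1] if i > 0 else None
        right = tokens[i + 1] if i + 1 < len(tokens) else None
        out[i] = max(lexicon, key=lambda c: bigram_counts[(left, c)]
                                            + bigram_counts[(c, right)])
    return out

print(correct(["the", "c4t", "sat", "on", "the", "mat"]))  # "c4t" -> "cat"
```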

    Towards robust real-world historical handwriting recognition

    In this thesis, we build a bridge from the past to the future by using artificial-intelligence methods for text recognition in a historical Dutch collection of the Natuurkundige Commissie, which explored Indonesia (1820-1850). In spite of the successes of systems like ChatGPT, reading historical handwriting is still quite challenging for AI. Whereas GPT-like methods work on digital text, historical manuscripts are available only as extremely diverse collections of (pixel) images. Despite their great results, current deep-learning methods are data-greedy and time-consuming, depend heavily on humanities experts for labeling, and require machine-learning experts to design the models. Ideally, the use of deep-learning methods should require minimal human effort, let an algorithm observe the evolution of the training process, and avoid inefficient use of the already sparse labeled data. We present several approaches to these problems, aiming to improve the robustness of current methods and the autonomy of training. We applied our novel word and line text recognition approaches to nine data sets differing in time period, language, and difficulty: three locally collected historical Latin-script data sets from Naturalis, Leiden; four public Latin-script benchmark data sets for comparability with other approaches; and two Arabic data sets. Using ensemble voting with just five neural networks, we achieved a level of accuracy that required hundreds of neural networks in earlier studies. Moreover, we increased the speed of evaluating each training epoch without needing labeled data.
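
    As a hedged sketch of the ensemble-voting step mentioned above: five networks each propose a transcription for the same word image, and the plurality label wins. The thesis's actual coding and voting schemes may differ.

```python
# Plurality voting over word hypotheses from an ensemble of recognizers.
from collections import Counter

def vote(hypotheses):
    """Return the most common transcription among the ensemble outputs."""
    (winner, _), = Counter(hypotheses).most_common(1)
    return winner

print(vote(["haven", "haven", "haven", "havens", "haven"]))  # -> "haven"
```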

    Improving text recognition accuracy using syntax-based techniques

    Advisors: Guido Costa Souza de Araújo, Marcio Machado Pereira. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação. Due to the large amount of visual information available today, text detection and recognition in natural scene images have begun to receive increasing attention. The goal of this task is to locate the regions of an image that contain text and to recognize it; it is typically divided into two parts, text detection and text recognition. Although the techniques for solving this problem have improved in recent years, excessive hardware usage and the corresponding high computational costs have considerably hindered the execution of such tasks on highly constrained embedded systems (e.g., cellphones and smart TVs). Text detection and recognition methods that run on such systems exist, but they do not perform well compared to state-of-the-art solutions on other computing platforms. And although various post-correction methods currently improve results on scanned historical documents, little effort has gone into applying them to scene-image results. In this work, we explore a set of post-correction methods and propose new heuristics to improve results on scene images, using the Tesseract text recognition software as a prototyping base. We analyzed the main error-correction methods available in the literature and found that the best combination included substitution, elimination of the last characters, and compounding. In addition, the results improved when we introduced a new heuristic based on the frequency with which candidate results appear in frequency databases for categories such as magazines, newspapers, fiction texts, and the web. To locate errors and avoid overcorrection, different restrictions obtained by training on the Tesseract database were considered; we selected the certainty of Tesseract's best result as the best restriction. The experiments were carried out on seven databases used in text recognition and text detection/recognition competitions. On all databases, for both training and test data, the results of Tesseract with the proposed post-correction method improved considerably over those obtained with Tesseract alone.
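
    A minimal sketch of the explored post-correction pipeline, assuming candidates are generated by confusable-character substitution and trailing-character elimination and then ranked by corpus frequency; the confusion pairs and frequency table are toy stand-ins for the statistics trained in the work.

```python
# Toy OCR post-correction: generate candidates, rank by corpus frequency.
CONFUSIONS = {"0": "o", "1": "l", "5": "s", "rn": "m"}   # assumed pairs
FREQ = {"corner": 120, "come": 300, "cool": 80}          # toy frequencies

def candidates(word):
    cands = {word}
    for bad, good in CONFUSIONS.items():      # substitution
        if bad in word:
            cands.add(word.replace(bad, good))
    for k in (1, 2):                          # eliminate last characters
        if len(word) > k:
            cands.add(word[:-k])
    return cands

def post_correct(word):
    return max(candidates(word), key=lambda c: FREQ.get(c, 0))

print(post_correct("c0rner"))  # -> "corner"
```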

    Evaluation of preprocessors for neural network speaker verification


    Unsupervised learning of pictures by genetic hybridization of hidden Markov chains

    This paper presents a learning algorithm using hidden Markov models (HMMs) and genetic algorithms (GAs). Two standard problems to be solved when working with HMMs are determining the probabilities and the number of hidden states of the learned models. Generally, the number of states is determined either by trial and error, which requires experimentation, or from available background knowledge. The presented algorithm uses a GA to determine both the number of states and the probabilities of the learned HMMs at the same time. This hybrid algorithm uses the Baum-Welch algorithm to precisely optimize the probabilities of the HMMs. Several algorithms, hybrid or not, are compared on a face recognition task. The results obtained highlight the strength of our approach for this problem.
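
    A compact, hedged sketch of the hybrid idea: a genetic-style search over the number of hidden states (with random restarts of the probabilities), scoring each candidate discrete-emission HMM by forward-algorithm log-likelihood. The Baum-Welch refinement used in the paper is omitted here, and all data are synthetic.

```python
# Toy GA over HMM structure: evolve the number of hidden states, scoring
# candidates by (scaled) forward-algorithm log-likelihood.
import numpy as np

rng = np.random.default_rng(1)

def random_hmm(n_states, n_symbols):
    A = rng.dirichlet(np.ones(n_states), size=n_states)   # transitions
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)  # emissions
    pi = rng.dirichlet(np.ones(n_states))                 # initial probs
    return A, B, pi

def log_likelihood(obs, A, B, pi):
    """Scaled forward algorithm for a discrete-emission HMM."""
    ll = 0.0
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        s = alpha.sum()
        ll += np.log(s)
        alpha = (alpha / s) @ A * B[:, o]
    return ll + np.log(alpha.sum())

obs = rng.integers(0, 6, size=300)            # synthetic symbol sequence
population = [random_hmm(rng.integers(2, 8), 6) for _ in range(20)]
for _ in range(15):                           # GA loop: select + restart
    scored = sorted(population, key=lambda m: log_likelihood(obs, *m),
                    reverse=True)
    parents = scored[:5]
    population = parents + [random_hmm(p[0].shape[0], 6)
                            for p in parents for _ in range(3)]
best = max(population, key=lambda m: log_likelihood(obs, *m))
print("best number of states:", best[0].shape[0])
```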