2 research outputs found

    Use of Markov processes in writing recognition

    In this paper, we present a brief survey of the use of different types of Markov models in writing recognition. Recognition is performed by computing the a posteriori probability of a pattern's class. This computation involves several terms which, according to dependency hypotheses tied to the application at hand, can be decomposed into elementary conditional probabilities. Under the assumption that the pattern can be modeled as a one- or two-dimensional stochastic process (random field) with Markovian properties, local maximization of these probabilities yields the maximum pattern likelihood. Several cases of conditioning of the elementary sub-pattern probabilities are studied throughout the article, each accompanied by practical illustrations from the recognition of printed and/or handwritten writing.
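    To make the decomposition concrete (the notation below is ours, not the paper's): for a one-dimensional observation sequence x_1, ..., x_T, Bayes' rule together with a first-order Markov assumption factors the posterior-maximizing decision into the kind of elementary conditional probabilities the abstract mentions:

    \hat{c} \;=\; \arg\max_{c}\, P(c \mid x_{1},\dots,x_{T})
            \;=\; \arg\max_{c}\, P(c)\, P(x_{1},\dots,x_{T} \mid c)
            \;\approx\; \arg\max_{c}\, P(c)\, P(x_{1} \mid c) \prod_{t=2}^{T} P(x_{t} \mid x_{t-1},\, c)

    In the two-dimensional (random-field) case, the chain conditioning on x_{t-1} is replaced by conditioning on a local neighbourhood of each site.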

    Stochastic Trajectory Modeling for Recognition of Unconstrained Handwritten Words

    In this paper we describe an off-line handwritten word recognition (HWR) system applied to the identification of literal French check amounts. It consists of three successive levels, denoted character, word and phrase level, each related to the previous ones via conditional probability distributions. Training is done on character samples extracted from amount images, which are modeled as trajectories in some feature space. At the word level, guided by a dictionary, an internal character segmentation algorithm is used to maximize a global word probability measure. At the last level, a stochastic grammar is proposed for the a priori generation probability of a phrase. Results obtained on a database of 1779 amounts provided by the SRTP are encouraging, showing our system to be open to further improvements.
    1 Introduction. Because of the large variety of handwriting styles, recognition is very difficult. Different categories of styles (handprinted, pure cursive) may ...
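    As a rough sketch of the dictionary-guided word-level search described above (our own illustration, not the authors' code): the names best_word_log_prob, char_log_likelihood and word_log_prior are hypothetical, and char_log_likelihood stands in for whatever character model the character-level training stage produces. Each dictionary word is aligned to a sequence of elementary segments (graphemes), a character being allowed to absorb one or two consecutive graphemes, and the word maximizing likelihood plus prior is returned.

import math
from typing import Callable, Sequence


def best_word_log_prob(
    graphemes: Sequence[object],
    word: str,
    char_log_likelihood: Callable[[Sequence[object], str], float],
    max_span: int = 2,
) -> float:
    """Max over segmentations of sum_i log P(graphemes assigned to char_i | char_i)."""
    n, m = len(graphemes), len(word)
    neg = float("-inf")
    # dp[i][j] = best log-probability of explaining the first i graphemes
    # with the first j characters of the word.
    dp = [[neg] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # character j may absorb 1 .. max_span graphemes ending at position i
            for span in range(1, min(max_span, i) + 1):
                prev = dp[i - span][j - 1]
                if prev == neg:
                    continue
                ll = char_log_likelihood(graphemes[i - span:i], word[j - 1])
                dp[i][j] = max(dp[i][j], prev + ll)
    return dp[n][m]


def recognize(graphemes, dictionary, char_log_likelihood, word_log_prior):
    """Pick argmax over the closed dictionary of  log P(graphemes | w) + log P(w)."""
    return max(
        dictionary,
        key=lambda w: best_word_log_prob(graphemes, w, char_log_likelihood)
        + word_log_prior(w),
    )

    A phrase-level stochastic grammar, as mentioned in the abstract, would then supply the word priors (or a contextual score over the whole amount) instead of a flat word_log_prior.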