
    Handwritten Devanagari numeral recognition

    Optical character recognition (OCR) plays a vital role in the modern world. OCR can be used to solve many complex problems and thus make human work easier. An OCR system takes a scanned digital image of printed or handwritten text as its input; it can be used in postal departments for mail sorting and in other offices. Much work has been done on the English alphabet, but Indian scripts are now an active area of interest for researchers. Devanagari is one such Indian script. Research on the recognition of its alphabet is ongoing, but much less attention has been given to numerals. Here an attempt was made to recognize handwritten Devanagari numerals. The central part of any OCR system is feature extraction, since the more informative the extracted features, the higher the accuracy. Two methods were used for feature extraction. The first was moment based; of the many moment-based methods, we preferred the Tchebichef moment for its better image representation capability. The second method was based on contour curvature; the contour is an important boundary feature for finding similarity between shapes. After feature extraction, the extracted features must be classified, and an artificial neural network (ANN) was used for this purpose. Of the many available classifiers, we preferred the ANN because it is easy to handle, less error prone, and considerably more accurate than the other classifiers considered. Classification was first done with each of the two extracted feature sets individually, and the features were then cascaded to increase the accuracy.
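    As a rough illustration of the moment-based feature extractor, the sketch below computes a matrix of orthonormal discrete Tchebichef moments up to a given order. It relies on the fact that the discrete Tchebichef polynomials are the orthonormal polynomials for the uniform weight on the grid {0, ..., N-1}, so a QR factorization of a Vandermonde matrix recovers them up to sign; the function names and the default order are placeholders of mine, not details from the paper.

        import numpy as np

        def tchebichef_basis(N, order):
            # Monomials 1, x, ..., x^order evaluated on the grid 0..N-1.
            x = np.arange(N, dtype=float)
            V = np.vander(x, order + 1, increasing=True)
            # QR orthonormalization of the Vandermonde columns yields the
            # orthonormal discrete Tchebichef polynomials up to sign.
            # Keep `order` modest: Vandermonde QR degrades numerically.
            Q, _ = np.linalg.qr(V)
            return Q  # shape (N, order + 1)

        def tchebichef_moments(img, order=8):
            # T[p, q] = sum_y sum_x t_p(y) t_q(x) img[y, x]
            Ty = tchebichef_basis(img.shape[0], order)
            Tx = tchebichef_basis(img.shape[1], order)
            return Ty.T @ img @ Tx  # (order + 1) x (order + 1) moments

    Flattening the resulting moment matrix gives a fixed-length vector that could serve as the ANN input, possibly cascaded with the contour-curvature features as the abstract describes.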

    A new approach for centerline extraction in handwritten strokes: an application to the constitution of a code book

    We present in this paper a new method for analyzing and decomposing handwritten documents into glyphs (graphemes) and for building their associated code book. The techniques involved are inspired by image processing methods in a broad sense and by mathematical models involving graph coloring. Our approach provides, first, a rapid and detailed characterization of handwritten shapes based on dynamic tracking of the handwriting (curvature, thickness, direction, etc.), and second, a very efficient analysis method for the categorization of basic shapes (graphemes). The tools that we have produced enable paleographers to study a large volume of manuscripts quickly and more accurately, and to extract a large number of characteristics that are specific to an individual or an era.
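    A minimal sketch of one of the tracked quantities: signed curvature along an ordered centerline, estimated from smoothed finite differences. The function name, the Gaussian smoothing width, and the assumption that the stroke points arrive already ordered are mine, not the paper's.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def stroke_curvature(xs, ys, sigma=2.0):
            # Smooth the sampled centerline to suppress digitization
            # noise before differentiating.
            x = gaussian_filter1d(np.asarray(xs, dtype=float), sigma)
            y = gaussian_filter1d(np.asarray(ys, dtype=float), sigma)
            dx, dy = np.gradient(x), np.gradient(y)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            # kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
            denom = np.maximum((dx**2 + dy**2) ** 1.5, 1e-12)
            return (dx * ddy - dy * ddx) / denom

    Per-point curvature, together with local thickness and direction, could then be quantized to group similar stroke fragments into a code book.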

    Document preprocessing and fuzzy unsupervised character classification

    This dissertation presents document preprocessing and fuzzy unsupervised character classification for automatically reading daily-received office documents that have complex layout structures, such as multiple columns and mixed-mode content of text, graphics, and half-tone pictures. First, the block segmentation algorithm is performed, based on a simple two-step run-length smoothing, to decompose a document into single-mode blocks. Next, block classification is performed, based on clustering rules, to classify each block into one of several types, such as text, horizontal or vertical lines, graphics, and pictures. The mean white-to-black transition is shown to be an invariant for textual blocks and is useful for block discrimination. A fuzzy model for unsupervised character classification is designed to improve the robustness, correctness, and speed of the character recognition system. The classification procedure is divided into two stages. The first stage separates the characters into seven typographical categories based on the word structures of a text line. The second stage uses pattern matching to classify the characters in each category into a set of fuzzy prototypes based on a nonlinear weighted similarity function. A fuzzy model of unsupervised character classification, which is more natural in the representation of prototypes for character matching, is defined, and the weighted fuzzy similarity measure is explored. The characteristics of the fuzzy model are discussed and used to speed up the classification process. After classification, the character recognition procedure need only be applied to the limited set of fuzzy prototypes. To avoid information loss and extra distortion, a topography-based approach is proposed that applies directly to the fuzzy prototypes to extract their skeletons. First, a convolution with a bell-shaped function is performed to obtain a smooth surface. Second, the ridge points are extracted by rule-based topographic analysis of the structure. Third, a membership function is assigned to the ridge points, with values indicating their degrees of membership with respect to the skeleton of an object. Finally, the significant ridge points are linked to form the strokes of the skeleton, and cues from the eigenvalue variation are used to deal with degradation and preserve connectivity. Experimental results show that our algorithm can reduce the deformation of junction points and correctly extract the whole skeleton even when a character is broken into pieces. For characters that are merged together, the breaking candidates can easily be located by searching for saddle points. A pruning algorithm is then applied at each breaking position. Finally, multiple-context confirmation can be applied to increase the reliability of the breaking hypotheses.
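    To make the first step concrete, here is a minimal sketch of two-step run-length smoothing (RLSA) on a binary image (1 = ink, 0 = background): background runs shorter than a threshold are filled along each axis, and the two smoothed images are ANDed. The threshold values below are placeholders; the dissertation's actual parameters are not given here.

        import numpy as np

        def fill_short_runs(line, threshold):
            # Fill background (0) runs of length <= threshold with ink (1).
            out = line.copy()
            run_start = None
            for i, v in enumerate(line):
                if v == 0:
                    if run_start is None:
                        run_start = i
                elif run_start is not None:
                    if i - run_start <= threshold:
                        out[run_start:i] = 1
                    run_start = None
            return out

        def rlsa(img, h_thresh=30, v_thresh=20):
            # Horizontal pass, vertical pass, then AND to keep only
            # regions smeared in both directions (candidate blocks).
            horiz = np.apply_along_axis(fill_short_runs, 1, img, h_thresh)
            vert = np.apply_along_axis(fill_short_runs, 0, img, v_thresh)
            return horiz & vert

    Connected components of the result give the single-mode blocks, which can then be classified by the clustering rules the abstract describes.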

    Shape analysis and description based on the isometric invariances of topological skeletonization

    In this dissertation, we explore the problem of how to describe the shape of an object in 2D and 3D with a set of features that are invariant to isometric transformations. We focus our approach on the well-known Medial Axis Transform and its topological properties. We aim to study two problems. The first is how to find a shape representation of a segmented object that exhibits rotation, translation, and reflection invariance. The second is how to build a machine learning pipeline that uses the isometric invariance of the shape representation to do both classification and retrieval. Our proposed solution demonstrates competitive results compared to state-of-the-art approaches. We base our shape representation on the medial axis transform (MAT), sometimes called the topological skeleton. Accepted and well-studied properties of the medial axis include: homotopy preservation, rotation invariance, mediality, one-pixel thickness, and the ability to fully reconstruct the object. These properties make the MAT a suitable input for creating shape features; however, several problems arise because not all skeletonization methods satisfy all of the above-mentioned properties at the same time. In general, skeletons based on thinning approaches preserve topology but are noise sensitive and do not allow a proper reconstruction; they are also not invariant to rotations. Voronoi skeletons also preserve topology and are rotation invariant, but carry no information about the thickness of the object, making reconstruction impossible. The Voronoi skeleton is an approximation of the real skeleton: the denser the sampling of the boundary, the better the approximation, but a denser sampling makes the Voronoi diagram more computationally expensive. In contrast, distance transform methods allow the reconstruction of the original object by providing the distance from every pixel of the skeleton to the boundary. Moreover, they exhibit an acceptable degree of the properties listed above, although noise sensitivity remains an issue. We therefore selected distance transform medial axis methods as our skeletonization strategy, and focused on creating a new noise-free approach that solves the contour noise problem. To effectively classify an object, or perform any other task with features based on its shape, the descriptor needs to be in a normalized, compact form: Φ should map every shape Ω to the same vector space R^n. This is not possible with skeletonization methods alone, because the skeletons of different objects have different numbers of branches and different numbers of points, even when the objects belong to the same category. Consequently, we developed a strategy to extract features from the skeleton through the map Φ, which we used as input to a machine learning approach. After developing our method for robust skeletonization, the next step was to use the resulting skeleton in the machine learning pipeline to classify objects into previously defined categories. We developed a set of skeletal features that were used as input data to the machine learning architectures. We ran experiments on the MPEG7 and ModelNet40 datasets to test our approach in both 2D and 3D. Our experiments show results comparable with the state of the art in shape classification and retrieval, and they also show that our pipeline and our skeletal features exhibit some degree of invariance to isometric transformations.
    In this study, we sought to design an isometric-invariant shape descriptor through robust skeletonization, paired with a feature extraction pipeline that exploits that invariance through a machine learning methodology. We conducted a set of classification and retrieval experiments over well-known benchmarks to validate the proposed method.
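    As a hedged illustration of a fixed-length map Φ built on a distance-transform medial axis, the sketch below maps a binary shape to a normalized histogram of medial-axis radii; rotations, translations, and reflections leave the radius distribution unchanged. This is an illustrative stand-in of mine, not the dissertation's actual feature set.

        import numpy as np
        from skimage.morphology import medial_axis

        def phi(mask, n_bins=32):
            # Distance-transform medial axis: `dist` stores, for each
            # skeleton pixel, its distance to the boundary, which is
            # exactly what makes reconstruction possible.
            skel, dist = medial_axis(mask, return_distance=True)
            radii = dist[skel]  # assumes a non-empty shape
            # Normalize by the largest radius so shapes of different
            # sizes land in the same range before binning.
            hist, _ = np.histogram(radii / max(radii.max(), 1e-9),
                                   bins=n_bins, range=(0.0, 1.0))
            # Return a probability vector in R^n_bins.
            return hist / max(hist.sum(), 1)

    Every shape, regardless of how many branches its skeleton has, lands in the same R^32, so the vector can feed a standard classifier or a retrieval index.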

    A Relaxation Scheme for Mesh Locality in Computer Vision.

    Parallel processing has been considered the key to building the computer systems of the future and has become a mainstream subject in Computer Science. Computer Vision applications are computationally intensive and require parallel approaches to exploit their intrinsic parallelism. This research addresses this problem for low-level and intermediate-level vision problems. The contributions of this dissertation are a unified scheme, based on probabilistic relaxation labeling, that captures the localities of image data, and the use of this scheme to develop efficient parallel algorithms for Computer Vision problems. We begin by investigating the problem of skeletonization. The technique of pattern matching, which exhausts all the possible interaction patterns between a pixel and its neighboring pixels, captures the locality of this problem and leads to an efficient One-pass Parallel Asymmetric Thinning Algorithm (OPATA_8). The use of the 8-distance, or chessboard distance, in this algorithm not only improves the quality of the resulting skeletons but also improves the efficiency of the computation. This new algorithm plays an important role in a hierarchical route planning system that extracts high-level topological information from cross-country mobility maps, which greatly speeds up route searching over large areas. We generalize the neighborhood interaction description method to include more complicated applications such as edge detection and image restoration. The proposed probabilistic relaxation labeling scheme exploits parallelism by discovering local interactions in neighboring areas and by describing them effectively. The scheme consists of a transformation function and a dictionary construction method. The non-linear transformation function is derived from Markov Random Field theory and efficiently combines evidence from neighborhood interactions. The dictionary construction method provides an efficient way to encode these localities. A case study applies the scheme to the problem of edge detection. The relaxation step of this edge-detection algorithm greatly reduces noise effects, achieves better edge localization at line ends and corners, and plays a crucial role in refining the edge output. Experiments on both synthetic and natural images show that our algorithm converges quickly and is robust in noisy environments.
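    A minimal sketch of the classical probabilistic relaxation labeling update on a grid of pixels, in the spirit of the scheme described above; the compatibility matrix, the 4-neighborhood, and the fixed iteration count are simplifying assumptions of mine, and np.roll wraps at the image borders, which a real implementation would handle by padding.

        import numpy as np

        def relax_labels(p, compat, iters=10):
            # p      : (H, W, L) initial label probabilities per pixel
            # compat : (L, L) compatibility r(lab, lab') between a pixel's
            #          label and a neighbor's label
            for _ in range(iters):
                # Support q_i(lab) = sum over the 4-neighbors j and labels
                # lab' of r(lab, lab') * p_j(lab').
                q = np.zeros_like(p)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    shifted = np.roll(np.roll(p, dy, axis=0), dx, axis=1)
                    q += shifted @ compat.T
                # Multiplicative update with renormalization, as in the
                # Rosenfeld-Hummel-Zucker formulation.
                p = p * q
                p /= np.maximum(p.sum(axis=2, keepdims=True), 1e-12)
            return p

    For edge detection, L = 2 (edge / no-edge), and a compatibility matrix that rewards collinear edge neighbors suppresses isolated noise responses while preserving coherent contours.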