    Analyzing Ancient Maya Glyph Collections with Contextual Shape Descriptors

    This paper presents an original approach for shape-based analysis of ancient Maya hieroglyphs based on an interdisciplinary collaboration between computer vision and archaeology. Our work is guided by the realistic needs of archaeologists and scholars who critically need support for search and retrieval tasks in large Maya imagery collections. Our paper has three main contributions. First, we give an overview of our interdisciplinary approach towards improving the documentation, analysis, and preservation of Maya pictographic data. Second, we present an objective evaluation of the performance of two state-of-the-art shape-based contextual descriptors (Shape Context and Generalized Shape Context) in retrieval tasks, using two datasets of syllabic Maya glyphs. Based on the identification of their limitations, we propose a new shape descriptor named Histogram of Orientation Shape Context (HOOSC), which is more robust and better suited to the description of Maya hieroglyphs. Third, we present what to our knowledge constitutes the first automatic analysis of the visual variability of syllabic glyphs across historical periods and geographic regions of the ancient Maya world via the HOOSC descriptor. Overall, our approach is promising, as it improves performance on the retrieval task, has been successfully validated from an epigraphic viewpoint, and has the potential to offer both novel insights in archaeology and practical solutions for the real daily needs of scholars.
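
    For readers unfamiliar with this family of descriptors, the following is a minimal sketch of a plain shape-context-style descriptor: a log-polar histogram of relative point positions, computed at points sampled from a binary glyph. HOOSC replaces the raw point counts in each spatial bin with histograms of local orientation; that refinement is omitted here, and the bin counts and radial limits are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def shape_context(points, n_radial=5, n_angular=12, inner=0.125, outer=2.0):
    """Log-polar histograms of relative point positions (shape-context style).

    points: (N, 2) array of contour/skeleton samples from a binary glyph image.
    Returns an (N, n_radial * n_angular) array, one histogram per sample point.
    Bin counts and radial limits are illustrative, not the paper's settings.
    """
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]        # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)
    dist /= dist.mean() + 1e-9                            # scale normalisation
    angle = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)

    r_edges = np.logspace(np.log10(inner), np.log10(outer), n_radial + 1)
    r_bin = np.digitize(dist, r_edges) - 1                # radial bin index
    a_bin = (angle / (2 * np.pi) * n_angular).astype(int) % n_angular

    descriptors = np.zeros((n, n_radial * n_angular))
    for i in range(n):
        for j in range(n):
            if i == j or not 0 <= r_bin[i, j] < n_radial:
                continue
            descriptors[i, r_bin[i, j] * n_angular + a_bin[i, j]] += 1
    # L1-normalise so histograms are comparable across glyphs of different sizes
    descriptors /= descriptors.sum(axis=1, keepdims=True) + 1e-9
    return descriptors
```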

    The Mesoamerican Corpus of Formative Period Art and Writing

    This project explores the origins and development of the first writing in the New World by constructing a comprehensive database of Formative period (1500-400 BCE) iconography and a suite of database-driven digital tools. In collaboration with two of the largest repositories of Formative period Mesoamerican art in Mexico, the project integrates the work of archaeologists, art historians, and scientific computing specialists to plan and begin the production of a database, digital assets, and visual search software that permit the visualization of spatial, chronological, and contextual relationships among iconographic and archaeological datasets. These resources will eventually support mobile and web-based applications that allow for the search, comparison, and analysis of a corpus of material that is currently only partially documented. The start-up phase will generate a functional prototype database, a project website, wireframe user interfaces, and a report summarizing project development.

    Transferring Neural Representations for Low-dimensional Indexing of Maya Hieroglyphic Art

    We analyze the performance of deep neural architectures for extracting shape representations of binary images and for generating low-dimensional representations of them. In particular, we focus on indexing binary images exhibiting compounds of Maya hieroglyphic signs, referred to as glyph-blocks, which constitute a very challenging dataset of artworks given their visual complexity and large stylistic variety. More precisely, we demonstrate empirically that intermediate outputs of convolutional neural networks can be used as representations for complex shapes, even when their parameters are trained on gray-scale images, and that these representations can be more robust than traditional handcrafted features. We also show that it is possible to compress such representations down to only three dimensions without harming much of their discriminative structure, such that effective visualizations of Maya hieroglyphs can be rendered for subsequent epigraphic analysis.
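
    As a rough illustration of the pipeline described above, the sketch below pools intermediate convolutional activations of a pretrained network into one descriptor per glyph image and then projects the descriptors to three dimensions. The choice of VGG-16 with ImageNet weights, global average pooling, and PCA as the reduction step are assumptions made for illustration, not necessarily the configuration evaluated in the paper.

```python
import torch
import torchvision.transforms as T
from torchvision.models import vgg16, VGG16_Weights
from sklearn.decomposition import PCA

# Pretrained backbone used purely as a fixed feature extractor
# (VGG-16 is an illustrative choice, not necessarily the paper's network).
backbone = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()

preprocess = T.Compose([
    T.Grayscale(num_output_channels=3),   # replicate the binary image to 3 channels
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def glyph_descriptors(pil_images):
    """One pooled vector of intermediate conv activations per glyph image."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    feature_maps = backbone(batch)                # (N, 512, 7, 7) for 224x224 input
    return feature_maps.mean(dim=(2, 3)).numpy()  # global average pooling

def compress_to_3d(descriptors):
    """Project descriptors to 3-D for visualisation (PCA as a simple stand-in)."""
    return PCA(n_components=3).fit_transform(descriptors)
```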

    Multimedia Analysis and Access of Ancient Maya Epigraphy

    This article presents an integrated framework for multimedia access and analysis of ancient Maya epigraphic resources, developed as an interdisciplinary effort involving epigraphers (scholars who decipher ancient inscriptions) and computer scientists. Our work includes several contributions: a definition of consistent conventions to generate high-quality representations of Maya hieroglyphs from the three most valuable ancient codices, which currently reside in European museums and institutions; a digital repository system for glyph annotation and management; and automatic glyph retrieval and classification methods. We study the combination of statistical Maya language models and shape representation within a hieroglyph retrieval system, the impact of applying language models extracted from different hieroglyphic resources on various data types, and the effect of shape representation choices on glyph classification. A novel Maya hieroglyph dataset is provided, which can be used for shape analysis benchmarks and also to study the ancient Maya writing system.
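
    The sketch below shows one simple form such a combination could take: an add-one-smoothed bigram model over glyph codes re-ranks the candidates returned by a shape-based retrieval step. The glyph-code representation, the smoothing, and the linear score combination are hypothetical choices for illustration, not the article's actual model.

```python
import math
from collections import Counter

def train_bigram_lm(glyph_sequences, alpha=1.0):
    """Add-alpha smoothed bigram model over glyph-code sequences
    (e.g. catalog numbers read in block order). Illustrative only."""
    unigrams, bigrams = Counter(), Counter()
    for seq in glyph_sequences:
        unigrams.update(seq)
        bigrams.update(zip(seq, seq[1:]))
    vocab_size = len(unigrams)

    def log_prob(prev, cur):
        return math.log((bigrams[(prev, cur)] + alpha) /
                        (unigrams[prev] + alpha * vocab_size))
    return log_prob

def rerank(candidates, prev_glyph, log_prob, weight=0.5):
    """candidates: list of (glyph_code, shape_score), higher score = more similar.
    Shape scores should be normalised to a range comparable with log-probabilities
    before combining; a simple weighted sum is used here for illustration."""
    scored = [(code, (1 - weight) * shape_score + weight * log_prob(prev_glyph, code))
              for code, shape_score in candidates]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```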

    Visual Analysis of Maya Glyphs via Crowdsourcing and Deep Learning

    In this dissertation, we study visual analysis methods for complex ancient Maya writings. The unit sign of a Maya text is called a glyph, and may have either semantic or syllabic significance. There are over 800 identified glyph categories, and over 1400 variations across these categories. To enable fast manipulation of data by scholars in the Humanities, it is desirable to have automatic visual analysis tools such as glyph categorization, localization, and visualization. Analysis and recognition of glyphs are challenging problems. The same patterns may be observed in different signs but with different compositions. The inter-class variance can thus be significantly low. Conversely, the intra-class variance can be high, as the visual variants within the same semantic category may differ to a large extent except for some patterns specific to the category. Another related challenge of Maya writings is the lack of a large dataset for studying the glyph patterns. Consequently, we study local shape representations, both knowledge-driven and data-driven, over a set of frequent syllabic glyphs as well as other binary shapes, i.e. sketches. This comparative study indicates that a large data corpus and a deep network architecture are needed to learn data-driven representations that can capture the complex compositions of local patterns. To build a large glyph dataset in a short period of time, we study a crowdsourcing approach as an alternative to the time-consuming data preparation of experts. Specifically, we work on individual glyph segmentation out of glyph-blocks from the three remaining codices (i.e. folded bark pages painted with a brush). Through gradual steps in our crowdsourcing approach, we observe that providing supervision and careful task design are key aspects for non-experts to generate high-quality annotations. This way, we obtain a large dataset (over 9000) of individual Maya glyphs. We analyze this crowdsourced glyph dataset with both knowledge-driven and data-driven visual representations. First, we evaluate two competitive knowledge-driven representations, namely the Histogram of Orientation Shape Context (HOOSC) and the Histogram of Oriented Gradients. Secondly, thanks to the large size of the crowdsourced dataset, we study visual representation learning with deep Convolutional Neural Networks. We adopt three data-driven approaches: assessing representations from pretrained networks, fine-tuning the last convolutional block of a pretrained network, and training a network from scratch. Finally, we investigate different glyph visualization tasks based on the studied representations. First, we explore the visual structure of several glyph corpora by applying a non-linear dimensionality reduction method, namely t-distributed Stochastic Neighbor Embedding (t-SNE). Secondly, we propose a way to inspect the discriminative parts of individual glyphs according to the trained deep networks. For this purpose, we use the Gradient-weighted Class Activation Mapping (Grad-CAM) method and highlight the network activations as a heatmap visualization over an input image. We assess whether the highlighted parts correspond to distinguishing parts of glyphs in a perceptual crowdsourcing study. Overall, this thesis presents a promising crowdsourcing approach, competitive data-driven visual representations, and interpretable visualization methods that can be applied to explore various other Digital Humanities datasets.
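
    Of the visualization steps mentioned above, the t-SNE exploration is the simplest to sketch. The snippet below embeds a matrix of glyph descriptors (e.g. pooled CNN features or HOOSC histograms) into two dimensions and plots the result coloured by category; the perplexity value, the two-dimensional target, and the plotting details are assumptions for illustration rather than the thesis settings.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_glyph_embedding(descriptors, labels, perplexity=30, random_state=0):
    """Embed glyph descriptors in 2-D with t-SNE and colour points by category.

    descriptors: (N, D) array of per-glyph feature vectors.
    labels: length-N sequence of integer category indices used for colouring.
    """
    embedding = TSNE(n_components=2, perplexity=perplexity, init="pca",
                     random_state=random_state).fit_transform(descriptors)
    plt.figure(figsize=(8, 8))
    plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=8, cmap="tab20")
    plt.title("t-SNE embedding of Maya glyph descriptors")
    plt.show()
```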

    Maya Codical Glyph Segmentation: A Crowdsourcing Approach

    This paper focuses on the crowd-annotation of an ancient Maya glyph dataset derived from the three ancient codices that have survived to date. More precisely, non-expert annotators are asked to segment glyph-blocks into their constituent glyph entities. As a means of supervision, available glyph variants are provided to the annotators during the crowdsourcing task. Compared to object recognition in natural images or handwriting transcription tasks, designing an engaging task and dealing with crowd behavior is challenging in our case. This challenge originates from the inherent complexity of Maya writing and an incomplete understanding of the signs and semantics in the existing catalogs. We elaborate on the evolution of the crowdsourcing task design, and discuss the choices for providing supervision during the task. We analyze the distributions of similarity and task difficulty scores, and the segmentation performance of the crowd. Thanks to this process, a unique dataset of over 9000 Maya glyphs from 291 categories, individually segmented from the three codices, was created and will be made publicly available. This dataset lends itself to automatic glyph classification tasks. We provide baseline methods for glyph classification using traditional shape descriptors and convolutional neural networks.
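
    To give a sense of what such a baseline might look like, the sketch below computes HOG descriptors for segmented glyph images and reports the cross-validated accuracy of a linear SVM. The image size, HOG parameters, and classifier settings are assumed values, not the paper's baseline configuration.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def hog_descriptor(glyph, size=(128, 128)):
    """HOG descriptor for one segmented binary glyph image
    (parameters are assumed, not the paper's baseline settings)."""
    img = resize(glyph.astype(float), size, anti_aliasing=True)
    return hog(img, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def baseline_accuracy(glyph_images, labels, folds=5):
    """Cross-validated accuracy of a linear SVM over HOG descriptors."""
    features = np.array([hog_descriptor(g) for g in glyph_images])
    classifier = LinearSVC(C=1.0, max_iter=5000)
    return cross_val_score(classifier, features, labels, cv=folds).mean()
```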

    Evaluating Shape Descriptors for Detection of Maya Hieroglyphs

    In this work we address the problem of detecting instances of complex shapes in binary images. We investigate the effects of combining DoG and Harris-Laplace interest points with SIFT and HOOSC descriptors. We also propose the use of a retrieval-based detection framework suitable for dealing with images that are sparsely annotated, and in which the objects of interest are very small in proportion to the total size of the image. Our initial results suggest that corner structures are suitable points at which to compute local descriptors for binary images, although better methods are needed to estimate their appropriate characteristic scale when used on binary images.
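
    A minimal retrieval-style sketch of the idea follows, using OpenCV's DoG-based SIFT detector and descriptor as a stand-in (Harris-Laplace and HOOSC are not part of core OpenCV): keypoints of a query glyph are matched against keypoints of a large, sparsely annotated page, and the matched page locations serve as candidate detection evidence. The ratio-test threshold and the use of SIFT throughout are illustrative choices, not the paper's exact setup.

```python
import cv2
import numpy as np

# DoG interest points with SIFT descriptors as a stand-in for the
# detector/descriptor combinations evaluated in the paper.
sift = cv2.SIFT_create()

def match_query_to_page(query_glyph, page, ratio=0.75):
    """Match a query glyph against a large binary page image.

    Both inputs are expected as uint8 grayscale images (0/255 for binary data).
    Returns the (x, y) page coordinates of ratio-test matches, which can be
    clustered or scored downstream to propose candidate detections.
    """
    query_kp, query_desc = sift.detectAndCompute(query_glyph, None)
    page_kp, page_desc = sift.detectAndCompute(page, None)
    if query_desc is None or page_desc is None:
        return np.empty((0, 2))
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(query_desc, page_desc, k=2)
    good = [m[0] for m in knn
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return np.array([page_kp[m.trainIdx].pt for m in good])
```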