
    Analyzing Ancient Maya Glyph Collections with Contextual Shape Descriptors

    This paper presents an original approach for shape-based analysis of ancient Maya hieroglyphs based on an interdisciplinary collaboration between computer vision and archaeology. Our work is guided by the realistic needs of archaeologists and scholars who critically need support for search and retrieval tasks in large Maya imagery collections. Our paper has three main contributions. First, we introduce an overview of our interdisciplinary approach towards the improvement of the documentation, analysis, and preservation of Maya pictographic data. Second, we present an objective evaluation of the performance of two state-of-the-art shape-based contextual descriptors (Shape Context and Generalized Shape Context) in retrieval tasks, using two datasets of syllabic Maya glyphs. Based on the identification of their limitations, we propose a new shape descriptor named Histogram of Orientation Shape Context (HOOSC), which is more robust and better suited to the description of Maya hieroglyphs. Third, we present what to our knowledge constitutes the first automatic analysis of visual variability of syllabic glyphs across historical periods and geographic regions of the ancient Maya world via the HOOSC descriptor. Overall, our approach is promising: it improves performance on the retrieval task, has been successfully validated from an epigraphic viewpoint, and has the potential to offer both novel insights in archaeology and practical solutions for scholars' real daily needs.
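The Shape Context descriptor evaluated above can be sketched in a few lines: for a reference point sampled from a glyph's contour, it histograms the relative positions of all other sampled points in log-polar bins. The sketch below is illustrative only; the bin counts and radial limits are assumptions rather than the paper's settings, and HOOSC further replaces raw point counts with local orientation histograms.

```python
import numpy as np

def shape_context(points, index, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
    """Log-polar histogram of the positions of all other points
    relative to points[index] (basic Shape Context, illustrative)."""
    points = np.asarray(points, dtype=float)
    ref = points[index]
    others = np.delete(points, index, axis=0)
    diff = others - ref
    dists = np.hypot(diff[:, 0], diff[:, 1])
    # Normalize by the mean distance for scale invariance
    dists = dists / dists.mean()
    angles = np.arctan2(diff[:, 1], diff[:, 0]) % (2 * np.pi)
    # Log-spaced radial bin edges, uniform angular bins
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
    r_bin = np.digitize(dists, r_edges) - 1
    t_bin = (angles / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    for rb, tb in zip(r_bin, t_bin):
        if 0 <= rb < n_r:
            hist[rb, tb] += 1
    # L1-normalize so descriptors of shapes with different point counts compare
    return hist.ravel() / max(hist.sum(), 1)
```

In a retrieval setup, one such histogram is computed per sampled contour point, and two glyphs are compared through the cost of matching their point-wise descriptors.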

    Visual Analysis of Maya Glyphs via Crowdsourcing and Deep Learning

    In this dissertation, we study visual analysis methods for complex ancient Maya writings. The unit sign of a Maya text is called a glyph, and may have either semantic or syllabic significance. There are over 800 identified glyph categories, and over 1400 variations across these categories. To enable fast manipulation of data by scholars in the Humanities, it is desirable to have automatic visual analysis tools for tasks such as glyph categorization, localization, and visualization. Analysis and recognition of glyphs are challenging problems. The same patterns may be observed in different signs but in different compositions. The inter-class variance can thus be significantly low. Conversely, the intra-class variance can be high, as the visual variants within the same semantic category may differ to a large extent except for some patterns specific to the category. Another related challenge of Maya writings is the lack of a large dataset for studying glyph patterns. Consequently, we study local shape representations, both knowledge-driven and data-driven, over a set of frequent syllabic glyphs as well as other binary shapes, i.e. sketches. This comparative study indicates that a large data corpus and a deep network architecture are needed to learn data-driven representations that can capture the complex compositions of local patterns. To build a large glyph dataset in a short period of time, we study a crowdsourcing approach as an alternative to time-consuming data preparation by experts. Specifically, we work on individual glyph segmentation out of glyph-blocks from the three remaining codices (i.e. folded bark pages painted with a brush). Through gradual steps in our crowdsourcing approach, we observe that providing supervision and careful task design are key for non-experts to generate high-quality annotations. In this way, we obtain a large dataset (over 9,000) of individual Maya glyphs.
We analyze this crowdsourced glyph dataset with both knowledge-driven and data-driven visual representations. First, we evaluate two competitive knowledge-driven representations, namely Histogram of Orientation Shape Context and Histogram of Oriented Gradients. Second, thanks to the large size of the crowdsourced dataset, we study visual representation learning with deep Convolutional Neural Networks. We adopt three data-driven approaches: assessing representations from pretrained networks, fine-tuning the last convolutional block of a pretrained network, and training a network from scratch. Finally, we investigate different glyph visualization tasks based on the studied representations. First, we explore the visual structure of several glyph corpora by applying a non-linear dimensionality reduction method, namely t-distributed Stochastic Neighbor Embedding. Second, we propose a way to inspect the discriminative parts of individual glyphs according to the trained deep networks. For this purpose, we use the Gradient-weighted Class Activation Mapping method and highlight the network activations as a heatmap visualization over an input image. We assess whether the highlighted parts correspond to distinguishing parts of glyphs in a perceptual crowdsourcing study. Overall, this thesis presents a promising crowdsourcing approach, competitive data-driven visual representations, and interpretable visualization methods that can be applied to explore various other Digital Humanities datasets.
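Of the two knowledge-driven representations evaluated, the Histogram of Oriented Gradients is the simpler to illustrate. The following is a minimal sketch; the cell size, bin count, and the omission of block normalization are simplifications relative to the standard HOG pipeline, not the dissertation's configuration.

```python
import numpy as np

def hog_descriptor(image, cell=8, n_bins=9):
    """Minimal Histogram of Oriented Gradients: per-cell histograms of
    unsigned gradient orientation, weighted by gradient magnitude.
    No block normalization (a simplification of standard HOG)."""
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)               # image gradients (rows, cols)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    cells_y, cells_x = h // cell, w // cell
    bin_idx = (ang / (180.0 / n_bins)).astype(int) % n_bins
    desc = np.zeros((cells_y, cells_x, n_bins))
    for cy in range(cells_y):
        for cx in range(cells_x):
            ys = slice(cy * cell, (cy + 1) * cell)
            xs = slice(cx * cell, (cx + 1) * cell)
            for b, m in zip(bin_idx[ys, xs].ravel(), mag[ys, xs].ravel()):
                desc[cy, cx, b] += m        # magnitude-weighted vote
    # L2-normalize each cell histogram
    norms = np.linalg.norm(desc, axis=2, keepdims=True)
    return desc / np.maximum(norms, 1e-12)
```

For binary glyph images, such orientation histograms describe local stroke directions; the data-driven CNN representations studied in the thesis learn analogous but more complex local patterns from the data itself.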

    Le nuage de point intelligent (The Smart Point Cloud)

    Discrete spatial datasets known as point clouds often lay the groundwork for decision-making applications. For example, such data can serve as a reference for autonomous cars and robot navigation, as a layer for floor-plan creation and building construction, or as a digital asset for environment modelling and incident prediction. Applications are numerous, and potentially increasing if we consider point clouds as digital reality assets. Yet this expansion faces technical limitations, mainly from the lack of semantic information within point ensembles. Connecting knowledge sources is still a very manual and time-consuming process that suffers from error-prone human interpretation. This highlights a strong need for domain-related data analysis to create coherent and structured information. This thesis addresses automation problems in point cloud processing to create intelligent environments, i.e. virtual copies that can be used and integrated in fully autonomous reasoning services. We tackle point cloud questions associated with knowledge extraction (particularly segmentation and classification), structuration, visualisation, and interaction with cognitive decision systems. We propose to connect both point cloud properties and formalized knowledge to rapidly extract pertinent information using domain-centered graphs. The dissertation delivers the concept of a Smart Point Cloud (SPC) Infrastructure, which serves as an interoperable and modular architecture for unified processing. It permits easy integration into existing workflows and multi-domain specialization through device knowledge, analytic knowledge, or domain knowledge. Concepts, algorithms, code, and materials are provided to replicate the findings and extend current applications.
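The abstract does not specify a segmentation algorithm; the toy sketch below only illustrates the general idea of the segmentation step, grouping a point ensemble into spatially coherent segments that a semantic layer (such as the SPC's domain-centered graphs) could then label. The radius threshold and the O(n²) neighbor search are illustrative simplifications.

```python
import numpy as np
from collections import deque

def segment_point_cloud(points, radius=0.5):
    """Naive connected-component segmentation: points closer than
    `radius` end up in the same segment (O(n^2) illustrative sketch;
    real pipelines use spatial indexing and richer criteria)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    labels = np.full(n, -1, dtype=int)   # -1 means "not yet assigned"
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        queue = deque([seed])
        labels[seed] = current
        while queue:                      # breadth-first region growing
            i = queue.popleft()
            d = np.linalg.norm(pts - pts[i], axis=1)
            for j in np.nonzero((d < radius) & (labels == -1))[0]:
                labels[j] = current
                queue.append(j)
        current += 1
    return labels
```

Each resulting segment would become a node in a knowledge graph, to which device, analytic, or domain knowledge can attach semantics.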

    A tale of three plazas: the development and use of public spaces in a classic Maya ritual and residential complex at Xultun, Guatemala

    In this dissertation I examine the social functions of neighborhood plazas by tracing the development of a Classic Maya (AD 200-900) ritual and residential complex at the ancient city of Xultun, Guatemala. In ancient as in modern times, public open spaces were essential to urban life; yet their functions and meanings could vary within and among societies. Using archaeological and architectural data from three plazas and an adjacent residential complex, I identify a shift towards increased public spaces in the Late Classic period, and link this to the rising importance of displays of power for Xultun's growing population. Located on the northern periphery of Xultun, Los Aves, the focus of the study, is an architectural group consisting of a central residential area with three adjacent plazas to the east, west and northwest. During the Early Classic (AD 250-600) period, only one of the plazas had been built and the layout of the complex was balanced between public and private space. Residents carried out domestic activities within six modest patio groups and used a round platform in the western plaza, Plaza Colibrí, for group rituals. The construction of two new plazas during the Late Classic period (AD 600-900) dramatically changed the composition of Los Aves, tripling the amount of public space. Dominating the neighborhood was a new, larger plaza, Plaza Tecolote, with monumental, ritual architecture that opened to the south towards the city center, easily accessible to those outside of Los Aves. An increase in population at this time necessitated the construction of more domestic structures within the house groups, reducing the amount of proximate patio spaces. Such activities now took place in a new, smaller plaza, Plaza Loro, located in the northwest of the complex, that contained broad steps for seating. In the Early Classic period, Los Aves contained equal parts public and private space, while in the Late Classic period public plazas dominated.
I argue that as populations grew, public displays of power became increasingly important, and new, larger plazas were built to accommodate these events. This development broadens our understanding of Classic Maya urbanism

    Shape-based detection of Maya hieroglyphs using weighted bag representations

    This work addresses the problem of detecting individual visual patterns in binary images, and more precisely, individual syllabic signs in large inscriptions of Maya hieroglyphs with high levels of visual complexity. The data we use corresponds to a corpus that is of great interest to archaeologists, and it poses a difficult challenge in terms of visual complexity. We introduce a new weighting function, which helps construct more robust bag-of-visual-words representations for detection purposes. This weighting function depends on the ratio of intersection of the local descriptors, and their respective distances to the center of the bounding box under evaluation. As shown by our results, the use of the proposed weighted bag representation improves the detection rate with respect to a traditional bag construction. We also validate the use of an ad hoc methodology that approaches the detection scenario through a retrieval setup. Our results show that this approach achieves better detection performance than the traditional sliding-window approach when only a small amount of data is available for training, as is the case for Maya hieroglyphs. To the best of our knowledge, our work is among the first contributions to address the problem of shape detection in binary images, as previous attempts to detect shapes rely on intensity images.
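The abstract describes the weighting function only qualitatively. The sketch below is an assumed form in which each quantized local descriptor's vote decays with its normalized distance to the bounding-box center; the Gaussian falloff and `sigma` are illustrative assumptions, and the paper's actual function also incorporates the descriptor/box intersection ratio.

```python
import numpy as np

def weighted_bow(descriptor_centers, descriptor_words, bbox, vocab_size, sigma=0.5):
    """Bag-of-visual-words histogram for a candidate bounding box, with
    each quantized descriptor's vote weighted by its distance to the box
    center (assumed Gaussian falloff; illustrative, not the paper's
    exact weighting)."""
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_diag = np.hypot(x1 - x0, y1 - y0) / 2.0
    hist = np.zeros(vocab_size)
    for (px, py), w in zip(descriptor_centers, descriptor_words):
        if not (x0 <= px <= x1 and y0 <= py <= y1):
            continue                       # descriptor outside the candidate box
        # Normalized distance to box center, in [0, 1]
        d = np.hypot(px - cx, py - cy) / half_diag
        hist[w] += np.exp(-(d ** 2) / (2 * sigma ** 2))
    s = hist.sum()
    return hist / s if s > 0 else hist
```

In a retrieval-style detection setup, such a histogram is computed for each candidate box and compared against the histograms of known sign exemplars, so that central, well-contained descriptors dominate the representation.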

    Deity and Divine Agency in the Hebrew Bible: Cognitive Perspectives

    This thesis interrogates the conceptualization of deity and divine agency in the Hebrew Bible, focusing particularly on the problem of the relationship of divine images and representatives to their patron deities. In order to move beyond the tendentiousness of previous scholarship that addresses this problem, I employ an interdisciplinary approach that centers on cognitive linguistics and the cognitive science of religion while also drawing on biblical criticism, archaeology, anthropology, materiality studies, and other disciplines. I begin in Part One with a methodological discussion that describes the approaches taken and interrogates some of the conceptual frameworks that have governed previous scholarship on the question, such as “religion” and the practice of definition. It then moves on to discuss the concepts of agency and personhood, and how contemporary anthropological research on both can help inform our interrogation of the ancient world. Part Two begins the interrogation of the generic concept of deity, demonstrating that such concepts are products of the engagement of our intuitive and reflective reasoning with our cognitive ecologies, and that they build on our everyday conceptualizations of agency and personhood. These dynamics facilitate a view of divine agency as separable and communicable, which is demonstrated to undergird the unique relationships understood to be shared by deities and their divine images. Chapter 4 employs a cognitive linguistic lens to propose semantic bases, domains, and profiles for the generic concept of deity in the Hebrew Bible. Part Three applies the models developed in Chapters 3 and 4 to an interrogation of YHWH as a deity and of YHWH’s divine agents, such as the ark of the covenant, the messenger of YHWH, and the very text of the Torah itself. The Conclusion summarizes findings and discusses implications for further research.

    How Change Happens: A Theory of Philosophy of History, Social Change and Cultural Evolution

    It is proposed that the ultimate cause of much historical, social and cultural change is the gradual accumulation of human knowledge of the environment. Human beings use the materials in their environment to meet their needs, and increased human knowledge of the environment enables human needs to be met in a more efficient manner. Human needs direct human research into particular areas, and this provides a direction for historical, social and cultural development. The human environment has a particular structure and human beings have a particular place in it, so that human knowledge of the environment is acquired in a particular order. The simplest knowledge, or the knowledge closest to us, is acquired first, and more complex knowledge, or knowledge further from us, is acquired later. The order of discovery determines the course of human social and cultural history, as knowledge of new and more efficient means of meeting human needs results in new technology, which in turn results in the development of new social and ideological systems. This means human history, or a major part of it, had to follow a particular course, a course determined by the structure of the human environment. An examination of the structure of the human environment will reveal the particular order in which our discoveries had to be made. Given that a certain level of knowledge will result in a particular type of society, it is possible to ascertain the types of societies that were inevitable in human history. While it is not possible to make predictions about the future course of human history, it is possible to explain and understand why human history has followed a particular path and why it had to follow that particular path.