42 research outputs found

    Segmentation-Free Spotting of Cuneiform using Part-Structured Models

    Cuneiform scripts constitute an immense source of information about ancient history, dating back almost four thousand years. Documents were written by imprinting wedge-shaped impressions into wet clay tablets, and current scholarly practice typically transcribes the resulting markings by hand with ink on paper. This work develops algorithmic methods for cuneiform script, combining feature extraction for cuneiform wedges with prior work on segmentation-free word spotting using part-structured models. We adapt the inkball model used for word spotting to treat wedge features as individual parts arranged in a tree structure. The geometric relationship between query and target is measured by the energy necessary to deform the tree structure. We also introduce an optimization method for wedge feature extraction based on optimally assigning tablet structuring elements to hypothesized wedge models. Finally, we evaluate the method on a real-world dataset and show that it outperforms the state of the art in cuneiform character spotting.
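
    The following is a minimal, illustrative sketch of the kind of deformation energy a part-structured (inkball) model evaluates: the query's wedges form a tree of parts, and a candidate placement in the target is scored by a local appearance cost per part plus a penalty for deviating from the ideal parent-child offsets. The quadratic penalty and all names are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def deformation_energy(placement, appearance_cost, tree_edges, ideal_offsets, spring=1.0):
    """Score one candidate placement of a part-structured query.

    placement       : dict part_id -> (x, y) position in the target image
    appearance_cost : dict part_id -> local mismatch cost at that position
    tree_edges      : list of (parent_id, child_id) pairs defining the part tree
    ideal_offsets   : dict (parent_id, child_id) -> expected child-minus-parent offset
    spring          : weight of the quadratic deformation penalty (assumed)
    """
    energy = sum(appearance_cost[p] for p in placement)
    for parent, child in tree_edges:
        observed = np.asarray(placement[child], float) - np.asarray(placement[parent], float)
        expected = np.asarray(ideal_offsets[(parent, child)], float)
        energy += spring * float(np.sum((observed - expected) ** 2))
    return energy

# Toy example: a three-wedge query placed in a target image.
placement = {"w0": (10, 12), "w1": (25, 13), "w2": (11, 30)}
appearance = {"w0": 0.2, "w1": 0.5, "w2": 0.1}
edges = [("w0", "w1"), ("w0", "w2")]
offsets = {("w0", "w1"): (15, 0), ("w0", "w2"): (0, 17)}
print(deformation_energy(placement, appearance, edges, offsets))
```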

    Analyzing Handwritten and Transcribed Symbols in Disparate Corpora

    Cuneiform tablets are among the oldest textual artifacts, in use for more than three millennia, and are comparable in amount and relevance to texts written in Latin or ancient Greek. These tablets are typically found in the Middle East and were written by imprinting wedge-shaped impressions into wet clay. Motivated by the increased demand for computerized analysis of documents within the Digital Humanities, we develop the foundation for quantitative processing of cuneiform script. Acquiring a cuneiform tablet with a 3D scanner and manually creating line tracings yield two completely different representations of the same type of text source. Each representation is typically processed with its own tool-set, and textual analysis is therefore limited to a certain type of digital representation. To homogenize these data sources, a unifying minimal wedge feature description is introduced. It is extracted by pattern matching and subsequent conflict resolution, since cuneiform is written densely with highly overlapping wedges. Similarity metrics for cuneiform signs based on distinct assumptions are presented: (i) an implicit model represents cuneiform signs using undirected mathematical graphs and measures the similarity of signs with graph kernels; (ii) an explicit model approaches the problem of recognition by an optimal assignment between the wedge configurations of two signs. Further, methods for spotting cuneiform script are developed, combining the feature descriptors for cuneiform wedges with prior work on segmentation-free word spotting using part-structured models. The ink-ball model is adapted by treating wedge feature descriptors as individual parts. The similarity metrics and the adapted spotting model are both evaluated on a real-world dataset, outperforming the state of the art in cuneiform sign similarity and spotting. To prove the applicability of these methods for computational cuneiform analysis, a novel approach is presented for mining frequent constellations of wedges, resulting in spatial n-grams. Furthermore, a method for automated transliteration of tablets is evaluated by employing structured and sequential learning on a dataset of parallel sentences. Finally, the conclusion outlines how the presented methods enable the development of new tools and computational analyses, which are objective and reproducible, for quantitative processing of cuneiform script.
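
    As a rough illustration of the explicit model's optimal-assignment idea, the sketch below matches the wedges of two signs one-to-one with the Hungarian algorithm and converts the matching cost into a similarity score. The feature layout, cost function, and exponential conversion are assumptions, not the thesis's exact definitions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def sign_similarity(wedges_a, wedges_b):
    """Similarity of two cuneiform signs given as arrays of wedge features.

    wedges_a, wedges_b : (n, d) arrays, one feature vector per wedge
                         (e.g. position and orientation - illustrative choice).
    Returns a similarity in (0, 1]; higher means more alike.
    """
    cost = cdist(wedges_a, wedges_b)           # pairwise wedge distances
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
    mean_cost = cost[rows, cols].mean()
    return float(np.exp(-mean_cost))           # assumed conversion to a similarity

# Toy example: two signs with three wedges each (x, y, angle).
a = np.array([[0, 0, 0.0], [10, 0, 1.6], [5, 8, 0.8]])
b = np.array([[1, 0, 0.1], [11, 1, 1.5], [5, 9, 0.9]])
print(sign_similarity(a, b))
```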

    Automatable Annotations – Image Processing and Machine Learning for Script in 3D and 2D with GigaMesh

    Libraries, archives and museums hold vast numbers of objects with script in 3D, such as inscriptions, coins, and seals, which provide valuable insights into the history of humanity. Cuneiform tablets in particular provide access to information spanning more than three millennia BC. Since these clay tablets require extensive examination for transcription, we developed the modular GigaMesh software framework to provide high-contrast visualization of tablets captured with 3D acquisition techniques. This framework was extended to provide digital drawings exported as XML-based Scalable Vector Graphics (SVG), which are the fundamental input of our approach, inspired by machine-learning techniques based on the principle of word spotting. This results in a versatile symbol-spotting algorithm to retrieve graphical elements from drawings, enabling automated annotations. Through data homogenization, we achieve compatibility with digitally born manual drawings as well as with retro-digitized drawings. The latter are found in large Open Access databases, e.g. those provided by the Cuneiform Digital Library Initiative (CDLI). Ongoing and future work concerns the adaptation of filtering and graphical query techniques for two-dimensional raster images widely used within Digital Humanities research.
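
    Since the SVG drawings are the fundamental input of the spotting approach, a minimal sketch of reading such a drawing could look as follows; the file name is a placeholder, and real exports may organize strokes in layers or use other SVG primitives.

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def svg_stroke_paths(svg_file):
    """Collect the raw path data ('d' attributes) from an SVG line drawing.

    Each <path> typically corresponds to one drawn stroke; downstream code
    would sample the splines into point sequences for symbol spotting.
    """
    root = ET.parse(svg_file).getroot()
    return [p.get("d") for p in root.iter(SVG_NS + "path") if p.get("d")]

# Usage with a hypothetical drawing exported from GigaMesh or retro-digitized:
# for d in svg_stroke_paths("tablet_drawing.svg"):
#     print(d[:60])
```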

    Cuneiform Character Similarity Using Graph Representations

    Motivated by the increased demand for computerized analysis of documents within the Digital Humanities, we are developing algorithms for cuneiform tablets, which contain the oldest handwritten script, used for more than three millennia. These tablets are typically found in the Middle East and contain a total amount of written words comparable to all documents in Latin or ancient Greek. In previous work we have shown how to extract vector drawings from 3D models, similar to those manually drawn over digital photographs. Both types of drawings share the Scalable Vector Graphics (SVG) format, representing the cuneiform characters as splines. These splines are transformed into a graph representation, which is then extended by triangulation. Based on graph kernel methods, we present a similarity metric for cuneiform characters, which have more degrees of freedom than handwriting with ink on paper. An evaluation of the precision and recall of our proposed approach is shown and compared to well-known methods for processing handwriting. Finally, a summary and an outlook are given.
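
    As a hedged illustration of a graph-kernel similarity (not necessarily the kernel used in this paper), the sketch below compares two character graphs via a shortest-path-length histogram kernel; the triangulated graphs built from the splines are assumed to be given.

```python
import networkx as nx
import numpy as np

def sp_histogram(graph, max_len=10):
    """Histogram of shortest-path lengths, a simple graph signature."""
    hist = np.zeros(max_len + 1)
    for _, lengths in nx.all_pairs_shortest_path_length(graph):
        for d in lengths.values():
            if 0 < d <= max_len:
                hist[d] += 1
    return hist

def sp_kernel(g1, g2):
    """Shortest-path kernel value (normalized dot product of histograms)."""
    h1, h2 = sp_histogram(g1), sp_histogram(g2)
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2 / denom) if denom else 0.0

# Toy example: two small character graphs.
g1 = nx.path_graph(5)
g2 = nx.cycle_graph(5)
print(sp_kernel(g1, g1), sp_kernel(g1, g2))
```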

    Computerized Hittite Cuneiform Sign Recognition and Knowledge-Based System Application Examples

    The Hittites lived in Anatolia more than 4000 years ago. The Hittite language is one of the oldest members of the Indo-European language family, and may be the only one of that age that is still readable and whose grammar rules are known. The Hittites had a cuneiform script of their own, written on soft clay tablets. The tablets were made durable and permanent by baking them after they had been written with simple tools, which is why they could endure for thousands of years buried in the ground. The study of the Hittite language has so far been carried out manually on the cuneiform tablets. Unfortunately, field scientists have read and translated only a relatively small number of the unearthed tablets. Many more tablets, above and below ground in Anatolia, are still waiting to be read and translated into various languages. Using computer-aided techniques to read and translate the cuneiform signs would be a significant contribution not only to Anatolian and Turkish history but also to human history. In this paper, recognition of Hittite cuneiform signs using computer-based image-processing techniques is reported. Uses of data-mining applications are also included in the paper. Most importantly, the authors also demonstrate the feasibility of an expert system for the Hittite cuneiform script.
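
    The abstract does not specify the image-processing pipeline; as one hedged illustration of a classical baseline for locating signs in a tablet image, the sketch below applies normalized cross-correlation template matching with OpenCV. File names and the threshold are placeholders, not the authors' method.

```python
import cv2
import numpy as np

# Hypothetical inputs: a photographed tablet image and a sign template.
tablet = cv2.imread("tablet.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("sign_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation of the template over the whole image.
response = cv2.matchTemplate(tablet, template, cv2.TM_CCOEFF_NORMED)

# Keep locations whose correlation exceeds a (tunable) threshold.
ys, xs = np.where(response >= 0.7)
h, w = template.shape
detections = [(int(x), int(y), w, h) for x, y in zip(xs, ys)]
print(f"{len(detections)} candidate occurrences of the sign")
```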

    Surface analysis and visualization from multi-light image collections

    Multi-Light Image Collections (MLICs) are stacks of photos of a scene acquired with a fixed viewpoint and varying surface illumination, providing large amounts of visual and geometric information. Over the last decades, a wide variety of methods have been devised to extract information from MLICs, and their use has been demonstrated in different application domains to support daily work. In this thesis, we present methods that leverage MLICs for surface analysis and visualization. First, we provide background information: acquisition setups, light calibration, and application areas where MLICs have been successfully used to support daily analysis work. Next, we discuss the use of MLICs for surface visualization and analysis and the available tools that support this analysis. Here, we cover methods that strive to support the direct exploration of the captured MLIC, methods that generate relightable models from a MLIC, non-photorealistic visualization methods that rely on MLICs, and methods that estimate normal maps from MLICs, and we point out visualization tools used for MLIC analysis. In chapter 3 we propose novel benchmark datasets (RealRTI, SynthRTI and SynthPS) that can be used to evaluate algorithms that rely on MLICs, and discuss available benchmarks for the validation of photometric algorithms that can also be used to validate other MLIC-based algorithms. In chapter 4, we use SynthPS to evaluate the performance of different photometric stereo algorithms for cultural heritage applications; RealRTI and SynthRTI are used to evaluate the performance of (Neural)RTI methods. Then, in chapter 5, we present a neural-network-based RTI method, NeuralRTI, a framework for pixel-based encoding and relighting of RTI data. Using a simple autoencoder architecture, we show that it is possible to obtain a highly compressed representation that better preserves the original information and provides increased quality of virtual images relit from novel directions, particularly in the case of challenging glossy materials. Finally, in chapter 6, we present a method for the detection of cracks on the surface of paintings from multi-light image acquisitions, which can also be used on single images, and conclude our presentation.
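
    One of the building blocks evaluated in the thesis is photometric stereo, which recovers per-pixel surface normals from an MLIC with known light directions. Below is a minimal Lambertian least-squares sketch for illustration; the thesis benchmarks more sophisticated algorithms than this.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Per-pixel normals from a Multi-Light Image Collection (Lambertian model).

    images     : (k, h, w) array, one grayscale image per light
    light_dirs : (k, 3) array of unit light direction vectors
    Returns an (h, w, 3) array of unit surface normals.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                              # (k, h*w) intensities
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # (3, h*w) albedo-scaled normals
    norm = np.linalg.norm(G, axis=0, keepdims=True)
    N = np.where(norm > 0, G / np.maximum(norm, 1e-12), 0.0)
    return N.T.reshape(h, w, 3)

# Toy example: a flat patch (normal [0, 0, 1]) lit from three directions.
L = np.array([[0, 0, 1.0], [0.5, 0, 0.866], [0, 0.5, 0.866]])
imgs = np.stack([np.full((4, 4), l[2]) for l in L])        # Lambertian: I = n . l
print(photometric_stereo(imgs, L)[0, 0])                   # approximately [0, 0, 1]
```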

    3D high resolution techniques applied on small and medium size objects: from the analysis of the process towards quality assessment

    The need for metric data acquisition is an issue strictly related to the human capability of describing the world with rigorous and repeatable methods. From the invention of photography to the development of advanced computers, metric data acquisition has undergone rapid change, and nowadays there is a close connection between metric data acquisition and image processing, Computer Vision and Artificial Intelligence. The sensor devices used for 3D model generation are varied and characterized by different functioning principles. In this work, passive and active optical sensors are treated, focusing specifically on close-range photogrammetry, Time of Flight (ToF) sensors and Structured-light scanners (SLS). Starting from the functioning principles of the techniques and showing some issues related to them, the work highlights their potential, analyzing the fundamental and most critical steps of the process leading to the quality assessment of the data. Central themes are instrument calibration, acquisition planning and the interpretation of the final results. The capability of the acquisition techniques to satisfy unconventional requirements in the field of Cultural Heritage is also shown. The thesis starts with an overview of the history and developments of 3D metric data acquisition. Chapter 1 treats the human visual system and presents a complete overview of 3D sensing devices. Chapter 2 sets out the basic principles of close-range photogrammetry, considering the functioning principles of digital cameras, calibration issues, and the process leading to 3D mesh reconstruction. The case of multi-image acquisition is analyzed, and the quality assessment of the photogrammetric process is examined in depth through a case study. Chapter 3 is devoted to the range-based acquisition techniques, namely ToF laser scanners and SLSs. Lastly, Chapter 4 focuses on unconventional applications of the mentioned high-resolution acquisition techniques, showing some example case studies in the field of Cultural Heritage.
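
    Instrument calibration is named as a central theme; as a hedged example of what camera calibration looks like in practice (not the thesis's specific procedure), the sketch below estimates intrinsics and distortion from chessboard photographs with OpenCV. The image folder, board size and square size are assumptions.

```python
import glob
import cv2
import numpy as np

# Assumed setup: photos of a 9x6 inner-corner chessboard with 25 mm squares.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0

obj_points, img_points, size = [], [], None
for path in glob.glob("calibration/*.jpg"):            # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate camera matrix, distortion coefficients and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("reprojection RMS:", rms)
print("intrinsics:\n", K)
```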

    Reconocimiento automático de un censo histórico impreso sin recursos lingüísticos (Automatic recognition of a printed historical census without linguistic resources)

    Automatic recognition of typeset historical documents is currently a solved problem for many collections of data. However, systems for automatic recognition of typeset historical documents still need to address several issues inherent to working with this kind of document. Degradation of the paper or smudges can increase the difficulty of correctly recognizing characters, problems that can be alleviated by using linguistic resources to train good language models which decrease the character error rate. Nonetheless, there are many collections, such as the one presented in this work, composed of tables that contain mainly numbers and proper names, for which a language model is neither available nor useful. This work illustrates that automatic recognition can be done successfully for a collection of documents without using any linguistic resources. It covers the information extraction and the targeted OCR process, specially designed for the automatic recognition of a Spanish census from the 19th century, registered in printed documents. Many of the problems related to historical documents are overcome by using a combination of classical computer vision techniques and deep learning. Errors, such as misrecognized characters, are detected and corrected thanks to the redundant information that the census contains. Given the importance of this Spanish census for conducting demographic studies, this work goes a step further and introduces a demonstrator model to facilitate research on this corpus by indexing the data. This work has been partially supported by the BBVA Foundation, as a collaboration between the PRHLT team in charge of the HisClima project and the ESPAREL project.
    Anitei, D. (2021). Reconocimiento automático de un censo histórico impreso sin recursos lingüísticos. Universitat Politècnica de València. http://hdl.handle.net/10251/172694 (TFG)
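
    The abstract notes that recognition errors are detected and corrected using redundant information in the census. One hedged illustration of such a redundancy check (the actual tables and correction rules are not described here) is verifying that the recognized entries of a row add up to the row's recognized total:

```python
def check_row(counts, recognized_total):
    """Flag a table row whose recognized entries do not add up to its total.

    counts           : list of ints read by the OCR for one census row
    recognized_total : int read from the row's total column
    Returns (is_consistent, computed_total) so downstream code can decide
    whether to re-run recognition or apply a correction.
    """
    computed = sum(counts)
    return computed == recognized_total, computed

# Toy example: a '3' misread as an '8' breaks the row sum.
ok, total = check_row([12, 8, 5], 20)
print(ok, total)   # False, 25 -> the row is flagged for correction
```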

    Development of Mining Sector Applications for Emerging Remote Sensing and Deep Learning Technologies

    This thesis uses neural networks and deep learning to address practical, real-world problems in the mining sector. The main focus is on developing novel applications in the area of object detection from remotely sensed data. This area has many potential mining applications and is an important part of moving towards data-driven strategic decision making across the mining sector. The scientific contributions of this research are twofold: firstly, each of the three case studies demonstrates new applications which couple remote sensing and neural-network-based technologies for improved data-driven decision making. Secondly, the thesis presents a framework to guide the implementation of these technologies in the mining sector, providing a guide for researchers and professionals undertaking further studies of this type. The first case study builds a fully connected neural network method to locate supporting rock bolts from 3D laser scan data. This method combines input features from the remote sensing and mobile robotics research communities, generating accuracy scores up to 22% higher than those found using either feature set in isolation. The neural network approach is also compared to the widely used random forest classifier and is shown to outperform this classifier on the test datasets. Additionally, the algorithms' performance is enhanced by adding a confusion class to the training data and by grouping the output predictions using density-based spatial clustering. The method is tested on two datasets, gathered using different laser scanners, in different types of underground mines which have different rock bolting patterns. In both cases the method is found to be highly capable of detecting the rock bolts, with recall scores of 0.87-0.96. The second case study investigates modern deep learning for LiDAR data. Here, multiple transfer learning strategies and LiDAR data representations are examined for the task of identifying historic mining remains. A transfer learning approach based on a lunar crater detection model is used, due to the task similarities between both the underlying data structures and the geometries of the objects to be detected. The relationship between dataset resolution and detection accuracy is also examined, with the results showing that the approach is capable of detecting pits and shafts to a high degree of accuracy, with precision and recall scores between 0.80-0.92, provided the input data is of sufficient quality and resolution. Alongside resolution, different LiDAR data representations are explored, showing that the precision-recall balance varies depending on the input LiDAR data representation. The third case study creates a deep convolutional neural network model to detect artisanal-scale mining from multispectral satellite data. This model is trained from initialisation without transfer learning and demonstrates that accurate multispectral models can be built from a smaller training dataset when appropriate design and data augmentation strategies are adopted. Alongside the deep learning model, novel mosaicing algorithms are developed both to improve cloud cover penetration and to decrease noise in the final prediction maps. When applied to the study area, the results from this model provide valuable information about the expansion, migration and forest encroachment of artisanal-scale mining in southwestern Ghana over the last four years.
Finally, this thesis presents an implementation framework for these neural-network-based object detection models, to generalise the findings from this research to new mining-sector deep learning tasks. This framework can be used to identify applications which would benefit from neural network approaches, to build the models, and to apply these algorithms in a real-world environment. The case study chapters confirm that the neural network models are capable of interpreting remotely sensed data to a high degree of accuracy on real-world mining problems, while the framework guides the development of new models to solve a wide range of related challenges.
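
One concrete step in the first case study is grouping per-point "rock bolt" predictions with density-based spatial clustering. A minimal sketch using DBSCAN on 3D point coordinates follows; the eps and min_samples values are illustrative assumptions, not the thesis's tuned parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def group_bolt_points(points, eps=0.1, min_samples=10):
    """Cluster 3D points classified as 'rock bolt' into individual bolts.

    points : (n, 3) array of xyz coordinates from the laser scan
    eps, min_samples : DBSCAN density parameters (values are illustrative)
    Returns one centroid per detected bolt; noise points (label -1) are dropped.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return [points[labels == k].mean(axis=0) for k in set(labels) if k != -1]

# Toy example: two tight clusters of predicted bolt points plus one outlier.
rng = np.random.default_rng(0)
cloud = np.vstack([
    rng.normal([0, 0, 0], 0.02, (30, 3)),
    rng.normal([1, 1, 1], 0.02, (30, 3)),
    [[5.0, 5.0, 5.0]],
])
print(group_bolt_points(cloud))   # two centroids near (0, 0, 0) and (1, 1, 1)
```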