27 research outputs found

    Automatic Vector Drawings of Cuneiform Tablets from 3D Measurement Data with the GigaMesh Software Framework

    Because of their curved shape, cuneiform tablets are often not fully legible in photographs. A more legible two-dimensional representation can be obtained by applying a local filtering method to high-resolution 3D scan data of the tablets. This method computes the local curvature at several scales using virtual spheres of different sizes, whose intersection with the triangle mesh of the surface is determined. This approach substantially improves the legibility of cuneiform tablets and of old stone inscriptions. For cuneiform tablets, the GigaMesh software framework presented here additionally makes it possible to detect the wedges and to generate an automatic tracing (vector representation).
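    The sphere-based multi-scale curvature idea can be illustrated in a reduced 2D setting: center a disc on a surface point and measure what fraction of it lies inside the material. For a flat surface that fraction is 1/2; a wedge imprint pulls it toward 1/4. The following sketch is our own simplified analogue (function names and the 2D reduction are not part of GigaMesh), evaluated on a grid rather than on a triangle mesh:

    ```python
    def solid_fraction(profile, x0, r, n=101):
        """Fraction of a disc of radius r, centered on the surface point
        (x0, profile(x0)), that lies inside the material (below the profile).
        ~0.5 for a flat surface; lower inside a wedge imprint."""
        y0 = profile(x0)
        inside = total = 0
        for i in range(n):
            for j in range(n):
                dx = (2.0 * i / (n - 1) - 1.0) * r
                dy = (2.0 * j / (n - 1) - 1.0) * r
                if dx * dx + dy * dy <= r * r:        # sample lies in the disc
                    total += 1
                    if y0 + dy < profile(x0 + dx):    # sample lies in the clay
                        inside += 1
        return inside / total

    def multiscale_feature(profile, x0, radii):
        # One value per virtual sphere radius; deviation from 0 signals curvature.
        return [2.0 * (0.5 - solid_fraction(profile, x0, r)) for r in radii]

    flat = lambda x: 0.0          # undisturbed clay surface
    notch = lambda x: -abs(x)     # idealized wedge imprint at x = 0
    ```

    Evaluating `multiscale_feature` at several radii yields a per-point feature vector, which is the 2D counterpart of filtering each vertex of the 3D mesh with spheres of different sizes.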

    Citizen science for cuneiform studies

    This paper examines the potential applications of Citizen Science and Linked Open Data within a critical Web Science framework. Described here is a work in progress concerning an interdisciplinary, multi-institutional project for the digitization, annotation and online dissemination of a large corpus of written material from ancient Mesopotamia. The paper includes an outline of the problems presented by a large, heterogeneous and incomplete dataset, as well as a discussion of Citizen Science as a potential solution combining both technical and social aspects. Drawing inspiration from other successful Citizen Science projects, the paper suggests a process for capturing and enriching the data in ways that can address not only the challenges of the current dataset, but also similar issues arising elsewhere on the wider Web.

    CNN based Cuneiform Sign Detection Learned from Annotated 3D Renderings and Mapped Photographs with Illumination Augmentation

    Motivated by the challenges of the Digital Ancient Near Eastern Studies (DANES) community, we develop digital tools for processing cuneiform script, a 3D script impressed into clay tablets that was used for more than three millennia and for at least eight major languages. It consists of thousands of characters that have changed over time and space. Photographs are the most common representations usable for machine learning, while ink drawings are prone to interpretation. Best suited are 3D datasets, which are becoming available. We created and used the HeiCuBeDa and MaiCuBeDa datasets, which consist of around 500 annotated tablets. For our novel OCR-like approach to mixed image data, we provide an additional mapping tool for transferring annotations between 3D renderings and photographs. Our sign localization uses a RepPoints detector to predict the locations of characters as bounding boxes. We use image data from GigaMesh's MSII (curvature, see https://gigamesh.eu) based rendering, Phong-shaded 3D models, and photographs, as well as illumination augmentation. The results show that using rendered 3D images for sign detection performs better than other work on photographs. In addition, our approach gives reasonably good results for photographs alone, while it is best used for mixed datasets. More importantly, the Phong renderings, and especially the MSII renderings, improve the results on photographs, which is the largest dataset on a global scale. (Comment: this paper was accepted to ICCV23 and includes the DOI for an open-access dataset with annotated cuneiform script.)
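    The illumination augmentation in the paper operates on 3D renderings and photographs; a minimal photometric stand-in (our own sketch, not the paper's pipeline) is to jitter the brightness and gamma of a grayscale image:

    ```python
    def illumination_augment(img, gain=1.0, gamma=1.0):
        """Apply a brightness (gain) and non-linear (gamma) change to a
        grayscale image given as nested lists of 0..255 pixel values."""
        out = []
        for row in img:
            out.append([
                min(255, max(0, round(255.0 * (p / 255.0) ** gamma * gain)))
                for p in row
            ])
        return out

    img = [[50, 100], [150, 200]]
    brighter = illumination_augment(img, gain=1.5)   # lifts all intensities, clipped at 255
    darker = illumination_augment(img, gamma=2.0)    # darkens midtones
    ```

    Sampling random `gain` and `gamma` values per training image simulates varying lighting conditions, which is the general purpose such augmentation serves for sign detection.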

    Restoration of Fragmentary Babylonian Texts Using Recurrent Neural Networks

    The main source of information about ancient Mesopotamian history and culture is clay cuneiform tablets. Despite being an invaluable resource, many tablets are fragmented, leading to missing information. Currently, these missing parts are completed manually by experts. In this work we investigate the possibility of assisting scholars, and even of automatically completing the breaks in ancient Akkadian texts from Achaemenid-period Babylonia, by modelling the language with recurrent neural networks.
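    The paper models the language with recurrent networks; the completion task itself can be illustrated with a far simpler character-bigram stand-in (our own toy, with an invented snippet of Latin-transliterated text, not the paper's model or data):

    ```python
    from collections import Counter, defaultdict

    def train_bigram(text):
        """Count, for each character, which characters follow it."""
        model = defaultdict(Counter)
        for a, b in zip(text, text[1:]):
            model[a][b] += 1
        return model

    def complete(model, prefix, n):
        """Greedily extend a broken text by up to n most-likely next characters."""
        out = prefix
        for _ in range(n):
            followers = model.get(out[-1])
            if not followers:
                break                               # unseen character: stop
            out += followers.most_common(1)[0][0]
        return out

    model = train_bigram("ana ekalli sarru illik ana ekalli")
    ```

    An RNN replaces the one-character lookback with a learned hidden state, letting the completion condition on the whole preceding context instead of a single character.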

    Web-based scientific exploration and analysis of 3D scanned cuneiform datasets for collaborative research

    The three-dimensional cuneiform script is one of the oldest known writing systems and a central object of research in Ancient Near Eastern Studies and Hittitology. An important step towards understanding the cuneiform script is the provision of opportunities and tools for joint analysis. This paper presents an approach that contributes to this challenge: a collaboration-ready, web-based scientific exploration and analysis of 3D-scanned cuneiform fragments. The WebGL-based concept incorporates methods for compressed web-based delivery of large 3D datasets and high-quality visualization. To maximize accessibility and to promote acceptance of 3D techniques in the field of Hittitology, the introduced concept is integrated into the Hethitologie-Portal Mainz, an established leading online research resource in the field, which until now has offered exclusively 2D content. The paper shows that increasing the availability of 3D-scanned archaeological data through a web-based interface can provide significant scientific value while finding a trade-off between copyright-induced restrictions and scientific usability.

    Cuneiform Detection in Vectorized Raster Images

    Documents written in cuneiform script are one of the largest sources about ancient history. The script was written by impressing wedges (Latin: cunei) into clay tablets and was used for almost four millennia. This three-dimensional script is typically transcribed by hand with ink on paper. Such transcriptions are available in large quantities as raster graphics from online sources such as the Cuneiform Digital Library Initiative (CDLI). In this article we present an approach to extract Scalable Vector Graphics (SVG) in 2D from raster images, as we previously did from 3D models. This enlarges our basis of datasets for tasks like word-spotting. In the first step of vectorizing the raster images, we extract smooth outlines and a minimal graph representation of sets of wedges, i.e., the main components of cuneiform characters. We then discretize these outlines, followed by a Delaunay triangulation, to extract skeletons of sets of connected wedges. To separate the sets into single wedges, we experimented with different conflict-resolution strategies and candidate pruning. A thorough evaluation of our method and its parameters on real-world data shows that the wedges are extracted with a true positive rate of 0.98. At the same time, the false positive rate is 0.2, which calls for future extensions using statistics about the geometric configurations of wedge sets.
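    The discretization step, turning a smooth closed outline into evenly spaced points before the Delaunay triangulation, can be sketched as uniform arc-length resampling of a closed polygon (function name and details are our own, not the paper's implementation):

    ```python
    import math

    def resample_closed(poly, n):
        """Resample a closed polygon outline into n points spaced uniformly
        along its perimeter (input: list of (x, y) vertices in order)."""
        pts = list(poly) + [poly[0]]                      # close the loop
        seg = [math.dist(a, b) for a, b in zip(pts, pts[1:])]
        step = sum(seg) / n
        out, acc, i = [], 0.0, 0
        for k in range(n):
            target = k * step
            while acc + seg[i] < target - 1e-12:          # advance to the segment
                acc += seg[i]                             # containing this sample
                i += 1
            t = (target - acc) / seg[i]
            (ax, ay), (bx, by) = pts[i], pts[i + 1]
            out.append((ax + t * (bx - ax), ay + t * (by - ay)))
        return out

    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    points = resample_closed(square, 8)   # 8 points, 0.5 apart along the boundary
    ```

    A Delaunay triangulation of such resampled boundary points yields interior triangles whose circumcenters approximate the outline's skeleton, which is the structure the wedge separation then operates on.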

    New visualization techniques for cuneiform texts and sealings


    Medieval Coins of Three Different Types and of Various States of Preservation

    We have developed a device for digitizing coins using photometric stereo, which serves two purposes. For inventory, it allows identifying a coin that has been digitized before and avoids mixing up similar coins. This is important because classic marking directly on the object is not possible without obscuring the design. Secondly, one can view a digitized coin on screen and interactively change the light direction, similar to Reflectance Transformation Imaging (RTI). This enables researchers to better recognize details, especially in the case of often-corroded coin finds, and also enables location-independent investigation and exchange. The digitization result consists of color (albedo) and normal information for each pixel, which makes it possible to analyze topographic properties apart from color. We think that this type of data can enable the development of new algorithmic analysis methods. The classification of coins, especially medieval coins, requires specialist knowledge and a great deal of experience. Digital support can help archaeologists without numismatic knowledge to classify coins correctly by providing initial clues and showing which coins in a comparative database are similar to a newly found coin. For the development of such digital tools, we provide a selection of coin data as an open dataset. We have selected samples of medieval coins of three different types, which are described in Mehl 499, 595 and Bahrfeldt 19. The dataset contains 2D and 3D data only for their obverses. A possible research direction could be to measure the similarity between these samples, such that samples of the same type are more similar than samples of different types. Many samples show only a part of a complete coin, which increases the challenge, e.g., for shape correspondence. Multi-scale integral invariant (MSII) features included with the 3D data may help to focus on minting features.
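    The per-pixel albedo and normal recovery behind photometric stereo reduces, for a Lambertian surface seen under three known non-coplanar light directions, to a 3x3 linear system. The following is a minimal stdlib-only sketch under that textbook model (function names and the synthetic pixel are ours; the actual device uses more lights and calibration):

    ```python
    import math

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    def solve3(a, b):
        """Solve the 3x3 system a.x = b by Cramer's rule."""
        d = det3(a)
        x = []
        for j in range(3):
            aj = [row[:] for row in a]
            for i in range(3):
                aj[i][j] = b[i]
            x.append(det3(aj) / d)
        return x

    def photometric_stereo_pixel(lights, intensities):
        """Recover albedo and unit normal of one pixel from three intensities
        under known unit light directions (Lambertian: I = albedo * L.N)."""
        g = solve3(lights, intensities)           # g = albedo * normal
        albedo = math.sqrt(sum(c * c for c in g))
        normal = [c / albedo for c in g]
        return albedo, normal

    lights = [[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]]
    intensities = [0.8, 0.64, 0.64]               # synthetic Lambertian pixel
    albedo, normal = photometric_stereo_pixel(lights, intensities)
    ```

    Running this over every pixel yields exactly the albedo map plus normal map described above; with more than three lights the system becomes overdetermined and is solved by least squares instead.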