
    Image and interpretation using artificial intelligence to read ancient Roman texts

    The ink and stylus tablets discovered at the Roman fort of Vindolanda are a unique resource for scholars of ancient history. However, the stylus tablets have proved particularly difficult to read. This paper describes a system that assists expert papyrologists in the interpretation of the Vindolanda writing tablets. A model-based approach is taken that relies on models of the written form of characters, together with statistical modelling of language, to produce plausible interpretations of the documents. Fusion of the contributions from the language, character, and image feature models is achieved using the GRAVA agent architecture, which uses Minimum Description Length (MDL) as the basis for information fusion across semantic levels. A system is developed that reads in image data and outputs plausible interpretations of the Vindolanda tablets.
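    The MDL fusion idea above can be illustrated schematically: each candidate reading is scored by the bits needed to encode the image evidence under the character models plus the bits needed to encode the reading under the language model, and the shortest total code wins. The candidate words, per-character costs, and bigram log-probabilities below are illustrative stand-ins, not Vindolanda data or the actual GRAVA implementation.

```python
import math

def description_length(word, char_costs, bigram_logp, prev_word):
    # Cost (in bits) of encoding the image evidence given this reading:
    # the sum of the per-character model costs...
    image_bits = sum(char_costs[c] for c in word)
    # ...plus the cost of encoding the reading under a bigram language
    # model; unseen bigrams get a large fallback cost.
    lang_bits = -bigram_logp.get((prev_word, word), math.log2(1e-6))
    return image_bits + lang_bits

def best_reading(candidates, char_costs, bigram_logp, prev_word):
    # MDL principle: prefer the interpretation with the shortest total code.
    return min(candidates,
               key=lambda w: description_length(w, char_costs, bigram_logp, prev_word))

# Toy example: ambiguous strokes could be read as "militis" or "milites";
# the language model breaks the tie.
char_costs = {c: 1.0 for c in "abcdefghijklmnopqrstuvwxyz"}
bigram_logp = {("nomine", "militis"): -2.0, ("nomine", "milites"): -5.0}
print(best_reading(["militis", "milites"], char_costs, bigram_logp, "nomine"))  # militis
```

    Both candidates cost the same under the character models here, so the decision reduces to the language-model term, which is exactly the kind of cross-level arbitration the abstract attributes to MDL.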

    A human computer interactions framework for biometric user identification

    Computer-assisted functionalities and services have saturated our world, becoming such an integral part of our daily activities that we hardly notice them. In this study we focus on enhancements in Human-Computer Interaction (HCI) that can be achieved by natural user recognition embedded in the employed interaction models. Natural identification among humans is mostly based on biometric characteristics representing what we are (face, body shape, voice, etc.) and how we behave (gait, gestures, posture, etc.). Following this observation, we investigate different approaches and methods for adapting existing biometric identification methods and technologies to the needs of evolving natural human-computer interfaces.

    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222; Degree conferral date: 2000-03-29; Degree category: Doctorate by coursework (課程博士); Degree type: Doctor of Engineering (博士(工学)); Degree registration number: 博工第4717号; Graduate school and department: Graduate School of Engineering, Information Engineering

    SUSIG: an on-line signature database, associated protocols and benchmark results

    We present a new online signature database (SUSIG). The database consists of two parts that are collected using different pressure-sensitive tablets (one with and the other without an LCD display). A total of 100 people contributed to each part, resulting in a database of more than 3,000 genuine signatures and 2,000 skilled forgeries. The genuine signatures in the database are real signatures of the contributors. In collecting skilled forgeries, forgers were shown the signing process on the monitor and were given a chance to practice. Furthermore, for a subset of the forgeries (highly skilled forgeries), this animation was mapped onto the LCD screen of the tablet so that the forgers could trace over the mapped signature. Forgers in this group were also informed of how close they were to the reference signature, so that they could improve their forgery quality. We describe the signature acquisition process and several verification protocols for this database. We also report the performance of a state-of-the-art signature verification system using the associated protocols. The results show that the highly skilled forgery set is significantly more difficult than the skilled forgery set, providing researchers with challenging forgeries. The database is available through http://icproxy.sabanciuniv.edu:215

    SUSIG: An On-line Handwritten Signature Database, Associated Protocols and Benchmark Results

    In this paper we describe a new online signature database which is available for use in developing or testing signature verification systems. The SUSIG database consists of two parts, collected using different pressure-sensitive tablets (one with and one without an LCD display). A total of 100 people contributed to each part, resulting in a database of more than 3,000 genuine and 2,000 skilled forgery signatures. One of the greatest problems in constructing such a database is obtaining skilled forgeries: people who donate to a database do not have the same motivation, nor the acquired skill, of a true forger intent on passing as the claimed identity. In this database, skilled forgeries were collected such that forgers saw the actual signing process played back on the monitor and had a chance to practice. Furthermore, for a subset of the skilled forgeries (highly skilled forgeries), the animation was mapped onto the LCD screen of the tablet so that the forgers could watch, as well as trace over, the signature. Forgers in this group were also informed of how close they were to the reference signatures, so that they could improve the forgery, and forgeries that were visibly dissimilar were not submitted. We describe the signature acquisition process, the approaches used to collect skilled forgeries, and the verification protocols which should be followed when assessing performance results. We also report the performance of a state-of-the-art online signature verification algorithm using the SUSIG database and the associated protocols. The results of this system show that the highly skilled forgery set composed of traced signatures is more difficult than the skilled forgery set. Furthermore, single-session protocols are easier than across-session protocols. The database is made available for academic purposes through http://biometrics.sabanciuniv.edu/SUSIG
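    Online signature verifiers of the kind benchmarked above typically compare pen trajectories as time series. As a minimal sketch of that comparison step, the dynamic-time-warping (DTW) distance below aligns two sampled trajectories and accepts the questioned signature when it warps closely onto the reference. The (x, y, pressure) feature choice and the accept threshold are illustrative assumptions, not the SUSIG protocols or the benchmarked system.

```python
def dtw_distance(a, b):
    # a, b: sequences of feature vectors sampled along the pen trajectory.
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two samples...
            d = sum((p - q) ** 2 for p, q in zip(a[i - 1], b[j - 1])) ** 0.5
            # ...plus the cheapest alignment of the remaining prefixes.
            cost[i][j] = d + min(cost[i - 1][j],      # stretch sequence b
                                 cost[i][j - 1],      # stretch sequence a
                                 cost[i - 1][j - 1])  # match both samples
    return cost[n][m]

def verify(reference, questioned, threshold=5.0):
    # Accept if the questioned trajectory aligns closely with the reference;
    # the threshold would normally be tuned on a development set.
    return dtw_distance(reference, questioned) <= threshold

# Toy trajectories of (x, y, pressure) samples.
ref = [(0, 0, 1), (1, 1, 1), (2, 1, 1)]
genuine = [(0, 0, 1), (1, 1, 1), (2, 1, 1)]
print(verify(ref, genuine))  # identical trajectories give zero DTW distance
```

    DTW is a natural fit here because genuine signatures vary in speed and duration between sessions, which is also why the across-session protocols mentioned above are harder than single-session ones.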

    DeepScribe: Localization and Classification of Elamite Cuneiform Signs Via Deep Learning

    Twenty-five hundred years ago, the paperwork of the Achaemenid Empire was recorded on clay tablets. In 1933, archaeologists from the University of Chicago's Oriental Institute (OI) found tens of thousands of these tablets and fragments during the excavation of Persepolis. Many of these tablets have been painstakingly photographed and annotated by expert cuneiformists, and now provide a rich dataset consisting of over 5,000 annotated tablet images and 100,000 cuneiform sign bounding boxes. We leverage this dataset to develop DeepScribe, a modular computer vision pipeline capable of localizing cuneiform signs and providing suggestions for the identity of each sign. We investigate the difficulty of learning subtasks relevant to cuneiform tablet transcription on ground-truth data, finding that a RetinaNet object detector can achieve a localization mAP of 0.78 and a ResNet classifier can achieve a top-5 sign classification accuracy of 0.89. The end-to-end pipeline achieves a top-5 classification accuracy of 0.80. As part of the classification module, DeepScribe groups cuneiform signs into morphological clusters. We consider how this automatic clustering approach differs from the organization of standard, printed sign lists and what we may learn from it. These components, trained individually, are sufficient to produce a system that can analyze photos of cuneiform tablets from the Achaemenid period and provide useful transliteration suggestions to researchers. We evaluate the model's end-to-end performance on locating and classifying signs, providing a roadmap to a linguistically-aware transliteration system, then consider the model's potential utility when applied to other periods of cuneiform writing.

    Comment: Currently under review in the ACM JOCC
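    The two-stage structure the abstract describes (a detector localizes sign bounding boxes, then a classifier ranks candidate identities for each crop) can be sketched in plain Python. The detector and classifier stubs, box coordinates, and Elamite sign names below are placeholders for illustration, not the DeepScribe models or their outputs.

```python
def detect_signs(tablet_image):
    # Stand-in for the detection stage (RetinaNet in the paper):
    # returns candidate bounding boxes as (x, y, width, height).
    return [(10, 12, 30, 28), (48, 14, 26, 30)]

def classify_crop(crop_box):
    # Stand-in for the classification stage (ResNet in the paper):
    # returns (sign, score) pairs for the cropped region.
    return [("NA", 0.61), ("HAL", 0.21), ("KUR", 0.09),
            ("AN", 0.05), ("MA", 0.04)]

def transliterate(tablet_image, k=5):
    # End-to-end pipeline: detect each sign, then keep the top-k identity
    # suggestions, mirroring the top-5 suggestion lists the abstract reports.
    results = []
    for box in detect_signs(tablet_image):
        ranked = sorted(classify_crop(box), key=lambda s: s[1], reverse=True)
        results.append({"box": box, "top_k": ranked[:k]})
    return results

suggestions = transliterate(tablet_image=None)
print(len(suggestions), suggestions[0]["top_k"][0][0])  # prints "2 NA"
```

    Keeping the stages modular, as here, is what lets the paper report localization mAP and classification accuracy separately before measuring the combined end-to-end accuracy.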