131 research outputs found

    Geospatial Analysis and Modeling of Textual Descriptions of Pre-modern Geography

    Textual descriptions of pre-modern geography offer a different view of classical geography. These descriptions were produced when none of the modern geographical concepts and tools were available. In this dissertation, we study pre-modern geography primarily by identifying the existing structures of the descriptions and the different cases of geographical data. We first explain four major geographical cases in pre-modern Arabic sources: gazetteers, administrative hierarchies, routes, and toponyms associated with people. Focusing on hierarchical divisions and routes, we offer approaches for the manual annotation of administrative hierarchies and route sections, as well as a semi-automated toponym annotation. The latter starts with a fuzzy search of toponyms from an authority list and applies two different extrapolation models to infer true or false values, based on context, for disambiguating the automatically annotated toponyms. With the annotated data, we introduce mathematical models to shape and visualize regions based on the description of administrative hierarchies. Moreover, we offer models for comparing hierarchical divisions and route networks from different sources. We also suggest approaches to approximate geographical coordinates for places that do not have them - we call them unknown places - which is a major issue in the visualization of pre-modern places on a map. The final chapter of the dissertation introduces the new version of al-Ṯurayyā, a gazetteer and a spatial model of the classical Islamic world using georeferenced data of a pre-modern atlas with more than 2,000 toponyms and routes. It offers search, path finding, and flood network functionalities, as well as visualizations of regions using one of the models that we describe for regions.
Although the gazetteer is designed around classical Islamic world data, the spatial model and its features can be used with similarly prepared datasets.

Contents:
1 Introduction
2 Related Work
    2.1 GIS
    2.2 NLP, Georeferencing, Geoparsing, Annotation
    2.3 Gazetteer
    2.4 Modeling
3 Classical Geographical Cases
    3.1 Gazetteer
    3.2 Routes and Travelogues
    3.3 Administrative Hierarchy
    3.4 Geographical Aspects of Biographical Data
4 Annotation and Extraction
    4.1 Annotation
        4.1.1 Manual Annotation of Geographical Texts
            4.1.1.1 Administrative Hierarchy
            4.1.1.2 Routes and Travelogues
        4.1.2 Semi-Automatic Toponym Annotation
            4.1.2.1 The Annotation Process
            4.1.2.2 Extrapolation Models
                4.1.2.2.1 Frequency of Toponymic N-grams
                4.1.2.2.2 Co-occurrence Frequencies
                4.1.2.2.3 A Supervised ML Approach
            4.1.2.3 Summary
    4.2 Data Extraction and Structures
        4.2.1 Administrative Hierarchy
        4.2.2 Routes and Distances
5 Modeling Geographical Data
    5.1 Mathematical Models for Administrative Hierarchies
        5.1.1 Sample Data
        5.1.2 Quadtree
        5.1.3 Voronoi Diagram
        5.1.4 Voronoi Clippings
            5.1.4.1 Convex Hull
            5.1.4.2 Concave Hull
        5.1.5 Convex Hulls
        5.1.6 Concave Hulls
        5.1.7 Route Network
        5.1.8 Summary of Models for Administrative Hierarchy
    5.2 Comparison Models
        5.2.1 Hierarchical Data
            5.2.1.1 Test Data
        5.2.2 Route Networks
            5.2.2.1 Post-processing
            5.2.2.2 Applications
    5.3 Unknown Places
6 Al-Ṯurayyā
    6.1 Introducing al-Ṯurayyā
    6.2 Gazetteer
    6.3 Spatial Model
        6.3.1 Provinces and Administrative Divisions
        6.3.2 Pathfinding and Itineraries
        6.3.3 Flood Network
        6.3.4 Path Alignment Tool
        6.3.5 Data Structure
            6.3.5.1 Places
            6.3.5.2 Routes and Distances
7 Conclusions and Further Work
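
The semi-automated toponym annotation described above - a fuzzy lookup against an authority list, followed by an extrapolation step that assigns true or false values from context - can be sketched in miniature. Everything here (the authority list, the context-word heuristic, all function names) is a hypothetical illustration, not the dissertation's actual models, which use n-gram frequencies, co-occurrence counts, and a supervised ML approach:

```python
from difflib import get_close_matches

# Hypothetical authority list of transliterated toponyms.
AUTHORITY = ["Baghdad", "Basra", "Kufa", "Damascus", "Samarra"]

def fuzzy_annotate(tokens, cutoff=0.8):
    """Mark tokens that fuzzily match an authority-list toponym."""
    candidates = []
    for i, tok in enumerate(tokens):
        match = get_close_matches(tok, AUTHORITY, n=1, cutoff=cutoff)
        if match:
            candidates.append((i, tok, match[0]))
    return candidates

def disambiguate(candidates, tokens, context_words={"city", "province", "road"}):
    """Toy extrapolation step: accept a candidate as a true toponym
    only if a geographical context word occurs within three tokens."""
    accepted = []
    for i, tok, canonical in candidates:
        window = tokens[max(0, i - 3): i + 4]
        is_place = bool(context_words & {w.lower() for w in window})
        accepted.append((tok, canonical, is_place))
    return accepted

tokens = "the city of Baghdaad lies on the road to Kufa".split()
cands = fuzzy_annotate(tokens)
print(disambiguate(cands, tokens))
```

The fuzzy step deliberately over-generates (here it catches the misspelled "Baghdaad"); the extrapolation step then decides, per occurrence, whether the match really denotes a place.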

    The TXM Portal Software giving access to Old French Manuscripts Online

    Full text online: http://www.lrec-conf.org/proceedings/lrec2012/workshops/13.ProceedingsCultHeritage.pdf This paper presents the new TXM software platform giving online access to Old French text manuscripts, with images and tagged transcriptions, for concordancing and text mining. The platform is able to import medieval sources encoded in XML according to the TEI Guidelines for linking manuscript images to transcriptions, and to encode several diplomatic levels of transcription, including abbreviations and word-level corrections. It includes a sophisticated tokenizer able to deal with TEI tags at different levels of the linguistic hierarchy. Words are tagged on the fly during the import process using the IMS TreeTagger tool with a specific language model. Synoptic editions displaying manuscript images and text transcriptions side by side are automatically produced during the import process. Texts are organized in a corpus with their own metadata (title, author, date, genre, etc.), and several word-property indexes are produced for the CQP search engine to allow efficient word-pattern searches for building different types of frequency lists or concordances. For syntactically annotated texts, special indexes are produced for the TIGERSearch engine to allow efficient building of syntactic concordances. The platform has also been tested on classical Latin, ancient Greek, Old Slavonic, and Old Hieroglyphic Egyptian corpora (including various types of encoding and annotations).

    The Trouble with Knowing: Wikipedian consensus and the political design of encyclopedic media

    Encyclopedias are interfaces between knowing and the unknown. They are devices that negotiate the middle ground between incompatible knowledge systems while also performing as dream machines that explore the political outlines of an enlightened society. Building upon insights from critical feminist theory, media archaeology, and science and technology studies, the dissertation investigates how the utopian and impossible desires of encyclopedic media have left a wake of unresolvable epistemological crises. In a 2011 survey of editors of the online encyclopedia Wikipedia, it was reported that 87 per cent of Wikipedians identified as men. This statistic flew in the face of Wikipedia's utopian promise that it was an encyclopedia that "anyone can edit." Despite the early optimism and efforts to reduce this disparity, Wikipedia's parent organization acknowledged its inability to make Wikipedia significantly more equitable. This matter of concern raised two questions: What kinds of knowing subjects is Wikipedia designed to cultivate, and what does this conflict over who is included and excluded within Wikipedia tell us about the utopian dreams woven into encyclopedic media? This dissertation argues that answering these troubling questions requires an examination of the details of the present, but also of the impossible desires that Wikipedia inherited from its predecessors. The analysis of these issues begins with a genealogy of encyclopedias, encyclopedists, encyclopedic aesthetics, and encyclopedisms. It is followed by an archaeology of the twentieth-century deployment of consensus as an encyclopedic and political program. The third part examines how Wikipedia translated the imaginary ideal of consensus into a cultural technique. Finally, the dissertation mobilizes these analyses to contextualize how consensus was used to limit the dissenting activities of Wikipedia's Gender Gap Task Force.
The dissertation demonstrates that the desire and design of encircling knowledge through consensus cultivated Wikipedia's gender gap. In this context, if encyclopedic knowledge is to remain politically and culturally significant in the twenty-first century, it is necessary to tell a new story about encyclopedic media. It must be one where an attention to utopian imaginaries, practices, and techniques not only addresses how knowledge is communicated but also enables a sensitivity to the question of who can know.

    The American Literature Scholar in the Digital Age

    Essays reflecting on the development of the first wave of digital American literature scholarship.

    Traces of the Old, Uses of the New: The Emergence of Digital Literary Studies

    Digital Humanities remains a contested, umbrella term covering many types of work in numerous disciplines, including literature, history, linguistics, classics, theater, performance studies, film, media studies, computer science, and information science. In Traces of the Old, Uses of the New: The Emergence of Digital Literary Studies, Amy Earhart stakes a claim for a discipline-specific history of digital study as a necessary prelude to true progress in defining Digital Humanities as a shared set of interdisciplinary practices and interests. Traces of the Old, Uses of the New focuses on twenty-five years of developments, including digital editions, digital archives, e-texts, text mining, and visualization, to situate emergent products and processes in relation to historical trends of disciplinary interest in literary study. By reexamining the roil of theoretical debates and applied practices from the last generation of work in juxtaposition with applied digital work of the same period, Earhart also seeks to expose limitations in need of alternative methods, methods that might begin to deliver on the early (but thus far unfulfilled) promise that digitizing texts allows literature scholars to ask and answer questions in new and compelling ways. In mapping the history of digital literary scholarship, Earhart also seeks to chart viable paths to its future, and in doing this work in one discipline, this book aims to inspire similar work in others.

    Atti del IX Convegno Annuale dell'Associazione per l'Informatica Umanistica e la Cultura Digitale (AIUCD). La svolta inevitabile: sfide e prospettive per l'Informatica Umanistica

    Proceedings of the IX edition of the annual AIUCD conference.

    Applying Wikipedia to Interactive Information Retrieval

    There are many opportunities to improve the interactivity of information retrieval systems beyond the ubiquitous search box. One idea is to use knowledge bases - e.g. controlled vocabularies, classification schemes, thesauri and ontologies - to organize, describe and navigate the information space. These resources are popular in libraries and specialist collections, but have proven too expensive and narrow to be applied to everyday web-scale search. Wikipedia has the potential to bring structured knowledge into more widespread use. This online, collaboratively generated encyclopaedia is one of the largest and most consulted reference works in existence. It is broader, deeper and more agile than the knowledge bases put forward to assist retrieval in the past. Rendering this resource machine-readable is a challenging task that has captured the interest of many researchers. Many see it as a key step required to break the knowledge acquisition bottleneck that crippled previous efforts. This thesis claims that the roadblock can be sidestepped: Wikipedia can be applied effectively to open-domain information retrieval with minimal natural language processing or information extraction. The key is to focus on gathering and applying human-readable rather than machine-readable knowledge. To demonstrate this claim, the thesis tackles three separate problems: extracting knowledge from Wikipedia; connecting it to textual documents; and applying it to the retrieval process. First, we demonstrate that a large thesaurus-like structure can be obtained directly from Wikipedia, and that accurate measures of semantic relatedness can be efficiently mined from it. Second, we show that Wikipedia provides the necessary features and training data for existing data mining techniques to accurately detect and disambiguate topics when they are mentioned in plain text.
Third, we provide two systems and user studies that demonstrate the utility of the Wikipedia-derived knowledge base for interactive information retrieval.
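
Mining semantic relatedness from Wikipedia's link structure is often formalized as the Normalized Google Distance applied to inlink sets: two articles are related to the degree that the sets of pages linking to them overlap, normalized by the size of the whole encyclopedia. A minimal sketch with toy inlink sets (real values would come from a Wikipedia link dump; this is one common formulation, not necessarily the exact measure used in the thesis):

```python
import math

def relatedness(links_a, links_b, total_articles):
    """Link-overlap relatedness between two articles.
    links_a / links_b: sets of ids of pages that link to each article."""
    shared = links_a & links_b
    if not shared:
        return 0.0
    a, b = len(links_a), len(links_b)
    # Normalized Google Distance over inlink counts, inverted to a
    # similarity in [0, 1].
    dist = (math.log(max(a, b)) - math.log(len(shared))) / (
        math.log(total_articles) - math.log(min(a, b)))
    return max(0.0, 1.0 - dist)

# Toy data: "cat" and "dog" share most inlinks, "car" shares none.
cat = {1, 2, 3, 4, 5}
dog = {2, 3, 4, 5, 6}
car = {7, 8}
print(relatedness(cat, dog, 1000), relatedness(cat, car, 1000))
```

Because only set intersections and logarithms are involved, such measures can be computed efficiently over the full link graph, which is what makes them practical at encyclopedia scale.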

    Atti del IX Convegno Annuale AIUCD. La svolta inevitabile: sfide e prospettive per l'Informatica Umanistica.

    The ninth edition of the annual conference of the Associazione per l'Informatica Umanistica e la Cultura Digitale (AIUCD 2020; Milan, 15-17 January 2020) took as its theme "La svolta inevitabile: sfide e prospettive per l'Informatica Umanistica" ("The inevitable turn: challenges and perspectives for Digital Humanities"), with the specific aim of providing an occasion to reflect on the consequences of the growing spread of computational approaches to the treatment of data in the humanities. This volume collects the papers presented at the conference. In different ways, they address the proposed theme from viewpoints ranging from the theoretical-methodological to the empirical-practical, presenting the results of completed or ongoing works and projects in which the computational treatment of data is central.

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deals with Old Saxon or Old English. This project aims to be a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labeled dataset to train the model. The evaluation of the algorithm's performance reached 97% accuracy and a 99% weighted average for precision, recall, and F1 score. In addition, we tested the model with some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
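
The reported figures combine per-label precision, recall, and F1, each weighted by how often the label occurs in the gold annotation. A small self-contained sketch of that weighted averaging, with toy lift (/) and drop (x) labels standing in for actual scansion output (the labels and data here are illustrative, not from the Heliand corpus):

```python
from collections import Counter

def weighted_scores(gold, pred):
    """Weighted-average precision, recall, and F1 over metrical labels,
    weighting each label by its support in the gold annotation."""
    support = Counter(gold)
    totals = {"precision": 0.0, "recall": 0.0, "f1": 0.0}
    for lab in set(gold):
        tp = sum(1 for g, p in zip(gold, pred) if g == p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if p == lab and g != lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        weight = support[lab] / len(gold)  # label's share of the gold data
        totals["precision"] += weight * prec
        totals["recall"] += weight * rec
        totals["f1"] += weight * f1
    return totals

# Toy half-line: gold scansion vs. a prediction with one error.
gold = list("/x/x/x")
pred = list("/x/x//")
print(weighted_scores(gold, pred))
```

Weighting by support matters for scansion because metrical labels are typically imbalanced; an unweighted (macro) average would let rare labels dominate the score.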

    The Object of Platform Studies: Relational Materialities and the Social Platform (the case of the Nintendo Wii)

    Racing the Beam: The Atari Video Computer System, by Ian Bogost and Nick Montfort, inaugurated the Platform Studies series at MIT Press in 2009. We've coauthored a new book in the series, Codename: Revolution: The Nintendo Wii Video Game Console. Platform studies is a quintessentially Digital Humanities approach, since it's explicitly focused on the interrelationship of computing and cultural expression. According to the series preface, the goal of platform studies is "to consider the lowest level of computing systems and to understand how these systems relate to culture and creativity." In practice, this involves paying close attention to specific hardware and software interactions: to the vertical relationships between a platform's multilayered materialities (Hayles; Kirschenbaum), from transistors to code to cultural reception. Any given act of platform-studies analysis may focus, for example, on the relationship between the chipset and the OS, or between the graphics processor and display parameters or game developers' designs. In computing terms, a platform is an abstraction (Bogost and Montfort), a pragmatic frame placed around whatever hardware-and-software configuration is required in order to build or run certain specific applications (including creative works). The object of platform studies is thus a shifting series of possibility spaces, any number of dynamic thresholds between discrete levels of a system.