    Novel Event Detection and Classification for Historical Texts

    Event processing is an active area of research in the Natural Language Processing community, but the resources and automatic systems developed so far have mainly addressed contemporary texts. However, recognising and elaborating events is a crucial step when dealing with historical texts, particularly in the current era of massive digitization of historical sources: research in this domain can lead to methodologies and tools that assist historians in enhancing their work, while also having an impact on the field of Natural Language Processing. Our work aims at shedding light on the complex concept of events when dealing with historical texts. More specifically, we introduce new annotation guidelines for event mentions and types, categorised into 22 classes. We then annotate a historical corpus accordingly and compare two approaches for automatic event detection and classification following this novel scheme. We believe that this work can foster research in a field of inquiry so far underestimated in the area of Temporal Information Processing. To this end, we release new annotation guidelines, a corpus, and new models for automatic annotation.
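
    As a purely illustrative aside (not drawn from the paper itself), event detection and classification of this kind is often framed as token-level BIO tagging, where each mention span carries a label encoding its event class. The minimal Python sketch below uses invented example data and a hypothetical class name, not the paper's 22-class scheme or either of its two compared approaches.

    # Illustrative only: event detection and classification framed as BIO tagging.
    # The sentence, labels, and the class name "AGREEMENT" are invented placeholders.
    tokens = ["The", "treaty", "was", "signed", "in", "Vienna", "in", "1815", "."]
    bio_labels = ["O", "O", "O", "B-AGREEMENT", "O", "O", "O", "O", "O"]

    def extract_events(tokens, labels):
        """Collect (event class, mention text) pairs from a BIO-labelled token sequence."""
        events, current = [], None
        for tok, lab in zip(tokens, labels):
            if lab.startswith("B-"):          # a new event mention begins here
                current = [lab[2:], [tok]]
                events.append(current)
            elif lab.startswith("I-") and current is not None:
                current[1].append(tok)        # the mention continues
            else:
                current = None                # outside any mention
        return [(cls, " ".join(words)) for cls, words in events]

    print(extract_events(tokens, bio_labels))  # -> [('AGREEMENT', 'signed')]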

    Text Mining Oral Histories in Historical Archaeology

    Advances in text mining and natural language processing methodologies have the potential to productively inform historical archaeology and oral history research. However, text mining methods have largely been developed in the context of contemporary big data and publicly available texts, limiting the applicability of these tools to historical and archaeological interpretation. Given the ability of text analysis to efficiently process and analyze large volumes of data, the potential for such tools to meaningfully inform historical archaeological research is significant, particularly when working with digitized data repositories or lengthy texts. Using oral histories recorded about a half-century ago in the anthracite coal mining region of Pennsylvania, USA, we discuss recent developments in text analysis methodologies. We suggest future pathways to bridge the gap between generalized text mining methods and the particular needs of working with historical and place-based texts.

    A time-sensitive historical thesaurus-based semantic tagger for deep semantic annotation

    Automatic extraction and analysis of meaning-related information from natural language data has been an important issue in a number of research areas, such as natural language processing (NLP), text mining, corpus linguistics, and data science. An important aspect of such information extraction and analysis is the semantic annotation of language data using a semantic tagger. In practice, various semantic annotation tools have been designed to carry out different levels of semantic annotation, such as topics of documents, semantic role labeling, named entities, or events. Currently, the majority of existing semantic annotation tools identify and tag only part of the core semantic information in language data, but they tend to be applicable only to modern language corpora. While such semantic analyzers have proven useful for various purposes, a semantic annotation tool capable of annotating deep semantic senses of all lexical units, or all-words tagging, is still desirable for a deep, comprehensive semantic analysis of language data. With large-scale digitization efforts underway, delivering historical corpora with texts dating from the last 400 years, a particularly challenging aspect is the need to adapt the annotation in the face of significant word meaning change over time. In this paper, we report on the development of a new semantic tagger (the Historical Thesaurus Semantic Tagger) and discuss challenging issues we faced in this work. This new semantic tagger is built on existing NLP tools and incorporates a large-scale historical English thesaurus linked to the Oxford English Dictionary. Employing contextual disambiguation algorithms, this tool is capable of annotating lexical units with a historically valid, highly fine-grained semantic categorization scheme that contains about 225,000 semantic concepts and 4,033 thematic semantic categories. In terms of novelty, it is adapted for processing historical English data, with rich information about the historical usage of words and a spelling variant normalizer for historical forms of English. Furthermore, it is able to make use of knowledge about the publication date of a text to adapt its output. In our evaluation, the system achieved encouraging accuracies ranging from 77.12% to 91.08% on individual test texts. Applying time-sensitive methods improved results by as much as 3.54% and by 1.72% on average.
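
    To make the time-sensitive idea concrete, the Python sketch below filters a word's candidate senses by whether each sense was attested at the text's publication date. It is only a rough illustration: the lexicon contents, field names, and first-match selection rule are invented for the example and do not reflect the Historical Thesaurus Semantic Tagger's actual data model, categories, or disambiguation algorithms.

    # Hypothetical sketch of date-aware sense filtering; not the tagger's real API.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Sense:
        category: str                  # e.g. a thesaurus category label
        first_attested: int            # earliest year the sense is recorded
        last_attested: Optional[int]   # None means the sense is still in use

    # Toy lexicon: an obsolete sense and a modern sense of "car" (dates are illustrative).
    LEXICON = {
        "car": [
            Sense("chariot/cart", 1300, 1700),
            Sense("motor vehicle", 1890, None),
        ],
    }

    def tag(word: str, publication_year: int) -> Optional[str]:
        """Return the first sense whose attested date range covers the text's date."""
        for sense in LEXICON.get(word.lower(), []):
            started = sense.first_attested <= publication_year
            not_ended = sense.last_attested is None or publication_year <= sense.last_attested
            if started and not_ended:
                return sense.category
        return None

    print(tag("car", 1650))  # -> chariot/cart
    print(tag("car", 1995))  # -> motor vehicle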

    Natural Language Processing for Teaching Ancient Languages

    Konstantin Schulz shows various applications of natural language processing (NLP) to the field of Classics, especially to Latin texts. He addresses different levels of linguistic analysis while also highlighting educational benefits and important theoretical pitfalls, especially in vocabulary learning. NLP can solve some problems reasonably well, such as tailoring exercises to the learners' current state of knowledge. However, some tasks still prove too difficult for machines at the moment, e.g. reliable and highly accurate syntactic parsing of historical languages.

    Improving historical spelling normalization with bi-directional LSTMs and multi-task learning

    Natural-language processing of historical documents is complicated by the abundance of variant spellings and the lack of annotated data. A common approach is to normalize the spelling of historical words to modern forms. We explore the suitability of a deep neural network architecture for this task, particularly a deep bi-LSTM network applied on the character level. Our model compares well to previously established normalization algorithms when evaluated on a diverse set of texts from Early New High German. We show that multi-task learning with additional normalization data can improve our model's performance further.
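
    To make the architecture concrete, here is a minimal PyTorch sketch of a character-level bi-LSTM normalizer, under the simplifying assumption that historical and modern spellings have been aligned character-by-character to equal length. It is not the paper's exact model, and the multi-task setup (which would add a further output layer sharing the same bi-LSTM) is omitted.

    # Sketch only: character-level bi-LSTM that predicts one modern character per
    # aligned historical character. Sizes and the alignment assumption are illustrative.
    import torch
    import torch.nn as nn

    class CharBiLSTMNormalizer(nn.Module):
        def __init__(self, vocab_size: int, emb_dim: int = 32, hidden: int = 64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, vocab_size)  # logits over modern characters

        def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
            # char_ids: (batch, seq_len) indices of historical-spelling characters
            hidden_states, _ = self.bilstm(self.embed(char_ids))
            return self.out(hidden_states)  # (batch, seq_len, vocab_size)

    # Toy usage: 30-character vocabulary, two words padded to length 8.
    model = CharBiLSTMNormalizer(vocab_size=30)
    historical = torch.randint(0, 30, (2, 8))
    modern = torch.randint(0, 30, (2, 8))
    loss = nn.CrossEntropyLoss()(model(historical).reshape(-1, 30), modern.reshape(-1))
    loss.backward()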

    A Study of Techniques and Challenges in Text Recognition Systems

    Text recognition is the core system for Natural Language Processing (NLP) and digitization. These systems are critical in bridging the gaps in digitization produced by non-editable documents, as well as contributing to finance, health care, machine translation, digital libraries, and a variety of other fields. In addition, as a result of the pandemic, the amount of digital information in the education sector has increased, necessitating the deployment of text recognition systems to deal with it. Text recognition systems work on three different categories of text: (a) machine-printed, (b) offline handwritten, and (c) online handwritten texts. The major goal of this research is to examine the process of typewritten text recognition systems. The availability of historical documents and other traditional materials in many types of texts is another major challenge. Although this research examines a variety of languages, the Gurmukhi language receives the most focus. This paper presents an analysis of all prior text recognition algorithms for the Gurmukhi language. In addition, work on degraded texts in various languages is evaluated based on accuracy and F-measure.
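
    Since the surveyed work is evaluated in terms of accuracy and F-measure, a brief worked example of the F-measure may be helpful; the counts below are invented purely to illustrate the arithmetic.

    # F-measure (F1) as the harmonic mean of precision and recall; made-up counts.
    true_positives, false_positives, false_negatives = 90, 10, 30

    precision = true_positives / (true_positives + false_positives)  # 90/100 = 0.90
    recall = true_positives / (true_positives + false_negatives)     # 90/120 = 0.75
    f1 = 2 * precision * recall / (precision + recall)               # about 0.82

    print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")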

    Deep Learning for Period Classification of Historical Texts

    In this study, we address the interesting task of classifying historical texts by their assumed period of writing. This task is useful in digital humanities research, where many texts have unidentified publication dates. For years, the typical approach to temporal text classification was supervised machine learning. These algorithms require careful feature engineering and considerable domain expertise to design a feature extractor that transforms the raw text into a feature vector from which the classifier can learn to classify any unseen valid input. Recently, deep learning has produced extremely promising results for various tasks in natural language processing (NLP). The primary advantage of deep learning is that the feature layers are not designed by human engineers; instead, the features are learned from data with a general-purpose learning procedure. We investigated deep learning models for period classification of historical texts. We compared three common models: paragraph vectors, convolutional neural networks (CNN), and recurrent neural networks (RNN). We demonstrate that the CNN and RNN models outperformed the paragraph vector model and supervised machine-learning algorithms. In addition, we constructed word embeddings for each time period and analyzed semantic changes of word meanings over time.
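
    For readers unfamiliar with the model families compared here, the following PyTorch sketch shows a small convolutional text classifier of the kind usable for period classification. The layer sizes, filter widths, and number of periods are illustrative assumptions, not the authors' configuration.

    # Sketch only: 1D-CNN over word embeddings, max-pooled per filter width,
    # followed by a linear layer over the assumed number of time periods.
    import torch
    import torch.nn as nn

    class PeriodCNN(nn.Module):
        def __init__(self, vocab_size: int, num_periods: int, emb_dim: int = 100, filters: int = 64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # Convolutions over 3-, 4-, and 5-word windows.
            self.convs = nn.ModuleList(
                [nn.Conv1d(emb_dim, filters, kernel_size=k) for k in (3, 4, 5)]
            )
            self.classify = nn.Linear(3 * filters, num_periods)

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            # token_ids: (batch, seq_len)
            embedded = self.embed(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
            pooled = [torch.relu(conv(embedded)).max(dim=2).values for conv in self.convs]
            return self.classify(torch.cat(pooled, dim=1))    # (batch, num_periods) logits

    # Toy usage: two 50-token documents, a 10,000-word vocabulary, five periods.
    model = PeriodCNN(vocab_size=10_000, num_periods=5)
    logits = model(torch.randint(0, 10_000, (2, 50)))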

    Historical Models and Serial Sources

    Serial sources such as records, registers, and inventories are the ‘classic’ sources for quantitative history. Unstructured, narrative texts such as newspaper articles or reports were out of reach for historical analyses, both for practical reasons—availability, time needed for manual processing—and for methodological reasons: manual coding of texts is notoriously difficult and hampered by low inter-coder reliability. The recent availability of large amounts of digitized sources allows for the application of natural language processing, which has the potential to overcome these problems. However, the automatic evaluation of large amounts of text—and historical texts in particular—for historical research also brings new challenges. First of all, it requires a source criticism that goes beyond the individual source and also considers the corpus as a whole. It is a well-known problem in corpus linguistics to determine the ‘balancedness’ of a corpus, but when analyzing the content of texts rather than ‘just’ the language, determining the ‘meaningfulness’ of a corpus is even more important. Second, automatic analyses require operationalizable descriptions of the information you are looking for. Third, automatically produced results require interpretation, particularly when—as in history—the ultimate research question is qualitative, not quantitative. This, finally, poses the question of whether the insights gained could inform formal, i.e., machine-processable, models, which could serve as a foundation and stepping stones for further research.