156 research outputs found

    Artificial Intelligence in Historical Document Analysis: Pattern recognition and machine learning techniques in the study of ancient manuscripts with a focus on the Dead Sea Scrolls

    Get PDF
    The Ph.D. thesis investigates the potential of artificial intelligence (AI) in analyzing ancient historical manuscripts, focusing on images of the Dead Sea Scrolls (DSS). The research employs several computer vision, pattern recognition, and machine learning techniques to address writer identification and dating challenges. An initial study highlights the successful application of character shape features, achieving high accuracy in identifying multiple authors within the DSS collection. After recognizing the crucial role of binarization (extracting ink traces from the background materials) for accurate writer identification, the thesis introduces BiNet, a deep neural network. BiNet utilizes multispectral images and outperforms traditional models in binarizing highly degraded ancient manuscripts, facilitating improved calculation of textural and allographic features. Building upon the success of BiNet, the study identifies multiple authors for the Great Isaiah Scroll, one of the longest scrolls in the DSS collection. The quantitative findings propose a hypothesis contrary to established assumptions about the scroll's authorship. Following the success of writer identification, the thesis employs support vector regression and a self-organizing time map for broader time-period classification. Enoch, a Bayesian regression-based model for date prediction, integrates AI with radiocarbon dating, presenting a pioneering technique for estimating manuscript dates. The interdisciplinary fusion of AI with historical research enhances our understanding of writers' identities, the dating of ancient manuscripts, and the contextualization of historical narratives. The thesis advances methodologies for analyzing ancient manuscripts, contributing to improved interpretations of the past and laying the foundation for further interdisciplinary exploration in historical document analysis.
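
    The abstract does not spell out Enoch's exact formulation; the following is only a minimal sketch of the general idea of style-based date prediction, using scikit-learn's BayesianRidge as a stand-in Bayesian regressor and hypothetical feature and date arrays in place of the thesis's own allographic/textural features and radiocarbon calibration data.

```python
# Sketch of style-based date prediction in the spirit of Enoch:
# a Bayesian regression mapping handwriting-style feature vectors to
# calendar dates, trained on manuscripts with radiocarbon-based dates.
# Feature extraction and the actual Enoch model are NOT shown; the
# arrays below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)

# Hypothetical training data: one row of style features per
# radiocarbon-dated manuscript, and its calibrated date (years,
# negative = BCE) as the regression target.
X_train = rng.normal(size=(40, 64))          # 40 dated manuscripts, 64 features
y_train = rng.uniform(-250, -50, size=40)    # calibrated dates in years

model = BayesianRidge()
model.fit(X_train, y_train)

# Predict a date (with an uncertainty estimate) for an undated manuscript.
x_query = rng.normal(size=(1, 64))
date_mean, date_std = model.predict(x_query, return_std=True)
print(f"estimated date: {date_mean[0]:.0f} ± {date_std[0]:.0f} years")
```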

    Oil Spill Segmentation using Deep Encoder-Decoder models

    Full text link
    Crude oil is an integral component of the modern world economy. With the growing demand for crude oil due to its widespread applications, accidental oil spills are unavoidable. Even though oil spills are in and of themselves difficult to clean up, the first and foremost challenge is to detect spills. In this research, the authors test the feasibility of deep encoder-decoder models that can be trained effectively to detect oil spills. The work compares the results from several segmentation models on high-dimensional satellite Synthetic Aperture Radar (SAR) image data. Multiple combinations of encoders and decoders are used in running the experiments. The best-performing model, with a ResNet-50 encoder and a DeepLabV3+ decoder, achieves a mean Intersection over Union (IoU) of 64.868% and a class IoU of 61.549% for the "oil spill" class, compared with the current benchmark model's mean IoU of 65.05% and class IoU of 53.38% for the same class.
    Comment: 10 pages, 8 figures, 4 tables.
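
    The abstract names the models but not how the reported figures are obtained; the sketch below only illustrates how per-class and mean Intersection over Union are typically computed from integer-labelled masks, with hypothetical class indices and random toy masks in place of the paper's SAR data.

```python
# Per-class Intersection over Union (IoU) and its mean over classes,
# computed from integer-labelled prediction and ground-truth masks.
# The class layout (5 classes, "oil spill" = class 1) is hypothetical.
import numpy as np

def class_iou(pred, target, cls):
    """IoU for one class between two integer label masks."""
    pred_c, target_c = (pred == cls), (target == cls)
    intersection = np.logical_and(pred_c, target_c).sum()
    union = np.logical_or(pred_c, target_c).sum()
    return intersection / union if union > 0 else float("nan")

def mean_iou(pred, target, num_classes):
    ious = [class_iou(pred, target, c) for c in range(num_classes)]
    return np.nanmean(ious), ious

# Toy example with random masks standing in for model output and labels.
pred = np.random.randint(0, 5, size=(256, 256))
target = np.random.randint(0, 5, size=(256, 256))
miou, ious = mean_iou(pred, target, num_classes=5)
print(f"mean IoU: {miou:.3%}, oil-spill IoU: {ious[1]:.3%}")
```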

    Artificial intelligence based writer identification generates new evidence for the unknown scribes of the Dead Sea Scrolls exemplified by the Great Isaiah Scroll (1QIsaa)

    Get PDF
    The Dead Sea Scrolls are tangible evidence of the Bible's ancient scribal culture. This study takes an innovative approach to palaeography, the study of ancient handwriting, as a new entry point to access this scribal culture. One of the problems of palaeography is to determine writer identity or difference when the writing style is near uniform. This is exemplified by the Great Isaiah Scroll (1QIsaa). To this end, we use pattern recognition and artificial intelligence techniques to innovate the palaeography of the scrolls and to pioneer analysis at the microlevel of individual scribes, opening access to the Bible's ancient scribal culture. We report new evidence for a breaking point in the series of columns in this scroll. Without prior assumption of writer identity, and based on point clouds in a reduced-dimensionality feature space, we found that columns from the first and second halves of the manuscript ended up in two distinct zones of such scatter plots, notably for a range of digital palaeography tools, each addressing very different featural aspects of the script samples. In a secondary, independent analysis, now assuming writer difference and using yet another independent feature method and several different types of statistical testing, a switching point was found in the column series. A clear phase transition is apparent in columns 27-29. We also demonstrated a difference in distance variances such that the variance is higher in the second part of the manuscript. Given the statistically significant differences between the two halves, a tertiary, post-hoc analysis was performed using visual inspection of character heatmaps and of the most discriminative Fraglet sets in the script. Demonstrating that two main scribes, each showing different writing patterns, were responsible for the Great Isaiah Scroll, this study sheds new light on the Bible's ancient scribal culture by providing new, tangible evidence that ancient biblical texts were not copied by a single scribe only but that multiple scribes, while carefully mirroring another scribe's writing style, could closely collaborate on one particular manuscript.
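
    The paper's own feature methods and statistics are not reproducible from the abstract alone; the following is a minimal sketch of the general pattern of analysis it describes: project per-column feature vectors into a low-dimensional point cloud, split the column series at an assumed breaking point, and test whether the halves differ. The feature matrix, the split at column 27, and the choice of test are hypothetical stand-ins.

```python
# Sketch: reduced-dimensionality point cloud of per-column features,
# split into first/second halves, with a simple nonparametric test and
# a variance comparison. All inputs are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical per-column feature vectors (54 columns x 128 features).
features = rng.normal(size=(54, 128))

points = PCA(n_components=2).fit_transform(features)   # 2D point cloud
first, second = points[:27], points[27:]                # assumed split at column 27

# Distances to each half's centroid as a proxy for within-half variance,
# plus a rank test for separation along the first principal component.
d1 = np.linalg.norm(first - first.mean(axis=0), axis=1)
d2 = np.linalg.norm(second - second.mean(axis=0), axis=1)
stat, p = mannwhitneyu(first[:, 0], second[:, 0])
print(f"variance proxy: {d1.var():.3f} vs {d2.var():.3f}, PC1 split p={p:.3f}")
```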

    Writer adaptation for offline text recognition: An exploration of neural network-based methods

    Full text link
    Handwriting recognition has seen significant success with the use of deep learning. However, a persistent shortcoming of neural networks is that they are not well-equipped to deal with shifting data distributions. In the field of handwritten text recognition (HTR), this shows itself in poor recognition accuracy for writers that are not similar to those seen during training. An ideal HTR model should be adaptive to new writing styles in order to handle the vast range of possible writing styles. In this paper, we explore how HTR models can be made writer-adaptive by using only a handful of examples from a new writer (e.g., 16 examples) for adaptation. Two HTR architectures are used as base models, using a ResNet backbone along with either an LSTM or Transformer sequence decoder. Using these base models, two methods are considered to make them writer-adaptive: 1) model-agnostic meta-learning (MAML), an algorithm commonly used for tasks such as few-shot classification, and 2) writer codes, an idea originating from automatic speech recognition. Results show that an HTR-specific version of MAML known as MetaHTR improves performance compared to the baseline, with a 1.4 to 2.0 improvement in word error rate (WER). The improvement due to writer adaptation is between 0.2 and 0.7 WER, where a deeper model seems to lend itself better to adaptation using MetaHTR than a shallower model. However, applying MetaHTR to larger HTR models or sentence-level HTR may become prohibitive due to its high computational and memory requirements. Lastly, writer codes based on learned features or Hinge statistical features did not lead to improved recognition performance.
    Comment: 21 pages including appendices, 6 figures, 10 tables.
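
    MetaHTR's full meta-training objective is not described in the abstract; the sketch below shows only the test-time inner loop that MAML-style adaptation shares with plain fine-tuning: a few gradient steps on the new writer's ~16 labelled examples before evaluation. The model, loss function, and tensors are hypothetical placeholders.

```python
# Test-time writer adaptation in a MAML-style setup: copy the trained HTR
# model, take a few gradient steps on one writer's small support set, then
# evaluate the adapted copy on that writer's remaining data.
import copy
import torch

def adapt_to_writer(base_model, loss_fn, support_x, support_y,
                    inner_steps=3, inner_lr=1e-4):
    """Return a copy of base_model adapted to one writer's support set."""
    adapted = copy.deepcopy(base_model)
    adapted.train()
    optimizer = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        optimizer.zero_grad()
        loss = loss_fn(adapted(support_x), support_y)
        loss.backward()
        optimizer.step()
    adapted.eval()
    return adapted

# Usage (hypothetical): 16 line images from a new writer.
# adapted_model = adapt_to_writer(base_model, recognition_loss,
#                                 support_images, support_targets)
# predictions = adapted_model(query_images)
```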