
    A Study of Techniques and Challenges in Text Recognition Systems

    Text recognition is a core technology for Natural Language Processing (NLP) and digitization. These systems are critical in bridging the gaps in digitization produced by non-editable documents, and they contribute to finance, health care, machine translation, digital libraries, and a variety of other fields. In addition, as a result of the pandemic, the amount of digital information in the education sector has increased, necessitating the deployment of text recognition systems to deal with it. Text recognition systems work on three categories of text: (a) machine-printed, (b) offline handwritten, and (c) online handwritten. The major goal of this research is to examine the process of typewritten text recognition systems. The availability of historical documents and other traditional materials in many types of texts is another major challenge. Although this research examines a variety of languages, the Gurmukhi language receives the most focus. This paper presents an analysis of all prior text recognition algorithms for the Gurmukhi language. In addition, work on degraded texts in various languages is evaluated based on accuracy and F-measure.
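    Since the survey's evaluation criteria are accuracy and F-measure, a minimal sketch of how these metrics are conventionally computed may help; the counts below are invented for illustration, not taken from the paper.

        # Accuracy and F-measure (harmonic mean of precision and recall),
        # the two metrics the survey uses to compare recognisers.
        def accuracy(correct: int, total: int) -> float:
            return correct / total

        def f_measure(tp: int, fp: int, fn: int) -> float:
            precision = tp / (tp + fp)   # fraction of output that is right
            recall = tp / (tp + fn)      # fraction of truth that is found
            return 2 * precision * recall / (precision + recall)

        # Example: 90 correctly recognised words, 5 spurious, 10 missed.
        print(f"{f_measure(90, 5, 10):.3f}")  # 0.923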

    Clearing the transcription hurdle in dialect corpus building: the corpus of Southern Dutch dialects as case-study

    This paper discusses how the transcription hurdle in dialect corpus building can be cleared. While corpus analysis has strongly gained in popularity in linguistic research, dialect corpora are still relatively scarce. This scarcity can be attributed to several factors, one of which is the challenging nature of transcribing dialects, given a lack of both orthographic norms for many dialects and speech technological tools trained on dialect data. This paper addresses the questions (i) how dialects can be transcribed efficiently and (ii) whether speech technological tools can lighten the transcription work. These questions are tackled using the Southern Dutch dialects (SDDs) as case study, for which the usefulness of automatic speech recognition (ASR), respeaking, and forced alignment is considered. Tests with these tools indicate that dialects still constitute a major speech technological challenge. In the case of the SDDs, the decision was made to use speech technology only for the word-level segmentation of the audio files, as the transcription itself could not be sped up by ASR tools. The discussion does, however, indicate that the usefulness of ASR and other related tools for a dialect corpus project is strongly determined by the sound quality of the dialect recordings, the availability of statistical dialect-specific models, the degree of linguistic differentiation between the dialects and the standard language, and the goals the transcripts have to serve.

    Patterns in the daily diary of the 41st president, George Bush

    This thesis explores interfaces for locating and comprehending patterns among time-based materials in digital libraries. Time-based digital library materials are like other digital library materials in that they are comprised of data and metadata; in addition, they have a time or period of time attached to each data item. The specific focus of this thesis is on fine-granularity items: items that have relatively little data and cover brief periods of time. In such a context, people often are left to discern patterns of activity by retrospectively making sense of the collection or parts thereof. The specific domain chosen for the implementation is the daily diary of President George Bush, the 41st president of the USA. This project developed a searching and browsing interface which allows people to study the relationship between activities and people in the library data. As part of this thesis, a corpus of the presidential daily diary was digitized. Two interfaces were provided to this corpus, one based on a standard information retrieval engine (Greenstone) and another presenting time-based visualizations of data items. An evaluation was conducted to explore the relative strengths and weaknesses of these two interfaces.
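    As a hypothetical illustration of the data model this implies (the field names and helper below are assumptions, not the thesis's actual schema), each fine-granularity item pairs a small datum with a time span, and a browsing interface can filter such items by time and participant:

        # Hypothetical shape of a fine-granularity, time-stamped diary
        # item and one filter over a collection of them.
        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class DiaryItem:
            start: datetime      # when the activity began
            end: datetime        # when it ended
            activity: str        # brief description: little data per item
            people: list         # participants mentioned in the entry

        def items_involving(items, person, day):
            """Items on a given day that mention a given person."""
            return [it for it in items
                    if it.start.date() == day and person in it.people]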

    Special Libraries, December 1977

    Volume 68, Issue 12

    Interactive Transcription of Old Text Documents

    Nowadays, there are huge collections of handwritten text documents in libraries all over the world. The high demand for these resources has led to the creation of digital libraries in order to facilitate the preservation of and provide electronic access to these documents. However, text transcriptions of these document images are not always available to allow users to quickly search information, or computers to process the information, search for patterns or draw out statistics. The problem is that manual transcription of these documents is an expensive task from both economical and time viewpoints. This thesis presents a novel approach for efficient Computer Assisted Transcription (CAT) of handwritten text documents using state-of-the-art Handwriting Text Recognition (HTR) systems. The objective of CAT approaches is to efficiently complete a transcription task through human-machine collaboration, as the effort required to generate a manual transcription is high, and automatically generated transcriptions from state-of-the-art systems still do not reach the accuracy required. This thesis is centred on a special application of CAT, that is, the transcription of old text documents when the quantity of user effort available is limited and, thus, the entire document cannot be revised. In this approach, the objective is to generate the best possible transcription by means of the user effort available. This thesis provides a comprehensive view of the CAT process, from feature extraction to user interaction.

    First, a statistical approach to generalise interactive transcription is proposed. As its direct application is unfeasible, some assumptions are made to apply it to two different tasks: first, the interactive transcription of handwritten text documents, and next, the interactive detection of the document layout.

    Next, the digitisation and annotation process of two real old text documents is described. This process was carried out because of the scarcity of similar resources and the need for annotated data to thoroughly test all the tools and techniques developed in this thesis. These two documents were carefully selected to represent the general difficulties that are encountered when dealing with HTR. Baseline results are presented on these two documents to establish a benchmark with a standard HTR system. Finally, these annotated documents were made freely available to the community. It must be noted that all the techniques and methods developed in this thesis have been assessed on these two real old text documents.

    Then, a CAT approach for HTR when user effort is limited is studied and extensively tested. The ultimate goal of applying CAT is achieved by putting together three processes. Given a recognised transcription from an HTR system, the first process consists in locating (possibly) incorrect words and employs the user effort available to supervise them (if necessary). As most words are not expected to be supervised due to the limited user effort available, only a few are selected to be revised. The system presents to the user a small subset of these words according to an estimation of their correctness, or, to be more precise, according to their confidence level. The second process starts once these low-confidence words have been supervised. This process updates the recognition of the document taking user corrections into consideration, which improves the quality of those words that were not revised by the user. Finally, the last process adapts the system from the partially revised (and possibly not perfect) transcription obtained so far. In this adaptation, the system intelligently selects the correct words of the transcription. As a result, the adapted system will better recognise future transcriptions. Transcription experiments using this CAT approach show that it is most effective when user effort is low.

    The last contribution of this thesis is a method for balancing the final transcription quality and the supervision effort applied using the previously described CAT approach. In other words, this method allows the user to control the amount of errors in the transcriptions obtained from a CAT approach. The motivation of this method is to let users decide on the final quality of the desired documents, as partially erroneous transcriptions can be sufficient to convey the meaning, and the user effort required to produce them might be significantly lower when compared to obtaining a totally manual transcription. Consequently, the system estimates the minimum user effort required to reach the amount of error defined by the user. Error estimation is performed by computing separately the error produced by each recognised word and thus asking the user to revise only the ones in which most errors occur.

    Additionally, an interactive prototype is presented, which integrates most of the interactive techniques presented in this thesis. This prototype has been developed to be used by palaeographic experts, who do not have any background in HTR technologies. After slight fine-tuning by an HTR expert, the prototype lets transcribers manually annotate the document or employ the CAT approach presented. All automatic operations, such as recognition, are performed in the background, detaching the transcriber from the details of the system. The prototype was assessed by an expert transcriber and shown to be adequate and efficient for its purpose. The prototype is freely available under a GNU Public Licence (GPL).

    Serrano Martínez-Santos, N. (2014). Interactive Transcription of Old Text Documents [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37979
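    The first of the three processes described in this abstract spends a limited supervision budget on the least reliable words. The following minimal sketch illustrates that selection step under stated assumptions; it is not the thesis's implementation, and the words and confidence scores are invented.

        # Given per-word confidence scores from an HTR system and a fixed
        # supervision budget, pick the lowest-confidence words for the
        # user to revise; the rest stay unsupervised.
        def select_for_supervision(words, confidences, budget):
            ranked = sorted(range(len(words)), key=lambda i: confidences[i])
            return [words[i] for i in ranked[:budget]]

        hypothesis = ["tho", "quick", "brovvn", "fox"]
        scores     = [0.42, 0.97, 0.31, 0.95]
        print(select_for_supervision(hypothesis, scores, budget=2))
        # ['brovvn', 'tho'] -> the two least-confident words get revised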

    #MPLP: a Comparison of Domain Novice and Expert User-generated Tags in a Minimally Processed Digital Archive

    The high costs of creating and maintaining digital archives have precluded many archives from providing users with digital content or increasing the amount of digitized materials. Studies have shown users increasingly demand immediate online access to archival materials with detailed descriptions (access points). The adoption of minimal processing in digital archives limits the access points to the folder or series level rather than the item-level descriptions users desire. User-generated content, such as tags, could supplement the minimally processed metadata, though users are reluctant to trust or use unmediated tags. This dissertation project explores the potential for controlling/mediating the supplemental metadata from user-generated tags through inclusion of only expert domain user-generated tags. The study was designed to answer three research questions with two parts each:

    1(a) What are the similarities and differences between tags generated by expert and novice users in a minimally processed digital archive?
    1(b) Are there differences between expert and novice users' opinions of the tagging experience and tag creation considerations?
    2(a) In what ways do tags generated by expert and/or novice users in a minimally processed collection correspond with metadata in a traditionally processed digital archive?
    2(b) Does user knowledge affect the proportion of tags matching unselected metadata in a minimally processed digital archive?
    3(a) In what ways do tags generated by expert and/or novice users in a minimally processed collection correspond with existing users' search terms in a digital archive?
    3(b) Does user knowledge affect the proportion of tags matching query terms in a minimally processed digital archive?

    The dissertation project was a mixed-methods, quasi-experimental design focused on tag generation within a sample minimally processed digital archive. The study used a sample collection of fifteen documents and fifteen photographs. Sixty participants, divided into two groups (novices and experts) based on assessed prior knowledge of the sample collection's domain, generated tags for the thirty objects (a minimum of one tag per object). Participants completed a pre-questionnaire identifying prior knowledge and use of social tagging and archives. Additionally, participants provided their opinions regarding factors associated with tagging, including the tagging experience and considerations while creating tags, through structured and open-ended questions in a post-questionnaire. An open-coding analysis of the created tags developed a coding scheme of six major categories and six subcategories. Application of the coding scheme categorized all generated tags. Descriptive statistics summarized the number of tags created by each domain group (expert, novice) for all objects and by format (photograph, document). T-tests and chi-square tests explored the associations (and associative strengths) between domain knowledge and the number or types of tags created, for all objects and by format. The subsequent analysis compared the tags with the metadata from the existing collection not displayed within the sample collection participants used. Descriptive statistics summarized the proportion of tags matching unselected metadata, and chi-square tests analyzed the findings for associations with domain knowledge. Finally, the author extracted existing users' query terms from one month of server-log data and compared them with the generated tags and unselected metadata. Descriptive statistics summarized the proportion of tags and unselected metadata matching query terms, and chi-square tests analyzed the findings for associations with domain knowledge. Based on the findings, the author discusses the theoretical and practical implications of including social tags within a minimally processed digital archive.
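    For a concrete sense of the chi-square analyses the study reports, here is a minimal sketch using scipy; the contingency counts are invented, and the tag categories are placeholders rather than the study's actual coding scheme.

        # Does domain knowledge (expert vs. novice) associate with the
        # type of tag created? Counts below are illustrative only.
        from scipy.stats import chi2_contingency

        #                 descriptive  subjective
        contingency = [[120, 40],    # experts
                       [ 95, 75]]    # novices

        chi2, p, dof, expected = chi2_contingency(contingency)
        print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")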

    Uncovering ED: A Qualitative Analysis of Personal Blogs Managed by Individuals with Eating Disorders

    Previous studies have investigated the potential harmful effects of pro-eating disorder (ED) websites. Other websites, such as personal blogs, may also contain eating disorder content that holds important information and must be considered. Fifteen blogs hosted on the site “Tumblr” were qualitatively analyzed. Each blog owner was anonymous and all were female. Ten main themes were extracted using grounded theory: interaction, negative self-worth, mind and body disturbances, pictures, eating disorders, suicide, diet, exercise, stats, and recovery. Additional themes also appeared in the study. Results indicate that although each individual blog is unique to its owner, common concepts existed among the majority. The implications of the information in the ED blogs and directions for future research are discussed.

    The Transformation of Historical Research in the Digital Age

    Almost all aspects of the historian's research workflow have been transformed by digital technology. The Transformation of Historical Research in the Digital Age equips historians to be self-conscious practitioners by making these shifts explicit and exploring their long-term impact. This title is also available as Open Access on Cambridge Core.

    Making large art historical photo archives searchable

    In recent years, museums, archives and other cultural institutions have initiated important programs to digitize their collections. Millions of artefacts (paintings, engravings, drawings, ancient photographs) are now represented in digital photographic format. Furthermore, through progress in standardization, a growing portion of these images are now available online in an easily accessible manner. This thesis studies how such large-scale art history collections can be made searchable using new deep learning approaches for processing and comparing images. It takes as a case study the processing of the photo archive of the Foundation Giorgio Cini, where more than 300'000 images have been digitized. We demonstrate how a generic processing pipeline can reliably extract the visual and textual content of scanned images, opening up ways to efficiently digitize large photo collections. Then, by leveraging an annotated graph of visual connections, a metric is learnt that allows clustering and searching through artwork reproductions independently of their medium, effectively solving the difficult problem of cross-domain image search. Finally, the thesis studies how a complex Web interface allows users to perform different searches based on this metric. We also evaluate the process by which users can annotate elements of interest during their navigation, to be added to the database, allowing the system to be trained further and give better results. By documenting a complete approach for going from a physical photo archive to a state-of-the-art navigation system, this thesis paves the way for a global search engine across the world's photo archives.
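    A minimal sketch of the retrieval step this describes, assuming each reproduction has already been mapped to a feature vector by the learnt metric (the vectors and dimensions below are stand-ins, not the thesis's actual embedding):

        # Rank a photo collection by cosine similarity to a query
        # embedding; in the thesis the embedding is learnt from an
        # annotated graph of visual connections.
        import numpy as np

        def nearest_artworks(query_vec, collection_vecs, k=5):
            q = query_vec / np.linalg.norm(query_vec)
            c = collection_vecs / np.linalg.norm(
                collection_vecs, axis=1, keepdims=True)
            return np.argsort(-(c @ q))[:k]  # indices of k closest images

        collection = np.random.rand(1_000, 128)  # one vector per image
        query = np.random.rand(128)
        print(nearest_artworks(query, collection))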

    Students' language in computer-assisted tutoring of mathematical proofs

    Truth and proof are central to mathematics. Proving (or disproving) seemingly simple statements often turns out to be one of the hardest mathematical tasks. Yet, doing proofs is rarely taught in the classroom. Studies on cognitive difficulties in learning to do proofs have shown that pupils and students not only often do not understand or cannot apply basic formal reasoning techniques and do not know how to use formal mathematical language, but, at a far more fundamental level, they also do not understand what it means to prove a statement or even do not see the purpose of proof at all. Since insight into the importance of proof and doing proofs as such cannot be learnt other than by practice, learning support through individualised tutoring is in demand. This volume presents part of an interdisciplinary project, set at the intersection of pedagogical science, artificial intelligence, and (computational) linguistics, which investigated issues involved in provisioning computer-based tutoring of mathematical proofs through dialogue in natural language. The ultimate goal in this context, addressing the above-mentioned need for learning support, is to build intelligent automated tutoring systems for mathematical proofs. The research presented here has focused on the language that students use while interacting with such a system: its linguistic properties and computational modelling. Contributions are made at three levels: first, an analysis of language phenomena found in students' input to a (simulated) proof tutoring system is conducted and the variety of students' verbalisations is quantitatively assessed; second, a general computational processing strategy for informal mathematical language and methods of modelling prominent language phenomena are proposed; and third, the prospects for natural language as an input modality for proof tutoring systems are evaluated based on the collected corpora.