18 research outputs found

    Layout Analysis for Scanned PDF and Transformation to the Structured PDF Suitable for Vocalization and Navigation

    Information can include text, pictures and signatures that can be scanned into a document format, such as the Portable Document Format (PDF), and easily emailed to recipients around the world. Upon the document's arrival, the receiver can open and view it using a vast array of PDF viewing applications such as Adobe Reader and Apple Preview. Hence, the use of the PDF has today become pervasive. Since a scanned PDF is an image format, it is inaccessible to assistive technologies such as screen readers, so retrieving its information requires Optical Character Recognition (OCR). OCR software processes the scanned PDF file and, through text extraction, generates an editable text document that can then be edited, formatted, searched and indexed, as well as translated or converted to speech. A problem that OCR software does not solve is the accurate regeneration of the full text layout. This paper presents a technology that addresses this issue by closely preserving the original textual layout of the scanned PDF using the open-source document analysis and OCR system OCRopus, based on geometric layout and positioning information. The main issues considered in this research are the preservation of the correct reading order and the representation of common logical structured elements such as section headings, line breaks, paragraphs, captions, sidebars, foot-bars, running headers, embedded images, graphics, tables and mathematical expressions
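The reading-order preservation described above can be sketched with a simple geometric heuristic: assign each OCR text block to a column by its x-coordinate, then sort column-by-column and top-to-bottom. This is only an illustrative sketch under the assumption of a fixed column grid, not the paper's OCRopus-based implementation.

```python
# Illustrative sketch (not the paper's code): ordering OCR text blocks by
# geometric position to preserve reading order on a multi-column page.
# The fixed-column-grid assumption is a simplification for the example.

def reading_order(blocks, page_width, columns=2):
    """Sort (x, y, text) blocks column-by-column, top-to-bottom."""
    col_width = page_width / columns
    def key(block):
        x, y, _ = block
        col = min(int(x // col_width), columns - 1)  # column the block starts in
        return (col, y, x)
    return [text for _, _, text in sorted(blocks, key=key)]

blocks = [
    (320, 40, "Right column, first line"),
    (20, 40, "Left column, first line"),
    (20, 120, "Left column, second line"),
    (320, 120, "Right column, second line"),
]
print(reading_order(blocks, page_width=600))
# → left column top-to-bottom, then right column top-to-bottom
```

A real system would infer the column boundaries from whitespace analysis rather than assume them, but the sort key is the same idea.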

    Practical segmentation methods for logical and geometric layout analysis to improve scanned PDF accessibility for the vision impaired

    The use of electronic documents has rapidly increased in recent decades, and the PDF is one of the most commonly used electronic document formats. A scanned PDF is an image and does not actually contain any text, so this format is not useful for a vision-impaired user who depends on a screen reader to access information. Thus, addressing PDF accessibility through assistive technology has become an important concern. PDF layout analysis provides valuable formatting information that supports PDF component classification, and this classification facilitates tag generation. Accurate tagging produces a searchable and navigable scanned PDF document. This paper describes several practical segmentation methods that are easy to implement and efficient for PDF layout analysis, so that the scanned PDF document can be navigated or searched using assistive technologies
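One classic, easy-to-implement segmentation method of the kind the paper describes is the horizontal projection profile: rows of a binarized page that contain no foreground pixels separate the text lines. The sketch below assumes a toy 0/1 list-of-lists image, not the paper's actual data structures.

```python
# A minimal projection-profile sketch of one practical segmentation idea:
# blank rows in a binarized page mark the boundaries between text lines.
# The 0/1 list-of-lists image encoding is an assumption for the example.

def segment_lines(image):
    """Return (start_row, end_row) spans of rows containing foreground (1) pixels."""
    profile = [sum(row) for row in image]  # horizontal projection profile
    spans, start = [], None
    for i, count in enumerate(profile):
        if count > 0 and start is None:
            start = i                      # a text line begins
        elif count == 0 and start is not None:
            spans.append((start, i - 1))   # a text line ends
            start = None
    if start is not None:                  # line running to the last row
        spans.append((start, len(profile) - 1))
    return spans

page = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],  # text line 1
    [0, 0, 0, 0],
    [1, 1, 0, 0],  # text line 2
    [1, 0, 1, 0],
]
print(segment_lines(page))  # → [(1, 1), (3, 4)]
```

The same profile idea applied to columns instead of rows (the recursive XY-cut) yields block-level geometric segmentation.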

    Deep Learning Methods for Dialogue Act Recognition using Visual Information

    Dialogue act (DA) recognition is an important step of dialogue management and understanding. This task is to automatically assign a label to an utterance (or its part) based on its function in a dialogue (e.g. statement, question, backchannel, etc.). Such utterance-level classification thus helps to model and identify the structure of spontaneous dialogues. Even though DA recognition is usually realized on audio data using an automatic speech recognition engine, dialogues exist also in the form of images (e.g. comic books). This thesis deals with automatic dialogue act recognition from image documents. To the best of our knowledge, this is the first attempt to propose DA recognition approaches using images as an input. For this task, it is necessary to extract the text from the images. Therefore, we employ algorithms from the field of computer vision and image processing such as image thresholding, text segmentation, and optical character recognition (OCR). The main contribution in this field is to design and implement a custom OCR model based on convolutional and recurrent neural networks. We also explore different strategies for training such a model, including synthetic data generation and data augmentation techniques.
We achieve new state-of-the-art OCR results in settings where only little training data is available. Summing up, our contribution also includes an overview of how to create an efficient OCR system with minimal manual annotation costs. We further deal with multilinguality in the DA recognition field. We successfully employ one general model trained on data from all available languages, as well as several models trained on a single language, where cross-linguality is achieved by semantic space transformations. Moreover, we explore transfer learning for DA recognition where only a small number of annotated examples is available. We use word-level and utterance-level features, and our models contain deep neural network architectures, including Transformers. We obtain new state-of-the-art results in the multi- and cross-lingual DA recognition field. For DA recognition from image documents, we propose and implement a novel multimodal model based on convolutional and recurrent neural networks. This model combines text and image inputs: a text part is fed with text tokens from OCR, while the visual part extracts image features that serve as an auxiliary input. Text extracted from dialogues is often erroneous and contains typos or other lexical errors. We show that the multimodal model copes with the erroneous text, and the visual information partially balances this loss of information
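The multimodal idea, combining OCR text tokens with visual features before classification, can be illustrated with a minimal late-fusion sketch. The toy feature extractors below (bag-of-words counts and row-mean intensities) merely stand in for the thesis's recurrent and convolutional networks; the function names and data are invented for the example.

```python
# Hedged late-fusion sketch of the multimodal DA model's input side:
# text features and image features are concatenated into one vector
# that a downstream classifier would consume. Toy extractors only.

def text_features(tokens, vocab):
    """Bag-of-words vector over a fixed vocabulary (stands in for the RNN)."""
    return [sum(1 for t in tokens if t == w) for w in vocab]

def image_features(image):
    """Mean intensity per row (stands in for CNN feature maps)."""
    return [sum(row) / len(row) for row in image]

def fuse(tokens, image, vocab):
    """Concatenate both modalities into one classifier input vector."""
    return text_features(tokens, vocab) + image_features(image)

vocab = ["what", "yes", "ok"]
vec = fuse(["what", "what", "ok"], [[0.25, 0.75], [0.5, 1.0]], vocab)
print(vec)  # → [2, 0, 1, 0.5, 0.75]
```

The point of fusion is exactly the robustness claim above: when OCR garbles a token, the visual component still contributes signal to the fused vector.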

    Adaptive Methods for Robust Document Image Understanding

    A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding represent a stringent necessity. We propose a generic framework for document image understanding systems, usable for practically any document types available in digital form. Following the introduced workflow, we shift our attention to each of the following processing stages in turn: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation and logical layout analysis. We review the state of the art in each area, identify current deficiencies, point out promising directions and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions, putting special focus on generality, computational efficiency and the exploitation of all available sources of information. More specifically, we introduce the following original methods: a fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot metal typeset prints, a theoretically optimal solution for the document binarization problem from the point of view of both computational complexity and threshold selection, a layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front page detection algorithm and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy
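Threshold selection for binarization can be illustrated with Otsu's classic criterion, which picks the gray level maximizing the between-class variance of the resulting foreground/background split. This is a standard textbook method shown only for illustration; the thesis proposes its own, different optimal solution.

```python
# Otsu's threshold selection over a gray-level histogram (illustrative
# sketch; not the thesis's binarization algorithm). For each candidate
# threshold t, pixels <= t form class 0 and pixels > t form class 1.

def otsu_threshold(histogram):
    """Pick the gray level maximizing between-class variance."""
    total = sum(histogram)                                  # pixel count
    total_sum = sum(g * c for g, c in enumerate(histogram)) # sum of gray levels
    best_t, best_var = 0, -1.0
    w0 = cum = 0
    for t in range(len(histogram) - 1):
        w0 += histogram[t]           # weight of class 0 (levels 0..t)
        cum += t * histogram[t]      # gray-level mass of class 0
        w1 = total - w0              # weight of class 1
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum / w0                          # mean of class 0
        m1 = (total_sum - cum) / w1            # mean of class 1
        var = w0 * w1 * (m0 - m1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal 8-level histogram: dark text pixels near level 1, paper near level 6.
hist = [5, 20, 5, 0, 0, 5, 30, 10]
print(otsu_threshold(hist))  # → 2
```

The exhaustive scan is O(levels) after one histogram pass, which is the kind of complexity argument threshold-selection papers in this area build on.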

    Non-Visual Representation of Complex Documents for Use in Digital Talking Books

    Essential written information such as text books, bills, and catalogues needs to be accessible by everyone. However, this access is not always available to vision-impaired people, as they require electronic documents to be available in specific formats. To address the accessibility issues of electronic documents, this research aims to design an affordable, portable, standalone and simple-to-use complete reading system that will convert and describe complex components in electronic documents for print-disabled users

    A deep learning framework for contingent liabilities risk management : predicting Brazilian labor court decisions

    Estimating the likely outcome of a litigation process is crucial for many organizations. A specific application is “Contingent Liabilities,” which refers to liabilities that may or may not occur depending on the result of a pending litigation process (lawsuit). The traditional methodology for estimating this likelihood is based on the lawyer's opinion, drawn from experience as a qualitative appreciation. This dissertation presents a mathematical modeling framework based on a Deep Learning architecture that estimates the probability outcome of a litigation process (accepted or not accepted), with a particular use on Contingent Liabilities. The framework offers a degree of confidence by describing how likely an event is to occur in terms of probability, and provides results in seconds. Besides the primary outcome, it offers a sample of the cases most similar to the estimated lawsuit, which serve as support for litigation strategies. We tested our framework on two litigation process databases from: (1) the European Court of Human Rights (ECHR) and (2) the Brazilian 4th Regional Labor Court. Our framework achieved, to our knowledge, the best published performance (precision = 0.906) on the ECHR database, a widely used collection of litigation processes, and it is the first to be applied to a Brazilian labor court. Results show that the framework is a suitable alternative to the traditional method of lawyers estimating the verdict outcome of a pending litigation. Finally, we validated our results with experts who confirmed the promising possibilities of the framework.
We encourage academics to continue developing research on mathematical modeling in the legal area, as it is an emerging topic with a promising future, and practitioners to use tools such as the one proposed, as they provide substantial advantages in terms of accuracy and speed over conventional methods
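The "most similar cases" support feature described above rests on similarity retrieval. Below is a minimal sketch using bag-of-words vectors and cosine similarity; the framework itself uses learned deep representations, and the vocabulary and example cases here are invented for the illustration.

```python
# Hedged sketch of precedent retrieval by cosine similarity over
# bag-of-words vectors (a stand-in for the framework's learned
# embeddings; vocabulary and cases are toy examples).
import math

def bow(text, vocab):
    """Count occurrences of each vocabulary word in the text."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    """Cosine similarity of two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(query, cases, vocab):
    """Return the stored case most similar to the query lawsuit text."""
    q = bow(query, vocab)
    return max(cases, key=lambda c: cosine(q, bow(c, vocab)))

vocab = ["overtime", "dismissal", "wage", "contract"]
cases = [
    "dismissal without cause and contract termination",
    "unpaid overtime and wage differences",
]
print(most_similar("claim for overtime wage arrears", cases, vocab))
# → "unpaid overtime and wage differences"
```

With embedding vectors in place of `bow`, the same `most_similar` retrieval returns semantically (not just lexically) close precedents.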

    Non-visual representation of complex documents for use in digital talking books

    According to a World Intellectual Property Organization (WIPO) estimation, only 5% of the world's one million print titles that are published every year are accessible to the approximately 340 million blind, visually impaired or print-disabled people. Equal access to information is a basic right of all people. Essential information such as flyers, brochures, event calendars, programs, catalogues and booking information needs to be accessible by everyone. Information helps people to make decisions, be involved in society and live independent lives. Article 21, Section 4.2 of the United Nations' Convention on the Rights of People with Disabilities advocates the right of blind and partially sighted people to take control of their own lives. However, this entitlement is not always available to them without access to information. Today, electronic documents have become pervasive. For vision-impaired people, electronic documents need to be available in specific formats to be accessible. If these formats are not made available, vision-impaired people are greatly disadvantaged when compared to the general population. Therefore, addressing electronic document accessibility for them is an extremely important concern. In order to address the accessibility issues of electronic documents, this research aims to design an affordable, portable, stand-alone and simple to use "Complete Reading System" to provide accessible electronic documents to the vision impaired

    “The Bard meets the Doctor” – Computer-Assisted Identification of Intertextual Shakespeare References in the Science-Fiction Series Dr. Who.

    A single abstract from the DHd-2019 Book of Abstracts. Insofar as any editorial work was carried out on this publication, it consisted of removing hyphens in headings introduced by faulty line-break hyphenation, standardizing author names to the scheme "Surname, First name", and/or separating title and subtitle with a period where necessary

    Digital Classical Philology

    The buzzwords “Information Society” and “Age of Access” suggest that information is now universally accessible without any form of hindrance. Indeed, the German constitution calls for all citizens to have open access to information. Yet in reality, there are multifarious hurdles to information access – whether physical, economic, intellectual, linguistic, political, or technical. Thus, while new methods and practices for making information accessible arise on a daily basis, we are nevertheless confronted by limitations to information access in various domains. This new book series assembles academics and professionals in various fields in order to illuminate the various dimensions of information's inaccessibility. While the series discusses principles and techniques for transcending the hurdles to information access, it also addresses necessary boundaries to accessibility. This book describes the state of the art of digital philology with a focus on ancient Greek and Latin. It addresses problems such as the accessibility of information about Greek and Latin sources, data entry, and the collection and analysis of Classical texts, and describes the fundamental role of libraries in building digital catalogs and developing machine-readable citation systems
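A machine-readable citation system of the kind mentioned above can be illustrated with CTS URNs, the canonical citation scheme widely used in digital Classics (e.g. `urn:cts:greekLit:tlg0012.tlg001:1.1` cites Iliad book 1, line 1). The URN scheme is real; the tiny parser below is an assumption made for this example, not any particular library's API.

```python
# Minimal sketch: splitting a Canonical Text Services (CTS) URN into its
# namespace, work identifier, and optional passage reference. Hypothetical
# helper, shown only to make "machine-readable citation" concrete.

def parse_cts_urn(urn):
    """Split a CTS URN into namespace, work, and optional passage parts."""
    parts = urn.split(":")
    if parts[:2] != ["urn", "cts"] or len(parts) < 4:
        raise ValueError("not a CTS URN: " + urn)
    result = {"namespace": parts[2], "work": parts[3]}
    if len(parts) > 4:
        result["passage"] = parts[4]  # e.g. "1.1" = book 1, line 1
    return result

print(parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001:1.1"))
# → {'namespace': 'greekLit', 'work': 'tlg0012.tlg001', 'passage': '1.1'}
```

Because every component is machine-resolvable, such URNs let digital libraries link catalog records, editions, and individual passages without free-text citation matching.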