11 research outputs found

    Designing a structured lexicon for document image analysis

    Get PDF
    This paper presents a structured, multi-level lexicon architecture that is a central component of our knowledge-based document analysis system. The system's task is to automatically transform incoming business letters into an equivalent electronic representation. Moreover, partial text analysis and understanding of a letter's body and relevant parts are initiated to enrich the conceptual knowledge about the actual document (e.g., by a classification). In such an application domain, a well-designed lexicon has to consider the requirements of both text recognition and text analysis. For that purpose, we propose an appropriate lexicon architecture and the internal structure of the corresponding lexical entries, which are a prerequisite for successful higher-level interpretations of documents.
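    As a rough illustration of such a multi-level design, the sketch below assumes a lexical entry that carries a surface form for the recognizer plus morphosyntactic features and concept links for text analysis, with the lexicon indexed on both levels. All class and field names are hypothetical; the paper does not publish its entry format.

```python
from dataclasses import dataclass, field

@dataclass
class LexicalEntry:
    """One entry of a multi-level lexicon (illustrative field names)."""
    surface: str                                   # word form as seen by the recognizer
    lemma: str                                     # base form used during text analysis
    features: dict = field(default_factory=dict)   # morphosyntactic features (POS, number, ...)
    concepts: list = field(default_factory=list)   # links into the conceptual knowledge base

class Lexicon:
    """Multi-level access: by surface form (recognition) and by concept (analysis)."""
    def __init__(self):
        self._by_surface = {}
        self._by_concept = {}

    def add(self, entry: LexicalEntry):
        self._by_surface.setdefault(entry.surface.lower(), []).append(entry)
        for c in entry.concepts:
            self._by_concept.setdefault(c, []).append(entry)

    def lookup_surface(self, word: str):
        return self._by_surface.get(word.lower(), [])

    def lookup_concept(self, concept: str):
        return self._by_concept.get(concept, [])

# Example: one entry serves both word verification and letter classification.
lex = Lexicon()
lex.add(LexicalEntry("invoice", "invoice", {"pos": "noun"}, ["BUSINESS_LETTER/INVOICE"]))
print(lex.lookup_surface("Invoice")[0].concepts)
```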

    A Knowledge based segmentation algorithm for enhanced recognition of handwritten courtesy amounts

    Get PDF
    "March 1994."Includes bibliographical references (p. [23]-[24]).Supported by the Productivity From Information Technology (PROFIT) Research Initiative at MIT.Karim Hussein ... [et al.

    An Integrated architecture for recognition of totally unconstrained handwritten numerals

    Get PDF
    Reprint. Reprinted from the International Journal of Pattern Recognition and Artificial Intelligence, vol. 7, no. 4 (1993). "January 1993." Includes bibliographical references (p. 127-128). Supported by the Productivity From Information Technology (PROFIT) Research Initiative at MIT. Amar Gupta ... [et al.]

    On virtual partitioning of large dictionaries for contextual post-processing to improve character recognition

    Get PDF
    This paper presents a new approach to the partitioning of large dictionaries by virtual views. The basic idea is that additional knowledge sources from text recognition and text analysis are employed for fast dictionary look-up in order to prune the search space through static or dynamic views. The heart of the system is a redundant hashing technique involving a set of hash functions that deal with noisy input efficiently. Currently, the system is composed of two main components: the dictionary generator and the dictionary controller. While the dictionary generator initially builds the system using profiles and source dictionaries, the controller allows the flexible integration of different search heuristics. Results show that our system achieves a respectable speed-up in dictionary access time.
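    The abstract does not spell out the hash functions, but the redundant-hashing idea can be sketched as follows: several coarse, error-tolerant keys are computed per word, the dictionary generator indexes every word under all of them, and the controller answers a noisy OCR hypothesis with the union of the buckets it hits, which acts as a pruned view of the full dictionary. The specific key functions below (word length, first letter, consonant skeleton) are illustrative assumptions, not the paper's.

```python
from collections import defaultdict

# Three coarse, error-tolerant hash functions (hypothetical choices, not the paper's).
def h_length(w):    return ("len", len(w))
def h_first(w):     return ("first", w[:1].lower())
def h_skeleton(w):  return ("skel", "".join(c for c in w.lower() if c not in "aeiou"))

HASHES = (h_length, h_first, h_skeleton)

class RedundantHashDictionary:
    def __init__(self, words):
        self.buckets = defaultdict(set)
        for w in words:                       # "dictionary generator": build all views once
            for h in HASHES:
                self.buckets[h(w)].add(w)

    def candidates(self, noisy_word):
        """'Dictionary controller': union of the buckets hit by the noisy hypothesis."""
        cands = set()
        for h in HASHES:
            cands |= self.buckets.get(h(noisy_word), set())
        return cands

d = RedundantHashDictionary(["letter", "ledger", "better", "matter", "litre"])
# An OCR confusion 'l' -> 'I' still lands in the length bucket of "letter",
# so the true word stays in the candidate set.
print(d.candidates("Ietter"))
```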

    Online Learning Algorithms for License Plate Detection and Recognition Based on Deformable Part Model

    Get PDF
    License plates are a very important target class in traffic scene images and one of the core objects of intelligent video analysis in the transportation domain. For plate images of good quality, fairly mature and practical detection and recognition algorithms already exist; but for plate images captured under complex backgrounds, loosely constrained viewing angles, and other difficult imaging conditions, how to improve and maintain the performance of plate detection and recognition remains a problem worth further study. Object detection and recognition based on online learning is an important research direction in computer vision. How to let a plate detection and recognition algorithm learn online while it operates, gradually improving its performance and adapting to changes in the imaging environment, is likewise a topic worth deeper investigation. This thesis introduces supervised and semi-supervised online learning mechanisms so that the plate detection and recognition system keeps its performance at a high level during use. It also adopts an object detection approach based on the deformable part model, which has become popular in recent years... Degree: Master of Engineering. Department: School of Information Science and Technology, Department of Computer Science (Computer Application Technology). Student ID: 2302009115274
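    The thesis's deformable-part-model internals are not reproduced here; the sketch below only illustrates the supervised online-update idea with a toy linear scorer over a fixed feature vector, updated perceptron-style from operator-verified detections. The class, the features, and the learning rule are assumptions for illustration.

```python
import numpy as np

class OnlineLinearDetector:
    """Toy stand-in for a detection score: a linear model over a fixed feature vector.
    Illustrates the supervised online-update idea only; the real system uses
    deformable part models and richer features."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def score(self, feats):
        return float(self.w @ feats)

    def update(self, feats, is_plate):
        """Perceptron-style correction on operator-verified samples."""
        y = 1.0 if is_plate else -1.0
        if y * self.score(feats) <= 0:        # misclassified or on the margin
            self.w += self.lr * y * feats

det = OnlineLinearDetector(dim=4)
# Verified detections trickle in while the system is in use.
stream = [(np.array([1.0, 0.2, 0.0, 0.5]), True),
          (np.array([0.1, 0.9, 0.8, 0.0]), False)]
for feats, label in stream:
    det.update(feats, label)
print(det.w)
```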

    Shape Matching by Elastic Deformation

    Get PDF
    Coordinated Science Laboratory was formerly known as Control Systems Laboratory.

    Automatic document classification and extraction system (ADoCES)

    Get PDF
    Document processing is a critical element of office automation. Document image processing begins with the Optical Character Recognition (OCR) phase and continues with complex processing for document classification and extraction. Document classification is a process that classifies an incoming document into a particular predefined document type. Document extraction is a process that extracts information pertinent to the users from the content of a document and assigns the information as the values of the “logical structure” of the document type. Therefore, after document classification and extraction, a paper document is represented in a digital form, called a frame instance, instead of its original image file format. A frame instance is an operable and efficient form that can be processed and manipulated during document filing and retrieval. This dissertation describes a system that supports the complete procedure, which begins with scanning the paper document into the system and ends with the output of an effective digital form of the original document. It is a general-purpose system with “learning” ability and can therefore be adapted easily to many application domains. In this dissertation, the “logical closeness” segmentation method is proposed. A novel representation of document layout structure, the Labeled Directed Weighted Graph (LDWG), and a methodology for transforming a document segmentation into its LDWG representation are described. To find a match between two LDWGs, string representation matching is applied first instead of comparing the graphs directly, which reduces the time necessary to make the comparison. Applying artificial intelligence, the system is able to learn from experience and build samples of LDWGs to represent each document type. In addition, the concept of frame templates is used for representing the document logical structure. The concept of a Document Type Hierarchy (DTH) is also enhanced to express the hierarchical relation among the logical structures of the documents.
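    A minimal sketch of the LDWG idea and the string-matching shortcut, assuming (hypothetically) that nodes are layout-block labels, edges carry spatial-relation weights, and each graph is serialized to a canonical string so two layouts can be compared cheaply before any full graph comparison. The encoding below is illustrative, not the dissertation's.

```python
from difflib import SequenceMatcher

class LDWG:
    """Labeled Directed Weighted Graph of layout blocks (illustrative encoding)."""
    def __init__(self):
        self.edges = []   # (from_label, to_label, weight)

    def add_edge(self, src, dst, weight):
        self.edges.append((src, dst, weight))

    def to_string(self):
        # Canonical string: edges sorted, weights bucketed, so similar layouts
        # yield similar strings that can be compared without graph matching.
        return ";".join(f"{s}>{d}:{round(w)}" for s, d, w in sorted(self.edges))

def layout_similarity(a: LDWG, b: LDWG) -> float:
    """Cheap pre-check via string matching before any full graph comparison."""
    return SequenceMatcher(None, a.to_string(), b.to_string()).ratio()

invoice = LDWG(); invoice.add_edge("header", "address", 1.0); invoice.add_edge("address", "table", 2.0)
sample  = LDWG(); sample.add_edge("header", "address", 1.2); sample.add_edge("address", "table", 2.1)
print(layout_similarity(invoice, sample))   # close to 1.0 -> candidate match for this document type
```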

    Document analysis at DFKI. - Part 1: Image analysis and text recognition

    Get PDF
    Document analysis is responsible for essential progress in office automation. This paper is part of an overview of the combined research efforts in document analysis at the DFKI. Common to all document analysis projects is the global goal of providing a high-level electronic representation of documents in terms of iconic, structural, textual, and semantic information. These symbolic document descriptions enable "intelligent" access to a document database. Currently there are three ongoing document analysis projects at DFKI: INCA, OMEGA, and PASCAL2000/PASCAL+. Though the projects pursue different goals in different application domains, they all share the same problems, which have to be resolved with similar techniques. For that reason the activities in these projects are bundled to avoid redundant work. At DFKI we have divided the problem of document analysis into two main tasks, text recognition and text analysis, which are themselves divided into a set of subtasks. In a series of three research reports the work of the document analysis and office automation department at DFKI is presented. The first report discusses the problem of text recognition, the second that of text analysis. In a third report we describe our concept for a specialized document analysis knowledge representation language. The report at hand describes the activities dealing with the text recognition task. Text recognition covers the phase from capturing a document image up to identifying the written words. This comprises the following subtasks: preprocessing the pictorial information; segmenting it into blocks, lines, words, and characters; classifying characters; and identifying the input words. For each subtask several competing solution algorithms, called specialists or knowledge sources, may exist. To efficiently control and organize these specialists, an intelligent situation-based planning component is necessary, which is also described in this report. It should be mentioned that the planning component is responsible for controlling the overall document analysis system, not only the text recognition phase.
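    A minimal sketch of the specialists-plus-planner idea, assuming each subtask has several interchangeable knowledge sources and the planner selects one according to the current situation (e.g., scan quality). The specialist names and the selection rule are invented for illustration and are not taken from the report.

```python
# Toy registry of competing specialists per subtask, keyed by situation.
def threshold_binarize(img):   return f"binarized({img})"
def adaptive_binarize(img):    return f"adaptive({img})"

SPECIALISTS = {
    "preprocess": {
        "clean_scan": threshold_binarize,
        "noisy_scan": adaptive_binarize,
    },
}

def plan_and_run(task, situation, data):
    """Situation-based planning in miniature: pick the specialist registered
    for the current situation and run it on the data."""
    specialist = SPECIALISTS[task].get(situation)
    if specialist is None:
        raise ValueError(f"no specialist for {task}/{situation}")
    return specialist(data)

print(plan_and_run("preprocess", "noisy_scan", "letter_page_1"))
```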

    Computer-Assisted Content Analysis of Digital Video Archives

    Full text link
    The transition from analog to digital video has led to major changes within film archives in recent years. Digitizing the films in particular opens up new possibilities for the archives. Wear and aging of the film reels are ruled out, so the quality is preserved unchanged. In addition, network-based and therefore much simpler access to the videos in the archives becomes possible. Additional services become available to archivists and users, providing extended search capabilities and easing navigation during playback. Search within the video archives relies on metadata that provide further information about the videos. A large part of the metadata is entered manually by archivists, which is time-consuming and costly. Computer-assisted analysis of a digital video makes it possible to reduce the effort of generating metadata for video archives. The first part of this dissertation presents new methods for recognizing important semantic content of the videos. In particular, newly developed algorithms for shot detection, camera motion analysis, segmentation and classification of objects, text recognition, and face recognition are presented. The automatically derived semantic information is very valuable because it eases working with digital video archives. The information not only supports search in the archives but also leads to new applications, which are presented in the second part of the dissertation. For example, computer-generated video summaries can be produced, or videos can be adapted automatically to the properties of a playback device. A further focus of this dissertation is the analysis of historical films. Four European film archives provided a large number of historical video documentaries, which were shot in the early to mid twentieth century and digitized in recent years. Because the film reels were stored and worn over several decades, many videos are heavily noisy and contain clearly visible image defects. The image quality of the historical black-and-white films differs significantly from that of current videos, so a reliable analysis with existing methods is often not possible. This dissertation presents new algorithms that enable reliable recognition of semantic content in historical videos as well.
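    As an illustration of the shot-detection subtask, the sketch below flags a hard cut where the grayscale-histogram difference between consecutive frames exceeds a threshold; the dissertation's actual algorithms, in particular those made robust to the noise and defects of historical footage, are more involved. The histogram settings and the threshold are assumptions.

```python
import numpy as np

def gray_histogram(frame, bins=64):
    """Normalized grayscale histogram of a frame (H x W array of 0..255 values)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def detect_cuts(frames, threshold=0.5):
    """Flag a hard cut where the histogram difference between consecutive frames
    exceeds the threshold. Noisy historical footage would need adaptive
    thresholds and additional features."""
    cuts = []
    prev = gray_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = gray_histogram(frame)
        if 0.5 * np.abs(cur - prev).sum() > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Two synthetic "shots": dark frames followed by bright frames.
rng = np.random.default_rng(0)
dark   = [rng.integers(0, 60,   (120, 160)) for _ in range(5)]
bright = [rng.integers(180, 255, (120, 160)) for _ in range(5)]
print(detect_cuts(dark + bright))   # expected: [5]
```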