
    Adaptive Algorithms for Automated Processing of Document Images

    Large-scale document digitization projects continue to motivate interesting document understanding technologies such as script and language identification, page classification, segmentation and enhancement. Typically, however, solutions are still limited to narrow domains or regular formats such as books, forms, articles or letters, and operate best on clean documents scanned in a controlled environment. More general collections of heterogeneous documents challenge the basic assumptions of state-of-the-art technology regarding quality, script, content and layout. Our work explores the use of adaptive algorithms for the automated analysis of noisy and complex document collections. We first propose, implement and evaluate an adaptive clutter detection and removal technique for complex binary documents. Our distance-transform-based technique aims to remove irregular and independent unwanted foreground content while leaving text content untouched. The novelty of this approach lies in its determination of the best approximation to the clutter-content boundary in the presence of text-like structures. Second, we describe a page segmentation technique called Voronoi++ for complex layouts which builds upon the state-of-the-art method proposed by Kise [Kise1999]. Our approach does not assume structured text zones and is designed to handle multi-lingual text in both handwritten and printed form. Voronoi++ is a dynamically adaptive and contextually aware approach that combines components' separation features with Docstrum-based [O'Gorman1993] angular and neighborhood features to form provisional zone hypotheses. These provisional zones are then verified based on the context built from local separation and high-level content features. Finally, our research proposes a generic model to segment and recognize characters for any complex syllabic or non-syllabic script, using font models. This concept is based on the fact that font files contain all the information necessary to render text, and thus implicitly a model for how to decompose it. Instead of script-specific routines, this work is a step towards a generic character segmentation and recognition scheme for both Latin and non-Latin scripts.
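    The abstract's core idea for clutter removal is that distance-transform values distinguish thick, blob-like clutter from thin, text-like strokes. The toy sketch below illustrates that idea only; it uses a BFS (city-block) distance transform and a hypothetical `thickness_threshold`, and is a simplified stand-in, not the paper's method.

    ```python
    from collections import deque

    def distance_transform(grid):
        """City-block distance from each pixel to the nearest background
        pixel, via multi-source BFS (a stand-in for the exact transform)."""
        h, w = len(grid), len(grid[0])
        dist = [[None] * w for _ in range(h)]
        q = deque()
        for y in range(h):
            for x in range(w):
                if grid[y][x] == 0:           # background pixels seed the BFS
                    dist[y][x] = 0
                    q.append((y, x))
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                    dist[ny][nx] = dist[y][x] + 1
                    q.append((ny, nx))
        return dist

    def remove_clutter(grid, thickness_threshold=2):
        """Erase connected foreground components whose peak distance value
        (half stroke width) exceeds the threshold; thin strokes survive."""
        h, w = len(grid), len(grid[0])
        dist = distance_transform(grid)
        seen = [[False] * w for _ in range(h)]
        out = [row[:] for row in grid]
        for y in range(h):
            for x in range(w):
                if grid[y][x] == 1 and not seen[y][x]:
                    comp, peak = [], 0        # flood-fill one component
                    q = deque([(y, x)])
                    seen[y][x] = True
                    while q:
                        cy, cx = q.popleft()
                        comp.append((cy, cx))
                        peak = max(peak, dist[cy][cx])
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and grid[ny][nx] == 1 and not seen[ny][nx]):
                                seen[ny][nx] = True
                                q.append((ny, nx))
                    if peak > thickness_threshold:   # thick blob -> clutter
                        for cy, cx in comp:
                            out[cy][cx] = 0
        return out
    ```

    A thin one-pixel stroke has peak distance 1 and is kept, while a solid blob accumulates larger interior distances and is erased; the paper's actual technique additionally models the clutter-content boundary, which this sketch omits.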

    Travel report Mauritania bivalve molluscs October 2008

    During the last four years Mauritania has been working on the completion of a Food Safety Program for bivalve mollusks, in order to obtain export approval from the European Union. During the preparations for an inspection by the FVO (Food and Veterinary Office), no fisheries or production activities for bivalve mollusks occurred in Mauritania. The aim of the mission was to perform a pre-assessment of the status of the bivalve food safety program in Mauritania. The pre-inspection was carried out in order to prepare Mauritania for the upcoming FVO inspection, planned for the second semester of 2008 or the beginning of 2009.

    Exploiting zoning based on approximating splines in cursive script recognition

    Because of its complexity, handwriting recognition has to exploit many sources of information to be successful, e.g., the handwriting zones. Variability of zone-lines, however, requires a more flexible representation than traditional horizontal or linear methods provide. The proposed method therefore employs approximating cubic splines. Using entire lines of text rather than individual words is shown to improve the zoning accuracy, especially for short words. The new method represents an improvement over existing methods in terms of range of applicability, zone-line precision and zoning-classification accuracy. Application to several problems of handwriting recognition is demonstrated and evaluated.
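    The abstract's point is that zone-lines curve, so a straight horizontal line underfits them. As a minimal stand-in for the paper's approximating cubic splines, the sketch below fits a single least-squares cubic to noisy baseline points via the normal equations; the function names and the one-piece cubic model are illustrative assumptions, not the authors' implementation.

    ```python
    def fit_cubic(xs, ys):
        """Least-squares fit of y = c0 + c1*x + c2*x^2 + c3*x^3 using the
        normal equations, solved by Gaussian elimination with pivoting."""
        n = 4
        A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
        b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
        for col in range(n):                      # forward elimination
            piv = max(range(col, n), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, n):
                f = A[r][col] / A[col][col]
                for c in range(col, n):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
        coeffs = [0.0] * n                        # back substitution
        for r in range(n - 1, -1, -1):
            s = sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
            coeffs[r] = (b[r] - s) / A[r][r]
        return coeffs

    def zone_line(xs, ys):
        """Return a callable curved zone-line fitted to (x, y) baseline
        samples, in place of a single horizontal baseline."""
        c = fit_cubic(xs, ys)
        return lambda x: c[0] + c[1] * x + c[2] * x ** 2 + c[3] * x ** 3
    ```

    An approximating spline generalizes this by stitching several such cubics together with smoothness constraints, which lets the zone-line follow longer stretches of text without oscillating.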

    A Survey of Location Prediction on Twitter

    Locations, e.g., countries, states, cities, and points-of-interest, are central to news, emergency events, and people's daily lives. Automatic identification of locations associated with or mentioned in documents has been explored for decades. As one of the most popular online social network platforms, Twitter has attracted a large number of users who send millions of tweets on a daily basis. Due to the world-wide coverage of its users and the real-time freshness of tweets, location prediction on Twitter has gained significant attention in recent years. Research efforts have been devoted to the new challenges and opportunities brought by the noisy, short, and context-rich nature of tweets. In this survey, we aim to offer an overall picture of location prediction on Twitter. Specifically, we concentrate on the prediction of user home locations, tweet locations, and mentioned locations. We first define the three tasks and review the evaluation metrics. By summarizing Twitter network, tweet content, and tweet context as potential inputs, we then structurally highlight how the problems depend on these inputs. Each dependency is illustrated by a comprehensive review of the corresponding strategies adopted in state-of-the-art approaches. In addition, we briefly review two related problems, i.e., semantic location prediction and point-of-interest recommendation. Finally, we list future research directions.
    Comment: Accepted to TKDE. 30 pages, 1 figure
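    One of the input signals the survey covers is tweet content for home-location prediction. As a minimal illustration of that family of approaches, the sketch below trains a multinomial naive Bayes model over word counts with Laplace smoothing; the example tweets and city labels are invented for demonstration and do not come from the survey.

    ```python
    import math
    from collections import Counter, defaultdict

    def train(tweets):
        """tweets: list of (text, city) pairs. Returns per-city word counts
        and city frequencies for a naive Bayes content model."""
        word_counts = defaultdict(Counter)
        city_counts = Counter()
        for text, city in tweets:
            city_counts[city] += 1
            word_counts[city].update(text.lower().split())
        return word_counts, city_counts

    def predict(word_counts, city_counts, text):
        """Most probable home city for a tweet under the trained model."""
        vocab = {w for counts in word_counts.values() for w in counts}
        total = sum(city_counts.values())
        best, best_lp = None, float("-inf")
        for city in city_counts:
            lp = math.log(city_counts[city] / total)          # prior
            denom = sum(word_counts[city].values()) + len(vocab)
            for w in text.lower().split():
                lp += math.log((word_counts[city][w] + 1) / denom)  # Laplace
            if lp > best_lp:
                best, best_lp = city, lp
        return best
    ```

    Real systems surveyed combine such content signals with the Twitter network and tweet context (timestamps, declared profile fields), which this content-only sketch deliberately leaves out.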