
    An Experimental Digital Library Platform - A Demonstrator Prototype for the DigLib Project at SICS

    Within the framework of the Digital Library project at SICS, this thesis describes the implementation of a demonstrator prototype of a digital library (DigLib): an experimental platform integrating several functions in one common interface. It includes descriptions of the structure and formats of the digital library collection, the tailoring of the search engine Dienst, the construction of a keyword extraction tool, and the design and development of the interface. The platform was realised through sicsDAIS, an agent interaction and presentation system, and is to be used for testing and evaluating various tools for information seeking. The platform supports various user interaction strategies by providing: search in bibliographic records (Dienst); an index of keywords (the Keyword Extraction Function, KEF); and browsing through the hierarchical structure of the collection. KEF was developed for this thesis work and extracts and presents keywords from Swedish documents. Although based on a comparatively simple algorithm, KEF fills a long-standing need in the area of Information Retrieval. Evaluations of the tasks and the interface remain to be done, but the digital library is up and running. By implementing the platform through sicsDAIS, DigLib can deploy additional tools and search engines without interfering with already running modules. If desired, agents providing services beyond those SICS can supply can be plugged in.
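    (Illustrative sketch, not from the thesis.) The abstract describes KEF only as "a comparatively simple algorithm" for extracting keywords from Swedish documents. A minimal frequency-based approach in Python might look like the following; the stop-word list and the function name extract_keywords are assumptions for illustration only.

```python
# Hedged sketch: assumes a simple frequency-based keyword extractor with a stop-word
# filter, since the thesis abstract does not specify KEF's actual algorithm.
import re
from collections import Counter

# Hypothetical minimal Swedish stop-word list (not taken from the thesis).
SWEDISH_STOPWORDS = {"och", "att", "det", "som", "en", "på", "är", "av", "för", "med", "till", "i"}

def extract_keywords(text: str, top_n: int = 10) -> list[str]:
    """Return the most frequent non-stop-word tokens as keyword candidates."""
    tokens = re.findall(r"[a-zåäö]+", text.lower())
    counts = Counter(t for t in tokens if t not in SWEDISH_STOPWORDS and len(t) > 2)
    return [word for word, _ in counts.most_common(top_n)]

if __name__ == "__main__":
    sample = "Det digitala biblioteket samlar dokument och verktyg för informationssökning."
    print(extract_keywords(sample))
```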

    Applying semantic web technologies to knowledge sharing in aerospace engineering

    This paper details an integrated methodology to optimise knowledge reuse and sharing, illustrated with a use case in the aeronautics domain. It uses ontologies as a central modelling strategy for the capture of knowledge from legacy documents via automated means, or directly in systems interfacing with knowledge workers via user-defined, web-based forms. The domain ontologies used for knowledge capture also guide the retrieval of the knowledge extracted from the data, using a semantic search system that supports multiple modalities during search. This approach has been applied and evaluated successfully within the aerospace domain, and is currently being extended for use in other domains on an increasingly large scale.
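    (Illustrative sketch, not from the paper.) Ontology-guided retrieval of captured knowledge could be approximated with an RDF store and a SPARQL query; the vocabulary below (ex:Component, ex:describedBy) and the input file aerospace_knowledge.ttl are invented placeholders, not the project's actual ontology or system.

```python
# Hedged sketch of ontology-guided retrieval using rdflib; all names are placeholders.
from rdflib import Graph

g = Graph()
g.parse("aerospace_knowledge.ttl", format="turtle")  # assumed output of a knowledge-capture step

# Find documents that describe any instance of a (hypothetical) component class.
query = """
PREFIX ex: <http://example.org/aero#>
SELECT DISTINCT ?doc WHERE {
    ?part a ex:Component ;
          ex:describedBy ?doc .
}
"""
for row in g.query(query):
    print(row.doc)
```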

    Book selection behavior in the physical library: implications for ebook collections

    Little is known about how readers select books, whether they be print books or ebooks. In this paper we present a study of how people select physical books from academic library shelves. We use the insights gained into book selection behavior to make suggestions for the design of ebook-based digital libraries, in order to better facilitate book selection.

    Evaluation of the Food and Nutrition Guidelines series

    The Food and Nutrition Guidelines Series includes five population-specific documents that provide evidence-based technical information on food, nutrition and physical activity for health practitioners. This report describes the independent evaluation of the Guidelines undertaken in 2011. The evaluation mainly involved key informant and stakeholder interviews and an electronic survey of health practitioners, who are the target audience for the Guidelines. The evaluation findings indicate that the Guidelines are valued highly by the broad range of health practitioners who use them and are seen by many as essential to safe practice for all practitioners who provide advice or education in nutrition. Evaluation participants were unanimous in their view that the Guidelines need to be retained, albeit in a form that is more accessible to a wider range of health practitioners and others, and updated more frequently.

    A Study of Techniques and Challenges in Text Recognition Systems

    Text recognition is a core component of Natural Language Processing (NLP) and digitization. Text recognition systems are critical in bridging the gaps in digitization produced by non-editable documents, and they contribute to finance, health care, machine translation, digital libraries, and a variety of other fields. In addition, as a result of the pandemic, the amount of digital information in the education sector has increased, necessitating the deployment of text recognition systems to deal with it. Text recognition systems work on three categories of text: (a) machine-printed, (b) offline handwritten, and (c) online handwritten texts. The major goal of this research is to examine the process of typewritten text recognition systems. The availability of historical documents and other traditional materials in many types of texts is another major challenge. Although this research examines a variety of languages, the Gurmukhi language receives the most focus. This paper presents an analysis of all prior text recognition algorithms for the Gurmukhi language. In addition, work on degraded texts in various languages is evaluated in terms of accuracy and F-measure.
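    Since the surveyed work on degraded texts is compared in terms of accuracy and F-measure, the standard definitions are worth recalling. The sketch below computes precision, recall, and F-measure from raw counts; the example numbers are placeholders, not results from the survey.

```python
# Standard precision/recall/F-measure definitions; counts below are placeholder values.
def precision_recall_f1(true_pos: int, false_pos: int, false_neg: int) -> tuple[float, float, float]:
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# e.g. characters correctly recognised vs. spurious vs. missed
print(precision_recall_f1(90, 10, 15))
```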

    Information Preserving Processing of Noisy Handwritten Document Images

    Many pre-processing techniques that normalize artifacts and clean noise induce anomalies due to discretization of the document image. Important information that could be used at later stages may be lost. A proposed composite-model framework takes into account pre-printed information, user-added data, and digitization characteristics. Its benefits are demonstrated by experiments with statistically significant results. Separating pre-printed ruling lines from user-added handwriting shows how ruling lines impact people's handwriting and how they can be exploited for identifying writers. Ruling line detection based on multi-line linear regression reduces the mean error of counting them from 0.10 to 0.03, 6.70 to 0.06, and 0.13 to 0.02, compared to an HMM-based approach on three standard test datasets, thereby reducing human correction time by 50%, 83%, and 72% on average. On 61 page images from 16 rule-form templates, the precision and recall of form cell recognition are increased by 2.7% and 3.7%, compared to a cross-matrix approach. Compensating for and exploiting ruling lines during feature extraction rather than pre-processing raises the writer identification accuracy from 61.2% to 67.7% on a 61-writer noisy Arabic dataset. Similarly, counteracting page-wise skew by subtracting it or transforming contours in a continuous coordinate system during feature extraction improves the writer identification accuracy. An implementation study of contour-hinge features reveals that utilizing the full probability distribution function matrix improves the writer identification accuracy from 74.9% to 79.5%.
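    (Illustrative sketch, not the dissertation's method.) The abstract mentions ruling-line detection based on multi-line linear regression; as a rough idea of the underlying operation, the following fits a single ruling line to candidate dark-pixel coordinates by least squares. The function name and the synthetic example are assumptions for illustration.

```python
# Hedged sketch: least-squares fit of one ruling line; the actual multi-line regression
# model in the dissertation is more elaborate.
import numpy as np

def fit_ruling_line(points: np.ndarray) -> tuple[float, float]:
    """points: (N, 2) array of (x, y) pixel coordinates believed to lie on one ruling line.
    Returns slope and intercept of the least-squares line y = slope * x + intercept."""
    x, y = points[:, 0], points[:, 1]
    slope, intercept = np.polyfit(x, y, deg=1)
    return float(slope), float(intercept)

# Hypothetical example: a nearly horizontal ruling line with slight skew and noise.
xs = np.arange(0, 500, 25)
ys = 0.02 * xs + 120 + np.random.normal(0, 0.5, size=xs.shape)
print(fit_ruling_line(np.column_stack([xs, ys])))
```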

    Reverse-engineering of architectural buildings based on a hybrid modeling approach

    We thank the MENSI and REALVIZ companies for their helpful comments and the following people for providing us with images from their work: Francesca De Domenico (Fig. 1) and Kyung-Tae Kim (Fig. 9). The CMN (French national center of patrimony buildings) is also acknowledged for the opportunity to demonstrate our approach on the Hotel de Sully in Paris. We thank Tudor Driscu for his help with the English translation. This article presents a set of theoretical reflections and technical demonstrations that constitute a new methodological base for architectural surveying and representation using computer graphics techniques. The problem we treat relates to three distinct concerns: the surveying of architectural objects, the construction and semantic enrichment of their geometrical models, and their handling for the extraction of dimensional information. A hybrid approach to 3D reconstruction is described. This new approach combines range-based and image-based modeling techniques and integrates the concept of architectural feature-based modeling. To develop this concept, a first process of extraction and formalization of architectural knowledge, based on the analysis of architectural treatises, is carried out. Then, the identified features are used to produce a template shape library. Finally, the problem of the overall model structure and organization is addressed.

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Multimedia information technology and the annotation of video

    The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital processing power can be expected to cause a small revolution in the area of video archiving. The volume of data leads to two views of the future: on the pessimistic side, the overload of data will outstrip annotation capacity; on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we make an attempt to describe the state of the art in technology. We sample the progress in text, sound, and image processing, as well as in machine learning.

    Extraction of Scores and Average From Algerian High-School Degree Transcripts

    A system for extracting the scores and the average from Algerian High School Degree Transcripts is proposed. The system extracts the scores and the average based on the localization of the tables that gather this information, and it consists of several stages. After preprocessing, the system locates the tables using ruling-line information as well as other text information. The adopted localization approach can therefore work even in the absence of certain ruling lines or when lines are erased or discontinuous. The localized tables are then segmented into columns and the columns into information cells. Finally, cell labeling is done based on prior knowledge of the table structure, allowing the scores and the average to be identified. Experiments have been conducted on a local dataset in order to evaluate the performance of our system and compare it with three public systems at three levels, and the obtained results show the effectiveness of our system.
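    (Illustrative sketch, not the proposed system.) The final cell-labeling stage relies on prior knowledge of the table structure; assuming the cells have already been segmented and OCRed, a simple heuristic labeler might look like this. The function name and the heuristic (the last numeric cell is taken as the average) are assumptions for illustration.

```python
# Hedged sketch: labels OCRed table cells as subject scores or the overall average
# using a prior-knowledge heuristic; the real system operates on transcript images.
import re

def label_cells(cell_texts: list[str]) -> dict[str, list[str]]:
    """Split recognised cell strings into subject scores and the overall average,
    assuming the average appears as the last numeric cell in the table."""
    numeric = [t for t in cell_texts if re.fullmatch(r"\d{1,2}([.,]\d{1,2})?", t.strip())]
    if not numeric:
        return {"scores": [], "average": []}
    return {"scores": numeric[:-1], "average": [numeric[-1]]}

# Hypothetical OCR output for one transcript table block.
print(label_cells(["Mathematics", "15.50", "Physics", "13.25", "Average", "14.38"]))
```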