
    KIIT Digital Library: An open hypermedia Application

    The massive use of Web technologies has spurred a new revolution in information storage and retrieval. It has long been an open question whether to embed hyperlinks in a document or to store them separately in a link base. Research effort has concentrated on link services that enable hypermedia functionality to be integrated into the general computing environment and allow linking from any tool in the browser or on the desktop. The KIIT digital library is such an application; it focuses on the architecture and protocols of Open Hypermedia Systems (OHS) and provides online document authoring, browsing, cataloguing, searching and updating features. The WWW needs fundamentally new frameworks and concepts to support new search and indexing functionality, driven by the heavy use of digital archives and the need to maintain huge volumes of databases and documents. These digital materials range from electronic versions of books and journals offered by traditional publishers, to manuscripts, photographs, maps, sound recordings and similar materials digitized from libraries' own special collections, to new electronic scholarly and scientific databases developed through the collaboration of researchers, computer and information scientists, and librarians. Metadata in catalogue systems are an indispensable tool for finding information and services in networks. Technological advances provide new opportunities to ease the collection and maintenance of metadata and the use of catalogue systems; the overall objective is to make the best use of such systems. Information systems such as the World Wide Web, digital libraries, inventories of satellite images and other repositories contain more data than ever before, are globally distributed and easy to use, and therefore become accessible to huge, heterogeneous user groups. For the KIIT Digital Library, we have used the Resource Description Framework (RDF) and Dublin Core (DC) standards to incorporate metadata. Overall, the KIIT digital library provides electronic access to information in many different forms, which recent technological advances in the storage and transmission of digital information make possible. This project designs and implements a cataloguing system for the digital library suitable for storing, indexing and retrieving information, and for providing that information across the Internet. The goal is to allow users to quickly search indices to locate segments of interest, and to view and manipulate these segments on their remote computers.
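
    As a concrete illustration of the metadata approach, here is a minimal sketch of how one catalogue record could be expressed with RDF and Dublin Core using the Python rdflib library. The item URI and field values are hypothetical; the abstract does not specify which serialization or vocabulary profile the system actually uses.

        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DC

        # Hypothetical catalogue record for a single digitized item.
        g = Graph()
        g.bind("dc", DC)
        item = URIRef("http://example.org/kiit-dl/item/42")  # placeholder URI

        g.add((item, DC.title, Literal("A Sample Digitized Manuscript")))
        g.add((item, DC.creator, Literal("Unknown scribe")))
        g.add((item, DC.date, Literal("1890")))
        g.add((item, DC.format, Literal("image/tiff")))
        g.add((item, DC.subject, Literal("Special collections")))

        # Serialize for storage in the catalogue or for exchange over the network.
        print(g.serialize(format="turtle"))

    Keeping such records in a separate store, rather than embedding them in the documents, matches the open-hypermedia preference for link bases over embedded links that the abstract describes.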

    Integrating digital document acquisition into a university library : A case study of social and organizational challenges

    In this article we report on the effort of the university library of the Vienna University of Economics and Business Administration to integrate a digital library component for research documents authored at the university into the existing library infrastructure. Setting up a digital library has become a relatively easy task with current database technology and the components and tools freely available. However, integrating such a digital library into existing library systems and adapting existing document acquisition workflows in the organization are non-trivial tasks. We use a research framework to identify the key players in this change process and to analyze their incentive structures. We then describe the lightweight integration approach employed by our university and show how it provides incentives to the key players while requiring only minimal adaptation of the organization in terms of changing existing workflows. Our experience suggests that this lightweight integration offers a cost-efficient and low-risk intermediate step towards switching to exclusively digital document acquisition.

    Accelerating statistical texture analysis with an FPGA-DSP hybrid architecture

    Nowadays, most image processing systems are implemented using either MMX-optimized software libraries or, when timing constraints are tight, expensive high-performance DSP-based boards. In this paper we present a texture analysis co-processor concept that permits the efficient hardware implementation of statistical feature extraction, and a hardware-software codesign that achieves high-performance, low-cost solutions. We propose a hybrid architecture that pairs FPGA chips, for massive data processing, with a digital signal processor (DSP) for floating-point computations. In our preliminary trials with test images, we achieved performance improvements sufficient for a wide range of real-time applications.
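
    To make "statistical feature extraction" concrete, below is a small sketch of gray-level co-occurrence (GLCM) features, a common choice in statistical texture analysis. The abstract does not name the exact features used, so treat this as an assumed example: the integer counting loop is the part that maps well onto FPGA logic, while the floating-point statistics suit the DSP.

        import numpy as np

        def glcm_features(img, levels=8, dx=1, dy=0):
            # Quantize the image to a small number of gray levels.
            q = (img.astype(np.float64) / (img.max() + 1e-9) * (levels - 1)).astype(int)
            h, w = q.shape
            glcm = np.zeros((levels, levels))
            # Integer co-occurrence counting: the FPGA-friendly part.
            for y in range(h - dy):
                for x in range(w - dx):
                    glcm[q[y, x], q[y + dy, x + dx]] += 1
            glcm /= glcm.sum()  # normalize to joint probabilities
            # Floating-point statistics: the DSP-friendly part.
            i, j = np.indices((levels, levels))
            contrast = float(np.sum(glcm * (i - j) ** 2))  # local intensity variation
            energy = float(np.sum(glcm ** 2))              # texture uniformity
            return contrast, energy

        # Example on a random 64x64 8-bit test image.
        print(glcm_features(np.random.randint(0, 256, (64, 64))))

    The split mirrors the proposed architecture: counting co-occurrences is a fixed-point, data-parallel workload, while normalization and the weighted sums need floating point.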

    Indexing, browsing and searching of digital video

    Video is a communications medium that normally brings together moving pictures with a synchronised audio track into a discrete piece or pieces of information. The size of a “piece” of video can variously be referred to as a frame, a shot, a scene, a clip, a programme or an episode, and these are distinguished by their lengths and by their composition. We shall return to the definition of each of these in section 4 of this chapter. In modern society, video is ver…
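
    One illustrative way to model that hierarchy of units in a video index is sketched below. This is purely an assumed data model; the chapter defines the terms but does not prescribe a representation.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Shot:
            start: float        # seconds from programme start
            end: float
            keyframe: int = 0   # representative frame for browsing

        @dataclass
        class Scene:
            shots: List[Shot] = field(default_factory=list)

            def duration(self) -> float:
                return sum(s.end - s.start for s in self.shots)

        @dataclass
        class Programme:
            title: str
            scenes: List[Scene] = field(default_factory=list)

        # A one-scene, two-shot programme, for illustration only.
        prog = Programme("Evening News", [Scene([Shot(0.0, 12.4), Shot(12.4, 30.1)])])
        print(prog.scenes[0].duration())  # ~30.1 seconds

    Indexing over such units lets browsing and searching operate at whichever granularity, frame, shot or scene, best suits the query.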

    Goodbye to all that: Disintermediation, disruption and the diminishing library

    The librarian’s role in collection development is being eroded through disintermediation. A number of factors are contributing to this:
    • With the Big Deals for e-journals, power has shifted considerably in the publishers’ favour, and libraries’ freedom to make collection development decisions has been curtailed. If the trend towards national deals and block payments, seen for instance in the Scottish Higher Education Digital Library (SHEDL), continues, this freedom will be eroded even more; acquisitions decisions are increasingly made at the level of the publisher rather than the title.
    • A notable response to the power of the publishers’ monopoly is the open access movement, which aims to make scholarly literature freely available to all. One route is open access publishing, where typically the author, or their institution or research funder, pays the cost of peer review and publishing. The other route is the deposit of pre- or post-prints of traditionally published materials in the author’s institutional repository or in a subject repository such as arXiv. Again, the librarian makes no decisions on availability in collections.
    • E-book technology has enabled the introduction of so-called ‘patron selection’ or ‘patron-driven acquisition’ (PDA). Suppliers of e-books now offer libraries the opportunity to make available a fund to be spent on new e-book titles as they become popular with library users. PDA is becoming increasingly popular: a recent survey of 250 libraries in the USA showed that ‘32 have PDA programs deployed; 42 planned to have a program deployed within the next year; and an additional 90 plan to deploy a program within the next three years’ (http://www.libraries.wright.edu/noshelfrequired/?p=932). Librarians are able to impose some restrictions, for instance specifying subjects or ranges of titles; otherwise selection is taken out of the hands of librarians and entrusted to users. Initial statistics show the usage of many titles selected by users to be as high as the usage of titles selected by librarians or academics.
    • Google’s massive digitisation programme, although currently under legal threat, is another example.
    In the disintermediated world the librarian’s role is changing. It will in my view become increasingly focused not on externally produced resources, but on creating, developing and maintaining repositories of materials, whether learning objects, research datasets or research outputs, produced in-house in their own institution. Traditionally librarians have sought, through the art of collection development, to obtain the outputs of the world’s scholars and make them available to the scholars of their own institution – an impossible task. However, our role is now being reversed: it will be to collect the outputs of our own institution’s scholars and make them freely available to the world. This task is capable of achievement, and it attains the aim of universal availability of scholarship to scholars. However, it is not collection development as it has been practised down the years in the print world; that art, it can be argued, will no longer be needed in the era of disintermediation.

    Harnessing the cognitive surplus of the nation: new opportunities for libraries in a time of change. The 2012 Jean Arnot Memorial Fellowship Essay.

    This essay is the winner of the 2012 Jean Arnot Memorial Fellowship. The essay draws on Rose Holley's experience of managing innovative library services that engage crowds, such as the Australian Newspapers Digitisation Program and Trove, and on her ongoing research into library, archive and museum crowdsourcing projects. This experience and knowledge have been put into the context of Jean Arnot’s values and visions for Australian libraries. Jean Arnot, the distinguished Australian librarian, described her vision for an innovative library service over sixty years ago. Rose suggests how some of Arnot's goals are now being achieved through use of the internet and digital technologies, and how we can build on these to ensure that libraries remain valued and relevant by harnessing the cognitive surplus of the nation they serve, and by crowdsourcing.

    A bibliographic metadata infrastructure for the twenty-first century

    The current library bibliographic infrastructure was constructed in the early days of computers – before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.

    The Data Framework: A Collaborative Tool for Assessment at the UNLV Libraries

    Keeping track of the data that academic libraries capture is a massive task. The University of Nevada, Las Vegas (UNLV) University Libraries developed a data framework as a tracking tool for data points. This framework is both a data dictionary and a manual that records data-gathering procedures. This ensures that the data is continually gathered and reported in the same way, and that institutional memory of those procedures is preserved regardless of staff turnover. Additionally, the revised Data Framework, and the revision process itself, transformed staff attitudes about data reporting and strengthened the libraries' culture of assessment.
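
    For illustration, a single data-dictionary entry in such a framework might pair the definition of a data point with the procedure for gathering it. All field names and values below are hypothetical; the abstract does not describe the framework's actual schema.

        # One hypothetical entry: definition and procedure are kept together
        # so that reporting stays consistent through staff turnover.
        gate_count = {
            "data_point": "gate_count",
            "definition": "People entering the main library building",
            "unit": "visits per month",
            "source": "entrance-sensor dashboard export",   # assumed source system
            "procedure": (
                "Export the monthly report from the sensor dashboard, "
                "sum both entrances, and record the total in the shared "
                "assessment spreadsheet by the 5th of the following month."
            ),
            "owner": "Assessment Department",
        }
        print(gate_count["definition"], "-", gate_count["unit"])

    Storing the procedure alongside the definition is what makes the framework a manual as well as a dictionary.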

    LIBER's involvement in supporting digital preservation in member libraries

    Digital curation and preservation represent new challenges for universities. LIBER has invested considerable effort to engage with the new agendas of digital preservation and digital curation. Through two successful phases of the LIFE project, LIBER is breaking new ground in identifying innovative models for costing digital curation and preservation. Through LIFE’s input into the US-UK Blue Ribbon Task Force on Sustainable Digital Preservation and Access, LIBER is aligned with major international work in the economics of digital preservation. In its emerging new strategy and structures, LIBER will continue to make substantial contributions in this area, mindful of the needs of European research libraries.