
    Managing access to the internet in public libraries in the UK: the findings of the MAIPLE project

    One of the key purposes of the public library is to provide access to information (UNESCO, 1994). In the UK, information is provided in printed formats and, for the last decade, via public access Internet workstations installed as part of the People’s Network initiative. Recent figures reveal that UK public libraries provide approximately 40,000 computer terminals offering users around 80,000 hours across more than 4,000 service points (CIPFA, 2012). In addition, increasing numbers of public libraries allow users to connect devices such as tablets or smartphones to the Internet via a wireless network access point (Wi-Fi). How do public library staff manage this? What about users viewing harmful or illegal content? And what are the implications for a profession committed to freedom of access to information and opposition to censorship? MAIPLE, a two-year project funded by the Arts and Humanities Research Council, has been investigating this issue, as little was known about how UK public libraries manage Internet content control, including illegal material. MAIPLE has drawn on an extensive review of the literature, an online survey in which all UK public library services were invited to participate (39 per cent response rate) and case studies with five services (two in England, one in Scotland, one in Wales and one in Northern Ireland) to examine the ways these issues are managed and their implications for staff. This presentation will explore the prevalence of tools such as filtering software, Acceptable Use Policies, user authentication, booking software and visual monitoring by staff, and consider their efficacy and desirability in the provision of public Internet access. It will consider the professional dilemmas inherent in managing content and access. Finally, it will highlight some of the more important themes emerging from the findings and their implications for practitioners and policy makers.

    Expressing the tacit knowledge of a digital library system as linked data

    Library organizations have enthusiastically undertaken semantic web initiatives, in particular the publishing of data as linked data. Nevertheless, various surveys report the experimental nature of these initiatives and the difficulty consumers have in re-using the data. These barriers hinder the use of linked datasets as an infrastructure that enhances the library and related information services. This paper presents an approach for encoding, as a Linked Vocabulary, the "tacit" knowledge of the information system that manages the data source. The objective is to improve the process of interpreting the meaning of published linked datasets. As a case study, we analyzed a digital library system in order to prototype the "semantic data management" method, in which data and the knowledge about it are natively managed, taking the linked data pillars into account. The ultimate objective of semantic data management is to support consumers' correct interpretation of data and to facilitate its proper re-use. The prototype defines the ontological entities representing the knowledge of the digital library system that is stored neither in the data source nor in the existing ontologies related to the system's semantics. We therefore present the local ontology and its matching with existing ontologies, Preservation Metadata Implementation Strategies (PREMIS) and Metadata Objects Description Schema (MODS), and we discuss linked data triples prototyped from the legacy relational database using the local ontology. We show how semantic data management can deal with inconsistency in system data, and we conclude that a specific change in the system developer's mindset is necessary for extracting and "codifying" the tacit knowledge needed to improve the data interpretation process.
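
    As a rough illustration of the kind of mapping the paper describes, the sketch below (Python with rdflib) publishes one row of a hypothetical relational table as linked data triples, reusing PREMIS and MODS terms alongside a local ontology namespace. The table layout, the specific PREMIS/MODS term names chosen and the local namespace URI are assumptions for illustration, not the prototype's actual schema.

        # Minimal sketch: expose one record of a legacy relational source as
        # linked data, mixing a local ontology with PREMIS and MODS terms.
        # Table layout, term choices and namespaces below are hypothetical.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, DCTERMS

        PREMIS = Namespace("http://www.loc.gov/premis/rdf/v3/")
        MODSRDF = Namespace("http://www.loc.gov/mods/rdf/v1#")
        LOCAL = Namespace("http://example.org/dl-ontology#")  # assumed local ontology

        # One row as it might come back from the legacy relational database.
        row = {"id": "obj-42", "title": "Annual report 1998", "format": "application/pdf"}

        g = Graph()
        g.bind("premis", PREMIS)
        g.bind("modsrdf", MODSRDF)
        g.bind("local", LOCAL)

        subject = URIRef(f"http://example.org/objects/{row['id']}")
        g.add((subject, RDF.type, PREMIS.IntellectualEntity))
        g.add((subject, MODSRDF.titlePrincipal, Literal(row["title"])))
        g.add((subject, DCTERMS["format"], Literal(row["format"])))
        # "Tacit" system knowledge made explicit through the local ontology.
        g.add((subject, LOCAL.managedBy, Literal("digital library system")))

        print(g.serialize(format="turtle"))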

    Digitization Projects of Documentary Collections in Academic Libraries

    This paper focuses on digitization projects of documentary collections in academic libraries. The aim of the work is to propose an evaluation of digitization projects using a set of parameters deduced from the observation of national and international models. To create this evaluation scheme it was necessary to review the recent national and international academic literature and to compare different case studies. The parameters were devised by considering the whole process of digitization and by taking into consideration a user-centred evaluation. The resulting evaluation scheme was tested on a sample of digitization projects of Italian, European and American academic libraries. With this kind of analysis it was possible to check the validity of the evaluation scheme, to identify strengths and weaknesses within the Italian system and to compare it with the international best practices analyzed.
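
    The paper's own parameters are not listed here; purely as a hypothetical sketch, a parameter-based scheme of this kind could be encoded as weighted criteria scored per project, as below. The parameter names, weights and scoring scale are invented for illustration.

        # Hypothetical sketch of a parameter-based evaluation scheme for
        # digitization projects: weighted criteria, each scored 0-5 per project.
        # Parameter names and weights are illustrative, not the paper's own set.
        PARAMETERS = {
            "selection_criteria": 0.15,
            "image_quality": 0.20,
            "descriptive_metadata": 0.20,
            "long_term_preservation": 0.20,
            "user_interface_and_access": 0.25,
        }

        def evaluate(scores: dict[str, int]) -> float:
            """Return a weighted score in [0, 5] for one digitization project."""
            return sum(PARAMETERS[name] * scores[name] for name in PARAMETERS)

        example_project = {
            "selection_criteria": 4,
            "image_quality": 5,
            "descriptive_metadata": 3,
            "long_term_preservation": 2,
            "user_interface_and_access": 4,
        }
        print(f"Overall score: {evaluate(example_project):.2f} / 5")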

    Generating collaborative systems for digital libraries: A model-driven approach

    This is an open access article shared under a Creative Commons Attribution 3.0 Licence (http://creativecommons.org/licenses/by/3.0/). Copyright © 2010 The Authors. The design and development of a digital library involves different stakeholders, such as information architects, librarians, and domain experts, who need to agree on a common language to describe, discuss, and negotiate the services the library has to offer. To this end, high-level, language-neutral models have to be devised. Metamodeling techniques favor the definition of domain-specific visual languages through which stakeholders can share their views and directly manipulate representations of the domain entities. This paper describes CRADLE (Cooperative-Relational Approach to Digital Library Environments), a metamodel-based framework and visual language for the definition of notions and services related to the development of digital libraries. A collection of tools allows the automatic generation of several services, defined with the CRADLE visual language, and of the graphical user interfaces providing access to them for the end user. The effectiveness of the approach is illustrated by presenting digital libraries generated with CRADLE, while the CRADLE environment has been evaluated using the cognitive dimensions framework.
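
    CRADLE itself is a visual, metamodel-based environment; purely to illustrate the model-driven idea of generating services from a declarative library model, here is a toy sketch in which a collection description drives the generation of a search service. The model structure and names are invented and do not reflect CRADLE's actual metamodel.

        # Toy sketch of the model-driven idea behind frameworks like CRADLE:
        # a declarative model of a digital library collection drives the
        # generation of a concrete service. The model format is invented here.
        from dataclasses import dataclass

        @dataclass
        class CollectionModel:
            name: str
            fields: list[str]      # searchable metadata fields
            services: list[str]    # e.g. ["search", "browse"]

        def generate_search_service(model: CollectionModel):
            """Generate a simple in-memory search function from the model."""
            def search(records: list[dict], query: str) -> list[dict]:
                q = query.lower()
                return [r for r in records
                        if any(q in str(r.get(f, "")).lower() for f in model.fields)]
            return search

        model = CollectionModel(name="Theses", fields=["title", "author"], services=["search"])
        search = generate_search_service(model)
        records = [{"title": "Digital libraries", "author": "Rossi"},
                   {"title": "Metamodeling", "author": "Bianchi"}]
        print(search(records, "digital"))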

    Piattaforme digitali per la pubblicazione di contenuti di ricerca: esperienze, modelli open access, tendenze

    The paper deals with the issue of digital library publishing services developed by academic libraries. Digital publishing platforms and institutional repositories support the open access paradigm, but their relationship with university presses is still controversial and needs to be explored further.

    The Rare Book Collection of the Library of the Italian National Institute of Health: from the past to the present

    The Library of the Istituto Superiore di Sanità (ISS), the Italian National Institute of Health, is the main library for public health and biomedical research in Italy and holds a small but valuable special collection of ancient books. Known as the Rare Book Collection, this holding consists of over 1,200 scientific printed volumes published between the 16th and the 19th century. The purpose of this paper is to illustrate the challenges and the process undertaken by the Library to share and digitize this Collection.

    The archaeological Atlas of Coptic Literature. A question of method

    The PAThs project aims to create an online archaeological atlas of Coptic literature, providing for the very first time a detailed catalogue of ancient books and their archaeological and cultural context, following a multidisciplinary approach and cutting-edge methodologies.

    A set of nine principles for distributed-design information storing

    The issues of distributed working are many, with problems relating to information access and information acquisition being the most common (Crabtree et al., 1997). Keeping track of project and team information is becoming more complex as design is increasingly carried out collaboratively by geographically dispersed design teams across different time zones. The literature notes that little prescription or guidance exists on information management for designers (Culley et al., 1999), and Hicks (2007) highlights a relative lack of overall principles for improving information management. Additionally, evidence from earlier studies by the author into ‘How information is stored in distributed design project work’ reinforces the need for guidance, particularly in a distributed context (Grierson, 2008). Distributed information collections were found to be unorganised, to contain unclear information and to lack context. Storing and sharing distributed information was often time consuming and the tools awkward to use. This can lead to poor project progress and can impact directly on the quality and success of project outcomes (Grierson et al., 2004, 2006). This paper seeks to address these issues by presenting the development, implementation and evaluation of a set of Principles and a Framework to support distributed design information storing in the context of a Global Design class. Through both quantitative and qualitative evaluation methods, the Principles were found to help in a number of ways: with easy access to information; the structuring and organising of information; the creation of an information strategy; making information clear and concise; supporting documentation during project work; and strengthening team work; all helping teams to work towards project outcomes.
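
    The paper's nine Principles are not reproduced in this abstract; as a purely hypothetical sketch, principles in the spirit of "organise information consistently" and "give information context" could be supported by a lightweight check on a shared project store, like the one below. The folder conventions, file-naming rule and store name are invented for illustration.

        # Hypothetical sketch: check a shared distributed-design project store
        # against two invented conventions: every deliverable folder carries a
        # README.txt giving context, and file names carry a date stamp so team
        # members across time zones can track versions.
        import re
        from pathlib import Path

        DATE_STAMP = re.compile(r"\d{4}-\d{2}-\d{2}")

        def check_project_store(root: Path) -> list[str]:
            """Return human-readable warnings for a shared project store."""
            if not root.exists():
                return [f"{root}: project store not found"]
            warnings = []
            for folder in (p for p in root.iterdir() if p.is_dir()):
                if not (folder / "README.txt").exists():
                    warnings.append(f"{folder.name}: no README.txt giving context")
                for f in folder.iterdir():
                    if f.is_file() and f.name != "README.txt" and not DATE_STAMP.search(f.name):
                        warnings.append(f"{folder.name}/{f.name}: file name lacks a date stamp")
            return warnings

        for warning in check_project_store(Path("shared_project")):
            print(warning)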

    Bibliographic Control in the Digital Ecosystem

    With contributions from international experts, the book aims to explore the new boundaries of universal bibliographic control. Bibliographic control is radically changing because the bibliographic universe is radically changing: resources, agents, technologies, standards and practices. Among the main topics addressed: library cooperation networks; legal deposit; national bibliographies; new tools and standards (IFLA LRM, RDA, BIBFRAME); authority control and new alliances (Wikidata, Wikibase, identifiers); new ways of indexing resources (artificial intelligence); institutional repositories; the new book supply chain; “discoverability” in the IIIF digital ecosystem; the role of thesauri and ontologies in the digital ecosystem; and bibliographic control and search engines.
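
    Among the topics listed, authority control through Wikidata lends itself to a small example: the sketch below queries the public Wikidata SPARQL endpoint for an author's VIAF identifier (property P214). The endpoint and property are real; the example author and the exact query shape are simply illustrative, not drawn from the book.

        # Minimal sketch of authority control against Wikidata: look up the VIAF
        # identifier (wdt:P214) for an author by exact English label match.
        import requests

        ENDPOINT = "https://query.wikidata.org/sparql"
        QUERY = """
        SELECT ?item ?viaf WHERE {
          ?item rdfs:label "Umberto Eco"@en ;
                wdt:P214 ?viaf .
        } LIMIT 5
        """

        response = requests.get(
            ENDPOINT,
            params={"query": QUERY, "format": "json"},
            headers={"User-Agent": "authority-control-sketch/0.1"},
            timeout=30,
        )
        for binding in response.json()["results"]["bindings"]:
            print(binding["item"]["value"], binding["viaf"]["value"])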