4,065 research outputs found

    Simplifying resource discovery and access in academic libraries : implementing and evaluating Summon at Huddersfield and Northumbria Universities

    Facilitating information discovery and maximising value for money from library materials are key drivers for academic libraries, which spend substantial sums on journal, database and book purchasing. Users are confused by the complexity of our collections and the multiple platforms needed to access them, and are reluctant to spend time learning about individual resources and how to use them, comparing this unfavourably to popular and intuitive search engines like Google. As a consequence, the library may be seen as too complicated and time-consuming, and many of our most valuable resources remain undiscovered and underused. Federated search tools were the first commercial products to address this problem. They use a single search box to interrogate multiple databases (including library catalogues) and journal platforms. While this went some way towards addressing the problem, many users complained that such tools were still relatively slow, clunky and complicated to use compared to Google or Google Scholar. The emergence of web-scale discovery services in 2009 promised to deal with some of these problems. By harvesting and indexing metadata directly from publishers and local library collections into a single index, they facilitate resource discovery and access to multiple library collections (whether in print or electronic form) via a single search box. Users no longer have to negotiate a number of separate platforms to find different types of information, and because the data is held in a single unified index, searching is fast and easy. In 2009 both Huddersfield and Northumbria Universities purchased Serials Solutions' Summon. This case study report describes the selection, implementation and testing of Summon at both universities, drawing out common themes as well as differences; it offers suggestions for those who intend to implement Summon in the future and some suggestions for future development.
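The core idea of web-scale discovery described above can be sketched in a few lines: metadata from several sources is merged into one inverted index, so a single query interrogates everything at once instead of fanning out to each platform. The records and field names below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical metadata harvested from a library catalogue and a
# publisher feed; the records and fields are illustrative only.
records = [
    {"id": "cat-1", "source": "catalogue", "title": "Climate change and libraries"},
    {"id": "pub-7", "source": "publisher", "title": "Discovery services in libraries"},
    {"id": "pub-9", "source": "publisher", "title": "Climate modelling methods"},
]

def build_index(records):
    """Merge all sources into one inverted index: term -> record ids."""
    index = defaultdict(set)
    for rec in records:
        for term in rec["title"].lower().split():
            index[term].add(rec["id"])
    return index

def search(index, term):
    """A single query hits the unified index, not each source in turn."""
    return sorted(index.get(term.lower(), set()))

index = build_index(records)
print(search(index, "libraries"))  # matches records from both sources
```

A federated search tool would instead issue the query to each source live and merge the responses, which is where the slowness users complained about comes from; the unified index answers locally.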

    Identity Management in Information Age Government: Exploring Concepts, Definitions, Approaches and Solutions

    Our research question is the following: what could be a useful working definition of identity management (IDM) in government at present? (a) What are the conceptualisations, definitions and approaches to IDM in government according to the academic literature? (b) Which e-authentication solutions have been developed in other jurisdictions?

    Integration of Legacy and Heterogeneous Databases


    HBIM as support of preventive conservation actions in heritage Architecture: experience of the Renaissance Quadrant Façade of the Cathedral of Seville

    This paper discusses the generation of Historic Building Information Models (HBIM) for the management of heritage information aimed at the preventive conservation of assets of cultural interest, through its application to a specific case study: the façade of the Renaissance quadrant of the Cathedral of Seville. Two methodological aspects are presented: on the one hand, the process of modeling the solid entities that compose the digital model of the object of study, based on the semi-automatic estimation of the generating surfaces of the main faces; on the other hand, a methodological proposal for the modeling of information on the surface of the model. A series of images and data tables are shown as a result of the application of these methods. These represent the process of introducing information related to the documentation of the current conservation status and recording the treatments included in the preventive conservation works recently carried out by a specialized company. The implementation of the digital model in the work presented validates it as a viable option, provided from the infographic medium, for meeting the need to contain, manage and visualize all the information generated in preventive conservation actions on heritage architecture, facilitating, in turn, cross-cutting relationships between the different analyses that result in a deeper knowledge of this type of building.

    The NASA Astrophysics Data System: Architecture

    The powerful discovery capabilities available in the ADS bibliographic services are possible thanks to the design of a flexible search and retrieval system based on a relational database model. Bibliographic records are stored as a corpus of structured documents containing fielded data and metadata, while discipline-specific knowledge is segregated in a set of files independent of the bibliographic data itself. The creation and management of links to both internal and external resources associated with each bibliography in the database is made possible by representing them as a set of document properties and their attributes. To improve global access to the ADS data holdings, a number of mirror sites have been created by cloning the database contents and software on a variety of hardware and software platforms. The procedures used to create and manage the database and its mirrors have been written as a set of scripts that can be run in either an interactive or unsupervised fashion. The ADS can be accessed at http://adswww.harvard.edu
    Comment: 25 pages, 8 figures, 3 tables
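The record model the abstract describes, fielded bibliographic data plus links represented as document properties with attributes, can be sketched as plain data structures. The field names, property names, and identifier below are assumptions for illustration, not the actual ADS schema.

```python
# Illustrative sketch of a fielded bibliographic record whose external
# links are a set of "properties" with attributes. All names here are
# hypothetical, not the real ADS schema.
record = {
    "bibcode": "2000AJ....000..001X",   # hypothetical identifier
    "fields": {
        "title": "An example paper",
        "authors": ["Doe, J.", "Roe, R."],
        "year": 2000,
    },
    "properties": {
        "ELECTR": {"url": "https://doi.org/10.x/example", "access": "open"},
        "DATA":   {"url": "https://example.org/data", "count": 2},
    },
}

def links(record):
    """Enumerate the external resources from the property set."""
    return {name: attrs["url"] for name, attrs in record["properties"].items()}
```

Keeping links as properties rather than inline fields is what lets the system add or drop resource types without touching the bibliographic corpus itself.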

    Content warehouses

    Nowadays, content management systems are an established technology. Based on experience from several application scenarios, we discuss the points of contact between content management systems and other disciplines of information systems engineering, such as data warehouses, data mining, and data integration. We derive a system architecture called the "content warehouse" that integrates these technologies and defines a more general and more sophisticated view of content management. As an example, a system for the collection, maintenance, and evaluation of biological content, such as survey data or multimedia resources, is presented as a case study.

    TEMPOS: A Platform for Developing Temporal Applications on Top of Object DBMS

    This paper presents TEMPOS: a set of models and languages supporting the manipulation of temporal data on top of object DBMS. The proposed models exploit object-oriented technology to meet some important, yet traditionally neglected, design criteria related to legacy code migration and representation independence. Two complementary ways of accessing temporal data are offered: a query language and a visual browser. The query language, TempOQL, is an extension of OQL supporting the manipulation of histories regardless of their representations, through fully composable functional operators. The visual browser offers operators that facilitate several time-related interactive navigation tasks, such as studying a snapshot of a collection of objects at a given instant, or detecting and examining changes within temporal attributes and relationships. The TEMPOS models and languages have been formalized at both the syntactic and semantic levels and have been implemented on top of an object DBMS. The suitability of the proposals with regard to applications' requirements has been validated through concrete case studies.
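The two navigation tasks the abstract mentions, taking a snapshot at an instant and detecting changes in a temporal attribute, can be sketched with a history represented as a list of intervals. This is an illustrative model only, not TempOQL syntax or the TEMPOS representation.

```python
# A temporal attribute as a "history": (start, end, value) intervals,
# half-open as [start, end). The attribute and values are invented.
salary_history = [
    (2000, 2005, 30000),
    (2005, 2010, 35000),
    (2010, 2020, 42000),
]

def snapshot(history, instant):
    """Value of the attribute at `instant`, or None if undefined then."""
    for start, end, value in history:
        if start <= instant < end:
            return value
    return None

def change_points(history):
    """Instants at which the attribute took a new value."""
    return [start for start, _, _ in history[1:]]
```

Representation independence, as emphasized in the paper, would mean that `snapshot` and `change_points` keep the same interface even if the history were stored differently (e.g. as timestamped events rather than intervals).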

    Virtual Knowledge Graphs: An Overview of Systems and Use Cases

    In this paper, we present the virtual knowledge graph (VKG) paradigm for data integration and access, also known in the literature as ontology-based data access. Instead of structuring the integration layer as a collection of relational tables, the VKG paradigm replaces the rigid structure of tables with the flexibility of graphs that are kept virtual and embed domain knowledge. We explain the main notions of this paradigm, its tooling ecosystem, and significant use cases in a wide range of applications. Finally, we discuss future research directions.
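The "kept virtual" part of the paradigm can be illustrated with a toy mapping: the graph is never materialized as a table of triples; instead, a mapping turns relational rows into triples on demand. The table, predicate names, and IRIs below are invented for illustration and are far simpler than real VKG mapping languages.

```python
import sqlite3

# A relational source standing in for an existing database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO person VALUES (?, ?)", [(1, "Ada"), (2, "Alan")])

def virtual_triples():
    """Apply the mapping lazily: each row yields graph triples on demand,
    so no triple store is ever materialized."""
    for pid, name in conn.execute("SELECT id, name FROM person"):
        yield (f"person/{pid}", "rdf:type", "ex:Person")
        yield (f"person/{pid}", "ex:name", name)

triples = list(virtual_triples())
```

In a real VKG system the mapping runs in the other direction at query time, graph queries are rewritten into SQL over the sources, but the contract is the same: the graph view exists only as the mapping applied to live relational data.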

    Climate Change and Biosphere Response: Unlocking the Collections Vault

    Natural history collections (NHCs) are an important source of the long-term data needed to understand how biota respond to ongoing anthropogenic climate change. These include taxon occurrence data for ecological modeling, as well as information that can be used to reconstruct the mechanisms through which biota respond to changing climates. The full potential of NHCs for climate change research cannot be realized until high-quality data sets are conveniently accessible for research, and this requires that higher priority be placed on digitizing the holdings most useful for climate change research (e.g., whole-biota studies, time series, records of intensively sampled common taxa). Natural history collections must not neglect the proliferation of new information from efforts to understand how present-day ecosystems are responding to environmental change. These new directions require a strategic realignment for many NHC holders to complement their existing focus on taxonomy and systematics. To set these new priorities, we need strong partnerships between NHC holders and global change biologists.

    Database Integration: the Key to Data Interoperability

    Most new databases are no longer built from scratch; instead, they re-use existing data from several autonomous data stores. To facilitate application development, the data to be re-used should preferably be redefined as a virtual database, providing for the logical unification of the underlying data sets. This unification process is called database integration. This chapter provides a global picture of the issues raised and the approaches that have been proposed to tackle the problem.
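The logical unification step can be sketched as a minimal mediator: two autonomous stores with different schemas are exposed through one virtual relation, without copying or restructuring the underlying data. The stores, schemas, and field names below are invented for illustration.

```python
# Two autonomous stores with incompatible schemas (illustrative only).
store_a = [{"isbn": "1", "title": "DB Systems"}]          # source A schema
store_b = [{"book_id": "2", "name": "Data Integration"}]  # source B schema

def unified_books():
    """The virtual database: both schemas are mapped on the fly to one
    unified schema; the sources themselves are left untouched."""
    for rec in store_a:
        yield {"id": rec["isbn"], "title": rec["title"], "source": "A"}
    for rec in store_b:
        yield {"id": rec["book_id"], "title": rec["name"], "source": "B"}

books = list(unified_books())
```

Applications program against the unified schema (`id`, `title`, `source`), so each store remains autonomous and can evolve behind its own mapping.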