
    The Mirror MMDBMS architecture

    Handling large collections of digitized multimedia data, usually referred to as multimedia digital libraries, is a major challenge for information technology. The Mirror DBMS is a research database system developed to better understand the kind of data management that is required in the context of multimedia digital libraries (see also http://www.cs.utwente.nl/~arjen/mmdb.html). Its main features are an integrated approach to both content management and (traditional) structured data management, and the implementation of an extensible object-oriented logical data model on a binary relational physical data model. This work focuses on design for scalability.
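
    The mapping from an object-oriented logical model onto a binary relational physical model can be pictured as decomposed storage: one two-column (oid, value) table per attribute. Below is a minimal sketch of that idea in Python with SQLite; the class and table names are illustrative and not the Mirror DBMS's actual schema.

```python
# Minimal sketch (not the Mirror DBMS itself): an object-oriented logical
# model stored on a binary relational physical model, i.e. one
# (oid, value) table per attribute, as in decomposed storage.
import sqlite3

conn = sqlite3.connect(":memory:")

def define_class(name, attributes):
    """Create one binary (oid, value) table per attribute of a class."""
    for attr in attributes:
        conn.execute(f"CREATE TABLE {name}_{attr} (oid INTEGER, value)")

def insert_object(name, oid, **attrs):
    """Decompose an object into its per-attribute binary tables."""
    for attr, value in attrs.items():
        conn.execute(f"INSERT INTO {name}_{attr} VALUES (?, ?)", (oid, value))

def get_object(name, attributes, oid):
    """Reassemble an object by looking up each binary table on oid."""
    obj = {}
    for attr in attributes:
        row = conn.execute(
            f"SELECT value FROM {name}_{attr} WHERE oid = ?", (oid,)
        ).fetchone()
        obj[attr] = row[0] if row else None
    return obj

attrs = ["title", "duration_s"]
define_class("Video", attrs)
insert_object("Video", 1, title="Lecture 1", duration_s=3600)
print(get_object("Video", attrs, 1))  # {'title': 'Lecture 1', 'duration_s': 3600}
```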

    Pathways: Augmenting interoperability across scholarly repositories

    In the emerging eScience environment, repositories of papers, datasets, software, etc., should be the foundation of a global and natively-digital scholarly communications system. The current infrastructure falls far short of this goal. Cross-repository interoperability must be augmented to support the many workflows and value-chains involved in scholarly communication. This will not be achieved through the promotion of a single repository architecture or content representation, but instead requires an interoperability framework to connect the many heterogeneous systems that will exist. We present a simple data model and service architecture that augments repository interoperability to enable scholarly value-chains to be implemented. We describe an experiment that demonstrates how the proposed infrastructure can be deployed to implement the workflow involved in the creation of an overlay journal over several different repository systems (Fedora, aDORe, DSpace and arXiv). Comment: 18 pages. Accepted for International Journal on Digital Libraries special issue on Digital Libraries and eScience.
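
    As a rough illustration of what such an interoperability data model might look like, the sketch below models repository items as generic surrogates that a downstream service such as an overlay journal can aggregate. All class and field names here are assumptions made for the example, not the paper's actual schema.

```python
# Illustrative sketch: each repository item is exposed as a generic
# "surrogate" that downstream services (e.g. an overlay journal) can
# consume uniformly, regardless of the source repository system.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Datastream:
    media_type: str          # e.g. "application/pdf"
    url: str                 # where the bitstream can be fetched

@dataclass
class Surrogate:
    identifier: str          # globally unique id of the item
    repository: str          # source system: "Fedora", "DSpace", "arXiv", ...
    metadata: dict           # descriptive metadata, format-agnostic
    datastreams: List[Datastream] = field(default_factory=list)

@dataclass
class OverlayJournalIssue:
    """An overlay journal aggregates surrogates from many repositories."""
    title: str
    articles: List[Surrogate] = field(default_factory=list)

issue = OverlayJournalIssue(title="Vol. 1, No. 1")
issue.articles.append(Surrogate(
    identifier="oai:example.org:1234",          # placeholder identifier
    repository="arXiv",
    metadata={"title": "An accepted paper"},
    datastreams=[Datastream("application/pdf", "https://example.org/1234.pdf")],
))
print(len(issue.articles))  # 1
```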

    Extracting, Transforming and Archiving Scientific Data

    It is becoming common to archive research datasets that are not only large but also numerous. In addition, their corresponding metadata and the software required to analyse or display them need to be archived. Yet the manual curation of research data can be difficult and expensive, particularly in very large digital repositories, hence the importance of models and tools for automating digital curation tasks. The automation of these tasks faces three major challenges: (1) research data and data sources are highly heterogeneous, (2) future research needs are difficult to anticipate, and (3) data is hard to index. To address these problems, we propose the Extract, Transform and Archive (ETA) model for managing and mechanizing the curation of research data. Specifically, we propose a scalable strategy for addressing the research-data problem, ranging from the extraction of legacy data to its long-term storage. We review some existing solutions and propose novel avenues of research. Comment: 8 pages, Fourth Workshop on Very Large Digital Libraries, 2011
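
    The ETA model's three stages suggest a straightforward pipeline shape. The following is a minimal sketch under simplifying assumptions (a JSON-lines legacy source and a file-based archive); it illustrates the stage boundaries, not the paper's implementation.

```python
# Minimal Extract-Transform-Archive (ETA) style pipeline sketch; stage
# names follow the model in the abstract, everything else is illustrative.
import json, hashlib, pathlib

def extract(source_path):
    """Pull raw records out of a legacy source (here: a JSON-lines file)."""
    with open(source_path) as f:
        return [json.loads(line) for line in f]

def transform(records):
    """Normalize heterogeneous records into one common metadata schema."""
    return [{"title": r.get("title") or r.get("name", "untitled"),
             "year": r.get("year")} for r in records]

def archive(records, archive_dir):
    """Write each record under a content-derived name for long-term storage."""
    out = pathlib.Path(archive_dir)
    out.mkdir(exist_ok=True)
    for r in records:
        blob = json.dumps(r, sort_keys=True).encode()
        (out / (hashlib.sha256(blob).hexdigest() + ".json")).write_bytes(blob)

# archive(transform(extract("legacy_records.jsonl")), "archive")
```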

    The design and implementation of an infrastructure for multimedia digital libraries

    We develop an infrastructure for managing, indexing and serving multimedia content in digital libraries. This infrastructure follows the model of the Web, and is thereby distributed in nature. We discuss the design of the Librarian, the component that manages metadata about the content. The management of metadata has been separated from the media servers that manage the content itself. Also, the extraction of the metadata is largely independent of the Librarian. We introduce our extensible data model and the daemon paradigm that are the core pieces of this architecture. We evaluate our initial implementation using a relational database. We conclude with a discussion of the lessons we learned in building this system, and proposals for improving the flexibility, reliability, and performance of the system.
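
    The daemon paradigm can be read as independent worker processes that enrich newly ingested content with extracted metadata, decoupled from the Librarian itself. The sketch below illustrates that reading with a thread and a work queue; the names and the toy "feature extraction" are invented for the example.

```python
# Sketch of a metadata-extraction daemon decoupled from the metadata
# store: the daemon watches a queue of new objects and reports features
# back, without the store knowing how extraction works.
import queue, threading

librarian = {}                    # metadata store: object id -> {attr: value}
work = queue.Queue()              # newly ingested objects awaiting extraction

def color_daemon():
    """One extraction daemon: computes a feature and reports it back."""
    while True:
        oid, media = work.get()
        if oid is None:           # sentinel for shutdown
            break
        # stand-in for real feature extraction on the media object
        librarian.setdefault(oid, {})["dominant_color"] = hash(media) % 256
        work.task_done()

t = threading.Thread(target=color_daemon, daemon=True)
t.start()
work.put(("video:1", b"...raw media bytes..."))
work.join()
print(librarian)                  # {'video:1': {'dominant_color': ...}}
```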

    Using the Kano Model and the Quality of the Publishing Function in Evaluating Jordan's Academic Digital Libraries: A Case Study of the University of Jordan

    This study evaluates the digital libraries of public universities in Jordan. It follows a descriptive and analytical approach based on the Kano model for assessing the requirements for the integration of digital libraries, which is called the "Home Model". The University of Jordan library was used as a case study, and the results were then generalized to other public universities for optimal use of digital libraries. A combination of two global models was used to set the Quality Function Deployment criteria; the Kano model is a useful tool for evaluating the quality of services provided by institutions, regardless of their type. Data to feed the publishing-function-quality Kano model was collected through a questionnaire distributed to a sample of users at the University of Jordan, assessing the extent to which the services required from the digital libraries were achieved. Because this type of study relies on listening to the voice of the user, which is the main goal in providing services by institutions, especially libraries, it yields valuable and relevant information on issues that should be improved in order to increase users' satisfaction and meet their needs more completely. DOI: 10.7176/IKM/10-4-01. Publication date: May 31st, 2020
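
    For readers unfamiliar with the Kano model, the standard classification step pairs a "functional" question (how do you feel if the feature is present?) with a "dysfunctional" one (how do you feel if it is absent?) and looks the answer pair up in an evaluation table. The sketch below implements that conventional table, not the study's exact instrument.

```python
# Standard Kano-model classification of one respondent's answer pair.
# Answers on a 5-point scale: 1=Like, 2=Must-be, 3=Neutral,
# 4=Can-live-with, 5=Dislike.

KANO_TABLE = {
    # (functional, dysfunctional) -> category
    (1, 5): "One-dimensional",
    (1, 2): "Attractive", (1, 3): "Attractive", (1, 4): "Attractive",
    (2, 5): "Must-be", (3, 5): "Must-be", (4, 5): "Must-be",
    (1, 1): "Questionable", (5, 5): "Questionable",
    (2, 1): "Reverse", (3, 1): "Reverse", (4, 1): "Reverse",
    (5, 1): "Reverse", (5, 2): "Reverse", (5, 3): "Reverse", (5, 4): "Reverse",
}

def classify(functional, dysfunctional):
    """Return the Kano category; all remaining cells are Indifferent."""
    return KANO_TABLE.get((functional, dysfunctional), "Indifferent")

print(classify(1, 5))  # One-dimensional: satisfaction scales with fulfilment
print(classify(3, 5))  # Must-be: absence causes dissatisfaction
print(classify(1, 3))  # Attractive: delights when present
```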

    A model for enhancing digital transformation through technology‑related continuing professional development activities in academic libraries in context

    This paper is based on the findings of a doctoral study that aimed to examine the role of continuing professional development (CPD) in enhancing digital transformation in selected university libraries in Uganda. One of the ways of effecting digital transformation is to continuously build the technological competencies of the librarians working in academic institutions through attendance of technology-related CPD. The study adopted a mixed methods approach with a convergent parallel design for collecting qualitative and quantitative data from six universities in Uganda. Quantitative data were collected from 76 librarians with a minimum degree-level qualification from the six selected universities. Qualitative data were obtained from six University Librarians working in these universities. The study findings indicated several challenges hindering librarians from participating in technology-related CPD activities, such as lack of management support, lack of personal interest, limited funding, and lack of opportunities, among others. The implementation of digital transformation within university libraries in Uganda was also reported to be beset by a lack of competent staff, limited management support, lack of funds, and technological gaps. Therefore, this paper presents a proposed model to address the challenges hindering digital transformation and participation in technology-related continuing professional development activities within academic libraries. The proposed model is based on the study findings, and it draws from Watkins and Marsick's learning organisation model, andragogy theory, the technology-organisation-environment framework, and extant literature. The model will guide academic libraries in creating a conducive environment for staff development and the implementation of digital transformation.

    Modelling text-fact-integration in digital libraries

    Digital Libraries currently face the challenge of integrating many different types of research information (e.g. publications, primary data, experts' profiles, institutional profiles, project information, etc.) according to their scientific users' needs. To date, no general, integrated model for knowledge organization and retrieval in Digital Libraries exists. This causes the problem of structural and semantic heterogeneity due to the wide range of metadata standards, indexing vocabularies and indexing approaches used for different types of information. The research presented in this paper focuses on areas in which activities are being undertaken in the field of Digital Libraries in order to treat semantic interoperability problems. We present a model for the integrated retrieval of factual and textual data which combines multiple approaches to semantic interoperability and sets them into context. Embedded in the research cycle, traditional content indexing methods for publications meet the newer, but rarely used, ontology-based approaches which seem better suited for representing complex information like the one contained in survey data. The benefits of our model are (1) easy re-use of available knowledge organisation systems and (2) reduced effort for domain modelling with ontologies. (author's abstract)
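
    One way to picture the combination of traditional content indexing with ontology-based approaches is query expansion over a shared concept mapping, so that a textual publication and a factual dataset can both be retrieved for the same information need. The toy vocabulary and records below are invented purely to illustrate that mechanism; they are not the paper's model.

```python
# Illustrative text-fact integration: a SKOS-like mapping links free-text
# query terms to controlled concepts used by both publications and data.
ONTOLOGY = {
    "unemployment": {"labour market", "joblessness"},   # related concepts
    "survey": {"primary data", "microdata"},
}

DOCUMENTS = {
    "pub:1":  "Effects of joblessness on health, a longitudinal study",
    "data:7": "Labour market microdata, waves 1-5",
}

def expand(query_terms):
    """Add ontology concepts so textual and factual records both match."""
    expanded = set(query_terms)
    for term in query_terms:
        expanded |= ONTOLOGY.get(term, set())
    return expanded

def search(query_terms):
    terms = expand(query_terms)
    return [doc_id for doc_id, text in DOCUMENTS.items()
            if any(t in text.lower() for t in terms)]

print(search({"unemployment"}))   # matches both the publication and the dataset
```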

    New H-band Stellar Spectral Libraries for the SDSS-III/APOGEE survey

    The Sloan Digital Sky Survey-III (SDSS-III) Apache Point Observatory Galactic Evolution Experiment (APOGEE) has obtained high-resolution (R ~ 22,500), high signal-to-noise ratio (> 100) spectra in the H-band (~1.5-1.7 μm) for about 146,000 stars in the Milky Way galaxy. We have computed spectral libraries with effective temperature (Teff) ranging from 3500 to 8000 K for the automated chemical analysis of the survey data. The libraries, used to derive stellar parameters and abundances from the APOGEE spectra in the SDSS-III data release 12 (DR12), are based on ATLAS9 model atmospheres and the ASSϵT spectral synthesis code. We present a second set of libraries based on MARCS model atmospheres and the spectral synthesis code Turbospectrum. The ATLAS9/ASSϵT (Teff = 3500-8000 K) and MARCS/Turbospectrum (Teff = 3500-5500 K) grids cover a wide range of metallicity (-2.5 ≤ [M/H] ≤ +0.5 dex), surface gravity (0 ≤ log g ≤ 5 dex), microturbulence (0.5 ≤ ξ ≤ 8 km s^-1), carbon (-1 ≤ [C/M] ≤ +1 dex), nitrogen (-1 ≤ [N/M] ≤ +1 dex), and α-element (-1 ≤ [α/M] ≤ +1 dex) variations, thus having seven dimensions. We compare the ATLAS9/ASSϵT and MARCS/Turbospectrum libraries and apply both of them to the analysis of the observed H-band spectra of the Sun and the K2 giant Arcturus, as well as to a selected sample of well-known giant stars observed at very high resolution. The new APOGEE libraries are publicly available and can be employed for chemical studies in the H-band using other high-resolution spectrographs. Comment: 45 pages, 11 figures; accepted for publication in the Astronomical Journal
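
    To make the idea of a multi-dimensional spectral grid concrete: analysis pipelines typically interpolate between grid nodes to obtain a synthetic spectrum at arbitrary stellar parameters. The sketch below shows multilinear interpolation over a toy 3-D grid (Teff, log g, [M/H]) with random stand-in fluxes; the real libraries span seven dimensions.

```python
# Multilinear interpolation over a parameter grid of synthetic spectra;
# the toy grid and random fluxes are placeholders for a real library.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

teff = np.array([3500.0, 4000.0, 4500.0])        # effective temperature, K
logg = np.array([0.0, 2.5, 5.0])                 # surface gravity, dex
mh   = np.array([-2.5, -1.0, 0.5])               # metallicity [M/H], dex
n_pix = 100                                      # pixels per spectrum

rng = np.random.default_rng(0)
grid = rng.random((len(teff), len(logg), len(mh), n_pix))  # toy flux grid

interp = RegularGridInterpolator((teff, logg, mh), grid)
spectrum = interp(np.array([[4250.0, 1.3, -0.3]]))[0]  # off-node parameters
print(spectrum.shape)                            # (100,)
```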

    Publishing E-resources of Digital Institutional Repository as Linked Open Data: an experimental study

    Linked open data (LOD) is an essential component in semantic web architecture and is becoming increasingly important over time due to its ability to share and re-use structured data, both human- and machine-readable, over the web. Currently, many libraries, archives, museums, etc. are using open source digital library software to manage and preserve their digital collections. They may also intend to publish their e-resources as "Linked Open Datasets" for further usage. LOD enables libraries and information centers to publish and share the structured metadata that is generated and maintained with their own bibliographic and authority data in such a way that other libraries and the general community across the world can consume, interact with, enrich and share it. In this context, the key issue is to convert library bibliographic data, commonly known as metadata, into an LOD dataset. The purpose of this paper is to provide a methodology and the technical aspects needed to design and publish a structured LOD dataset of bibliographic information from a digital repository developed with DSpace digital library software, so that other libraries can link their repositories with this LOD to provide additional relevant resources to their end-users. The paper shows the process of integration and configuration of Apache Jena Fuseki (a tool for a SPARQL endpoint interface) with DSpace for converting metadata into the Resource Description Framework (RDF) triple model and making it available in various RDF formats. It also discusses a model for building an LOD framework to convert and store RDF graphs and triples. Finally, it tests the accessibility of the linked open dataset by querying RDF data through a SPARQL endpoint interface.
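
    As a concrete illustration of the conversion and query steps described above, the sketch below turns one Dublin Core record into RDF triples with rdflib and queries a Fuseki SPARQL endpoint with SPARQLWrapper. The item URI, dataset name, and record values are placeholders, not the paper's actual deployment.

```python
# Convert one Dublin Core record (as DSpace commonly exposes) to RDF,
# then query a Fuseki SPARQL endpoint over the published dataset.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DC

item = URIRef("http://repository.example.org/handle/123456789/1")
record = {"title": "Sample thesis", "creator": "A. Author"}

g = Graph()
g.bind("dc", DC)
g.add((item, DC.title, Literal(record["title"])))
g.add((item, DC.creator, Literal(record["creator"])))
print(g.serialize(format="turtle"))      # one RDF serialization of many

# Querying the published dataset through a SPARQL endpoint interface:
from SPARQLWrapper import SPARQLWrapper, JSON
sparql = SPARQLWrapper("http://localhost:3030/dspace/sparql")  # Fuseki default
sparql.setQuery("""
    PREFIX dc: <http://purl.org/dc/elements/1.1/>
    SELECT ?item ?title WHERE { ?item dc:title ?title } LIMIT 10
""")
sparql.setReturnFormat(JSON)
# results = sparql.query().convert()     # requires a running Fuseki server
```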

    An Introduction to the Open Archives Initiative Object Reuse and Exchange (OAI-ORE)

    The group behind the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) has recently released a beta specification for a new protocol, entitled Open Archives Initiative Object Reuse and Exchange (OAI-ORE). OAI-ORE "defines standards for the description and exchange of aggregations of Web resources," a need commonly faced by digital libraries. This presentation will provide an introduction to the OAI-ORE data model and serializations of OAI-ORE "resource maps" in Atom and RDF. It will also discuss the movement towards data sharing by digital libraries using mechanisms native to the Web rather than library-centric, high-value, low-adoption protocols.
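
    A resource map in the ORE data model describes an aggregation of web resources. The sketch below builds a tiny resource map in RDF with rdflib using the published ORE vocabulary; the URIs are invented for the example, and Atom would be an alternative serialization.

```python
# A minimal OAI-ORE resource map in RDF: a ResourceMap that describes an
# Aggregation, which in turn aggregates several web resources.
from rdflib import Graph, URIRef, Namespace
from rdflib.namespace import RDF

ORE = Namespace("http://www.openarchives.org/ore/terms/")

rem = URIRef("http://example.org/rem/1")           # the resource map
agg = URIRef("http://example.org/aggregation/1")   # the aggregation it describes

g = Graph()
g.bind("ore", ORE)
g.add((rem, RDF.type, ORE.ResourceMap))
g.add((rem, ORE.describes, agg))
g.add((agg, RDF.type, ORE.Aggregation))
# the aggregated web resources, e.g. the parts of a compound digital object
g.add((agg, ORE.aggregates, URIRef("http://example.org/article.pdf")))
g.add((agg, ORE.aggregates, URIRef("http://example.org/dataset.csv")))
print(g.serialize(format="turtle"))                # RDF is one serialization; Atom is another
```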