
    mSpace meets EPrints: a Case Study in Creating Dynamic Digital Collections

    In this case study we look at the issues involved in (a) generating dynamic digital libraries that address a particular topic but span heterogeneous collections at distinct sites, (b) supplementing the artefacts in such a collection with additional information available either from databases at the artefact's home site or from the Web at large, and (c) providing an interaction paradigm that supports effective exploration of this new resource. We describe how we used two available frameworks, mSpace and EPrints, to support this kind of collection building. The result of the study is a set of recommendations to improve the connectivity of remote resources both to one another and to related Web resources, and to reduce problems such as co-referencing, in order to enable the creation of new collections on demand.
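
    The co-referencing problem the study flags can be made concrete with a toy Python sketch: the same creator appears under variant name forms across collections, so a merged collection treats one person as two until a normalisation key collapses the variants. All records and names below are invented for illustration; mSpace and EPrints are not assumed to work this way.

        # Toy illustration of co-referencing: one creator, two name forms,
        # two sites. Records are invented for this sketch.
        import re
        from collections import defaultdict

        records = [
            {"title": "Hypertext study", "creator": "Smith, J. A.", "site": "eprints-a"},
            {"title": "Browser design", "creator": "j.a. smith", "site": "eprints-b"},
            {"title": "Digital libraries", "creator": "Carr, L.", "site": "eprints-a"},
        ]

        def normalise(name):
            """Crude co-reference key: lowercase, strip punctuation, sort parts."""
            parts = re.sub(r"[^\w\s]", " ", name.lower()).split()
            return " ".join(sorted(parts))

        by_creator = defaultdict(list)
        for rec in records:
            by_creator[normalise(rec["creator"])].append(rec["title"])

        for key, titles in by_creator.items():
            print(key, "->", titles)
        # "Smith, J. A." and "j.a. smith" collapse to the same key.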

    HaIRST: Harvesting Institutional Resources in Scotland Testbed. Final Project Report

    The HaIRST project conducted research into the design, implementation and deployment of a pilot service for UK-wide access to autonomously created institutional resources in Scotland, the aim being to investigate and advise on some of the technical, cultural, and organisational requirements associated with the deposit, disclosure, and discovery of institutional resources in the JISC Information Environment. The project involved a consortium of Scottish higher and further education institutions, with significant assistance from the Scottish Library and Information Council.

    The project investigated the use of technologies based on the Open Archives Initiative (OAI), including the implementation of OAI-compatible repositories for metadata which describe and link to institutional digital resources, the use of the OAI Protocol for Metadata Harvesting (OAI-PMH) to automatically copy the metadata from multiple repositories to a central repository, and the creation of a service to search and identify resources described in the central repository. An important aim was to identify issues of metadata interoperability arising from the requirements of individual institutional repositories, and their impact on services based on the aggregation of metadata through harvesting. The project also investigated the use of these technologies for a wide range of resources, including learning, teaching and administrative materials as well as the research and scholarly communication materials considered by many of the other projects in the JISC Focus on Access to Institutional Resources (FAIR) Programme, of which HaIRST was a part.

    The project tested and implemented a number of open source software packages supporting OAI, and succeeded in creating a pilot service which provides effective information retrieval across a range of resources created by the consortium institutions. The pilot service has been extended to cover research and scholarly communication materials produced by other Scottish universities, and administrative materials produced by a non-educational institution in Scotland; it is an effective testbed for further research and development in these areas. The project worked extensively with a new OAI standard for 'static repositories', which offers a low-barrier, low-cost mechanism for participation in OAI-based consortia by smaller institutions with a low volume of resources. The project also identified and successfully tested tools for transforming pre-existing metadata into a format compliant with OAI standards, identified and assessed OAI-related documentation in English from around the world, and produced metadata for retrieving and accessing it. Finally, the project created a Web-based advisory service for institutions and consortia: the OAI Scotland Information Service (OAISIS), which provides links to related standards, guidance and documentation, and discusses the findings of HaIRST relating to interoperability and the pilot harvesting service.

    The project found that open source packages relating to OAI can be installed and made to interoperate to create a viable method of sharing institutional resources within a consortium. HaIRST identified issues affecting the interoperability of shared metadata and suggested ways of resolving them to improve the effectiveness and efficiency of shared information retrieval environments based on OAI. The project also demonstrated that applying OAI technologies to administrative materials is an effective way for institutions to meet obligations under Freedom of Information legislation.
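
    As a concrete illustration of the harvesting pattern described above, the following minimal Python sketch walks an OAI-PMH ListRecords response, following resumption tokens until the set is exhausted. The endpoint URL is a placeholder rather than a real HaIRST service, and error handling is omitted for brevity.

        # Minimal OAI-PMH harvester using only the standard library.
        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        BASE_URL = "https://repository.example.ac.uk/oai"  # hypothetical endpoint
        OAI = "{http://www.openarchives.org/OAI/2.0/}"
        DC = "{http://purl.org/dc/elements/1.1/}"

        def harvest(base_url, metadata_prefix="oai_dc"):
            """Yield (identifier, title) pairs from a ListRecords harvest."""
            params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
            while True:
                url = base_url + "?" + urllib.parse.urlencode(params)
                with urllib.request.urlopen(url) as response:
                    tree = ET.parse(response)
                for record in tree.iter(OAI + "record"):
                    identifier = record.findtext(".//" + OAI + "identifier")
                    title = record.findtext(".//" + DC + "title")
                    yield identifier, title
                # A non-empty resumptionToken means more pages remain.
                token = tree.findtext(".//" + OAI + "resumptionToken")
                if not token:
                    break
                params = {"verb": "ListRecords", "resumptionToken": token}

        for identifier, title in harvest(BASE_URL):
            print(identifier, "-", title)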

    DRIVER Technology Watch Report

    This report is part of the Discovery Workpackage (WP4) and is the third of four deliverables. Its objective is to give an overview of the latest technical developments in the world of digital repositories, digital libraries and beyond, to serve as theoretical and practical input for the technical DRIVER developments, especially those focused on enhanced publications. The report consists of two main parts: one focuses on interoperability standards for enhanced publications; the other comprises three subchapters giving a landscape picture of the current and emerging technologies and communities crucial to DRIVER, namely GRID, CRIS and LTP (long-term preservation). Every chapter contains a theoretical explanation, followed by case studies and the outcomes and opportunities for DRIVER in the field.

    Invest to Save: Report and Recommendations of the NSF-DELOS Working Group on Digital Archiving and Preservation

    Digital archiving and preservation are important areas for research and development, but there is no agreed-upon set of priorities or coherent plan for research in this area. Research projects tend to be small and driven by particular institutional problems or concerns. As a consequence, proposed solutions from experimental projects and prototypes tend not to scale to millions of digital objects, nor do the results from disparate projects readily build on each other. It is also unclear whether it is worthwhile to seek general solutions or whether different strategies are needed for different types of digital objects and collections. The lack of coordination in both research and development means that researchers are reinventing the wheel in some areas while other areas are neglected. Digital archiving and preservation is thus an area that will benefit from an exercise in analysis, priority setting, and planning for future research. The Working Group aims to survey current research activities, identify gaps, and develop a white paper proposing future research directions in digital preservation. Potential areas for research include repository architectures and interoperability among digital archives; automated tools for capture, ingest, and normalization of digital objects; and harmonization of preservation formats and metadata. There may also be opportunities for development of commercial products in the areas of mass storage systems, repositories and repository management systems, and data management software and tools.

    LODE: Linking Digital Humanities Content to the Web of Data

    Numerous digital humanities projects maintain their data collections in the form of text, images, and metadata. While data may be stored in many formats, from plain text to XML to relational databases, the use of the Resource Description Framework (RDF) as a standardized representation has gained considerable traction during the last five years. Almost every digital humanities meeting has at least one session concerned with RDF and linked data. While most existing work in linked data has focused on improving algorithms for entity matching, the aim of the LinkedHumanities project is to build digital humanities tools that work "out of the box," enabling their use by humanities scholars, computer scientists, librarians, and information scientists alike. With this paper, we report on the Linked Open Data Enhancer (LODE) framework developed as part of the LinkedHumanities project. LODE supports non-technical users in enriching a local RDF repository with high-quality data from the Linked Open Data cloud, linking and enhancing the local repository without compromising the quality of its data. In particular, LODE supports the user in the enhancement and linking process by providing intuitive user interfaces and by suggesting high-quality linking candidates using tailored matching algorithms. We hope that the LODE framework will be useful to digital humanities scholars, complementing other digital humanities tools.
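
    The abstract does not spell out LODE's matching algorithms, so the following Python sketch illustrates only the general idea of suggesting linking candidates: rank resources from a Linked Open Data source against a local label by plain string similarity. The URIs and labels are invented examples in DBpedia's style, not output from LODE.

        # Illustrative candidate ranking by string similarity; LODE's
        # tailored matching algorithms are more sophisticated than this.
        from difflib import SequenceMatcher

        local_label = "Johann Sebastian Bach"  # a label from the local RDF repository
        lod_candidates = {
            "http://dbpedia.org/resource/Johann_Sebastian_Bach": "Johann Sebastian Bach",
            "http://dbpedia.org/resource/Carl_Philipp_Emanuel_Bach": "Carl Philipp Emanuel Bach",
            "http://dbpedia.org/resource/Bach_(surname)": "Bach (surname)",
        }

        def rank_candidates(label, candidates, threshold=0.6):
            """Return (uri, score) pairs above the threshold, best match first."""
            scored = ((uri, SequenceMatcher(None, label.lower(), cand.lower()).ratio())
                      for uri, cand in candidates.items())
            return sorted((p for p in scored if p[1] >= threshold),
                          key=lambda p: p[1], reverse=True)

        for uri, score in rank_candidates(local_label, lod_candidates):
            print(f"{score:.2f}  {uri}")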

    The aDORe federation architecture: digital repositories at scale


    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

        Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent. (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, for example, more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together considerable expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed, along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
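
    The ontology mapping task mentioned above can be made concrete with a deliberately naive Python sketch: align classes between two toy ontologies by label similarity. The two ontologies are invented for this example, and AKT's actual mapping technologies are not assumed to work this way; real mapping must also consider structure, synonyms, and conflicts of reference.

        # Naive label-based ontology mapping between two invented ontologies.
        from difflib import SequenceMatcher

        ontology_a = {"a:Person": "Person", "a:Org": "Organisation",
                      "a:Paper": "Academic Paper"}
        ontology_b = {"b:Human": "Person", "b:Institution": "Organization",
                      "b:Article": "Journal Article"}

        def similarity(x, y):
            return SequenceMatcher(None, x.lower(), y.lower()).ratio()

        def map_ontologies(src, dst, threshold=0.7):
            """Pair each source class with its best-matching target class."""
            mappings = []
            for s_id, s_label in src.items():
                t_id, t_label = max(dst.items(),
                                    key=lambda item: similarity(s_label, item[1]))
                score = similarity(s_label, t_label)
                if score >= threshold:
                    mappings.append((s_id, t_id, round(score, 2)))
            return mappings

        print(map_ontologies(ontology_a, ontology_b))
        # [('a:Person', 'b:Human', 1.0), ('a:Org', 'b:Institution', 0.92)]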

    An International Prospectus for Library & Information Professionals: Development, Leadership and Resources for Evolving Patron Needs

    Get PDF
    The roles of library and information professionals must change and evolve to: 1. accommodate the needs of tech-savvy patrons; 2. thrive in the Commons and Library 2.0; 3. provide integrated, just-in-time services; 4. constantly update and enhance technology; 5. design appropriate library spaces for research and productivity; 6. adapt to new models of scholarly communication and publication, especially the Open Archives Initiative and digital repositories; and 7. remain abreast of national and international academic and legislative initiatives affecting the provision of information services and resources. Professionals will need to collaborate in: 1. formal and informal networks, whether regional, national, or international; and 2. library staff development initiatives at regional, national, and international levels. Professionals will need to use libraries as laboratories for the ongoing, lifelong training and education of patrons and of all library staff ('internal patrons'): the library is the framework in which Information Research Literacy is the curriculum. Professionals will need to remain aware of trends and challenges in their regions, the EU, the US and North America, and of models which might provide inspiration and support: 1. Top Technology Trends; 2. new paradigms of professionalism; 3. knowledge creation and knowledge consumption; and 4. the shifting balance of the physical library with the virtual-digital library.