Pathways: Augmenting interoperability across scholarly repositories
In the emerging eScience environment, repositories of papers, datasets,
software, etc., should be the foundation of a global and natively-digital
scholarly communications system. The current infrastructure falls far short of
this goal. Cross-repository interoperability must be augmented to support the
many workflows and value-chains involved in scholarly communication. This will
not be achieved through the promotion of a single repository architecture or
content representation, but instead requires an interoperability framework to
connect the many heterogeneous systems that will exist.
We present a simple data model and service architecture that augments
repository interoperability to enable scholarly value-chains to be implemented.
We describe an experiment that demonstrates how the proposed infrastructure can
be deployed to implement the workflow involved in the creation of an overlay
journal over several different repository systems (Fedora, aDORe, DSpace and
arXiv).

Comment: 18 pages. Accepted for the International Journal on Digital Libraries
special issue on Digital Libraries and eScience.
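The abstract above argues for a lightweight common data model rather than a single repository architecture. As a minimal sketch of that idea (the class and field names here are illustrative assumptions, not taken from the paper), an overlay journal can be modeled as an ordered selection of repository-neutral surrogates, each pointing back to an object that stays in its home repository (Fedora, aDORe, DSpace, arXiv, etc.):

```python
from dataclasses import dataclass, field

@dataclass
class RepositoryItem:
    """A repository-neutral surrogate for one scholarly object."""
    identifier: str          # a globally unique id, e.g. an OAI identifier
    repository: str          # the system that holds the object
    item_type: str           # "paper", "dataset", "software", ...
    metadata: dict = field(default_factory=dict)

@dataclass
class OverlayJournal:
    """An overlay journal: an editorial selection over existing repositories."""
    title: str
    items: list = field(default_factory=list)

    def accept(self, item: RepositoryItem) -> None:
        # Editorial acceptance records only a pointer; the object itself
        # remains in its home repository.
        self.items.append(item)

    def table_of_contents(self) -> list:
        return [(i.identifier, i.repository) for i in self.items]

journal = OverlayJournal(title="Overlay Journal Demo")
journal.accept(RepositoryItem("oai:arXiv.org:cs/0601125", "arXiv", "paper"))
journal.accept(RepositoryItem("oai:dspace.example:123", "DSpace", "paper"))
print(journal.table_of_contents())
```

The point of the sketch is that the overlay layer needs no knowledge of each repository's internal representation, only a shared surrogate format, which is the kind of cross-repository value-chain the abstract describes.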
Sharing Linkable Learning Objects with the use of Metadata and a Taxonomy Assistant for Categorization
In this work, a re-design of the Moodledata module is presented to share
learning objects between e-learning content platforms, e.g., Moodle and
G-Lorep, in a linkable object format. The e-learning course content of the
Drupal-based Content Management System G-Lorep for academic learning is
exchanged by designing an object that incorporates metadata to support reuse
and classification in its context. In such an Artificial Intelligence
environment, the exchange of Linkable Learning Objects can be used for
dialogue between Learning Systems to obtain information, especially with the
use of semantic or structural similarity measures to enhance the existing
Taxonomy Assistant for advanced automated classification.
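The abstract mentions similarity measures driving automated classification but does not spell one out. A minimal sketch of the general technique (this is an illustrative example, not the actual G-Lorep or Taxonomy Assistant code) is to compare a learning object's metadata keywords against each taxonomy category's descriptor terms with cosine similarity and pick the best match:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(keywords: list, taxonomy: dict) -> str:
    """Return the taxonomy category whose descriptor terms best match."""
    obj = Counter(keywords)
    return max(taxonomy, key=lambda cat: cosine(obj, Counter(taxonomy[cat])))

# Hypothetical two-category taxonomy for illustration.
taxonomy = {
    "chemistry": ["molecule", "reaction", "compound"],
    "computer science": ["algorithm", "software", "programming"],
}
print(classify(["software", "algorithm", "course"], taxonomy))
# prints "computer science"
```

A production assistant would use richer structural or semantic measures (e.g. over an ontology), but the shape of the decision, scoring each category and taking the maximum, is the same.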
Illinois Digital Scholarship: Preserving and Accessing the Digital Past, Present, and Future
Since the University's establishment in 1867, its scholarly output has been issued primarily in print, and the University Library and Archives have been readily able to collect, preserve, and provide access to that output. Today, technological, economic, political and social forces are buffeting all means of scholarly communication. Scholars, academic institutions and publishers are engaged in debate about the impact of digital scholarship and open access publishing on the promotion and tenure process. The upsurge in digital scholarship affects many aspects of the academic enterprise, including how we record, evaluate, preserve, organize and disseminate scholarly work. The result has left the Library with no ready means by which to archive digitally produced publications, reports, presentations, and learning objects, much of which cannot be adequately represented in print form. In this incredibly fluid environment of digital scholarship, the critical question of how we will collect, preserve, and manage access to this important part of the University scholarly record demands a rational and forward-looking plan - one that includes perspectives from diverse scholarly disciplines, incorporates significant research breakthroughs in information science and computer science, and makes effective projections for future integration within the Library and computing services as a part of the campus infrastructure.

Prepared jointly by the University of Illinois Library and CITES at the University of Illinois at Urbana-Champaign.
Contexts and Contributions: Building the Distributed Library
This report updates and expands on A Survey of Digital Library Aggregation Services, originally commissioned by the DLF as an internal report in summer 2003 and released to the public later that year. It first highlights major developments affecting the ecosystem of scholarly communications and digital libraries since the last survey and provides an analysis of OAI implementation demographics, based on a comparative review of repository registries and cross-archive search services. Secondly, it reviews the state of practice for a cohort of digital library aggregation services, grouping them in the context of the problem space to which they most closely adhere. Based in part on responses collected in fall 2005 from an online survey distributed to the original core services, the report investigates the purpose, function and challenges of next-generation aggregation services. On a case-by-case basis, the advances in each service are of interest in isolation, but the report also attempts to situate these services in a larger context and to understand how they fit into a multi-dimensional and interdependent ecosystem supporting the worldwide community of scholars. Finally, the report summarizes the contributions of these services thus far and identifies obstacles requiring further attention to realize the goal of an open, distributed digital library system.
ICOPER Project - Deliverable 4.3 ISURE: Recommendations for extending effective reuse, embodied in the ICOPER CD&R
The purpose of this document is to capture the ideas and recommendations, within and beyond the ICOPER community, concerning the reuse of learning content, including appropriate methodologies as well as established strategies for remixing and repurposing reusable resources. The overall remit of this work focuses on describing the key issues related to extending effective reuse embodied in such materials. The objective of this investigation is to support the reuse of learning content whilst considering how it could be originally created and then adapted with that ‘reuse’ in mind. In these circumstances, a survey of effective reuse best practices can often provide an insight into the main challenges and benefits involved in the process of creating, remixing and repurposing what we are now designating as Reusable Learning Content (RLC).
Several key issues are analysed in this report: Recommendations for extending effective reuse, building upon those described in the previous related deliverables 4.1 Content Development Methodologies and 4.2 Quality Control and Web 2.0 technologies. The findings of this current survey, however, provide further recommendations and strategies for using and developing this reusable learning content. In the spirit of ‘reuse’, this work also aims to serve as a foundation for the many different stakeholders and users within, and beyond, the ICOPER community who are interested in reusing learning resources.
This report analyses a variety of information. Evidence has been gathered from a qualitative survey that has focused on the technical and pedagogical recommendations suggested by a Special Interest Group (SIG) on the most innovative practices with respect to new media content authors (for content authoring or modification) and course designers (for unit creation). This extended community includes a wider collection of OER specialists. This collected evidence, in the form of video and audio interviews, has also been represented as multimedia assets potentially helpful for learning and useful as learning content in the New Media Space (See section 4 for further details).
Section 2 of this report introduces the concept of reusable learning content and reusability. Section 3 discusses an application created by the ICOPER community to enhance the opportunities for developing reusable content. Section 4 of this report provides an overview of the methodology used for the qualitative survey. Section 5 presents a summary of thematic findings. Section 6 highlights a list of recommendations for effective reuse of educational content, which were derived from the thematic analysis described in Appendix A. Finally, section 7 summarises the key outcomes of this work.
The OBAA Standard for Developing Repositories of Learning Objects: the Case of Ocean Literacy in Azores
This paper describes the existing web resources of learning objects that promote ocean
literacy. Several projects and sites are explored, and their shortcomings revealed. The
limitations identified include insufficient metadata about registered learning objects and
a lack of support for intelligent applications. As a solution, we promote the seaThings
project, which relies on a multi-disciplinary approach to promote literacy in the marine
environment by implementing a specific Learning Object repository (LOR) and a federation
of repositories (FED), supported by OBAA, a versatile and innovative standard that
provides the necessary support for intelligent applications for educational purposes, to
be used in schools and other educational institutions.
Metadata quality issues in learning repositories
Metadata lies at the heart of every digital repository project, in the sense that it defines and drives the description of the digital content stored in the repositories. Metadata allows content to be successfully stored, managed, retrieved and preserved in the long term. Despite the widely recognized importance of metadata in digital repositories, studies indicate that metadata quality is relatively low in most digital repositories. Metadata quality is loosely defined as "fitness for purpose": low-quality metadata cannot fulfill its purpose, which is to allow for the successful storage, management and retrieval of resources. In practice, low metadata quality leads to ineffective searches for content, ones that recall the wrong resources or, even worse, no resources at all, making content invisible to the intended user, the "client" of each digital repository. The present dissertation approaches this problem by proposing a comprehensive metadata quality assurance method, namely the Metadata Quality Assurance Certification Process (MQACP). The basic idea of this dissertation is to propose a set of methods that can be deployed throughout the lifecycle of a repository to ensure that metadata generated by content providers are of high quality. These methods have to be straightforward and simple to apply, with measurable results. They also have to be adaptable with minimum effort so that they can easily be used in different contexts. This set of methods is described analytically, taking into account the actors needed to apply them, describing the tools needed and defining the anticipated outcomes. In order to test our proposal, we applied it on a Learning Federation of repositories, from day 1 of its existence until it reached maturity and regular operation.
We supported the metadata creation process throughout the different phases of the repositories involved by setting up specific experiments using the methods and tools of the MQACP. Throughout each phase, we measured the resulting metadata quality to certify that the anticipated improvement actually took place. Lastly, across these phases, the cost of applying the MQACP was measured to provide a comparison basis for future applications. Based on the success of this first application, we validated the MQACP approach by applying it on two further cases, a Cultural and a Research Federation of repositories. This allowed us to prove the transferability of the approach to cases that present some similarities with the initial one but also significant differences. The results showed that the MQACP was successfully adapted to the new contexts with minimal adaptation, producing similar results at comparable costs. In addition, looking closer at the common experiments carried out in each phase of each use case, we were able to identify interesting patterns in the behavior of content providers that can be further researched. The dissertation concludes with a set of future research directions that emerged from the cases examined. These directions can be explored to support the next version of the MQACP in terms of the methods deployed, the tools used to assess metadata quality, and the cost analysis of the MQACP methods.
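The dissertation's notion of measurable metadata quality can be made concrete with a simple proxy metric. The sketch below (illustrative only, not part of the MQACP itself, and the recommended field list is an assumption) computes completeness: the fraction of recommended fields that a record actually fills with a non-empty value.

```python
# Completeness is one widely used proxy for metadata quality: the share of
# recommended fields that a record fills with a non-empty value.
RECOMMENDED_FIELDS = ["title", "description", "keywords", "language", "rights"]

def completeness(record: dict, fields=RECOMMENDED_FIELDS) -> float:
    """Fraction of recommended fields present and non-empty in a record."""
    if not fields:
        return 0.0
    filled = sum(1 for f in fields if record.get(f))
    return filled / len(fields)

record = {
    "title": "Ocean currents 101",
    "description": "Introductory lesson",
    "keywords": ["ocean"],
    "language": "",      # present but empty: does not count as filled
    "rights": None,      # missing value: does not count as filled
}
print(completeness(record))  # 3 of 5 fields filled -> 0.6
```

Averaging such a score over all records in a collection, before and after a quality-assurance intervention, is one way to "certify that the anticipated improvement actually took place", as the abstract puts it; real assessments would combine completeness with accuracy, consistency, and other dimensions.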