
    ICE-TheOREM - End to End Semantically Aware eResearch Infrastructure for Theses

    4th International Conference on Open Repositories. This presentation was part of the session: Conference Presentations. Date: 2009-05-19, 10:00 AM – 11:30 AM.
    ICE-TheOREM was a project which made several important contributions to the repository domain: it promoted deposit by integrating the repository with authoring workflows, and it enhanced open access by adding new infrastructure that allows fine-grained embargo management within an institution without impacting existing open access repository infrastructure. In the area of scholarly communications workflows, the project produced a complete end-to-end demonstration of eScholarship for word processor users, with tools for authoring, managing and disseminating semantically rich thesis documents fully integrated with supporting data. The work focused on theses, as it is well understood that early career researchers are the most likely to lead the charge in new innovations in scholarly publishing and dissemination models. The authoring tools are built on the ICE content management system, which allows authors to work within a word processing system (as most authors do) with easy-to-use toolbars to structure and format their documents. The ICE system manages both small data files and links to larger data sets. The result is research publications which are available not just as paper-ready PDF files but as fully interactive, semantically aware web documents which can be disseminated via repository software such as ePrints, DSpace and Fedora as complete, supported web-native objects. On the technological side, ICE-TheOREM implemented the Object Reuse and Exchange (ORE) protocol to integrate a content management system, a thesis management system and multiple repository software packages, and looked at ways to describe aggregate objects which include both data and documents, an approach which can be generalized to domains other than chemistry.
ICE-TheOREM has demonstrated how focusing on the use of the web architecture (including ORE) enables repository functions to be distributed between systems for complex, data-rich compound objects.
UK Joint Information Systems Committee (JISC)
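The ORE integration described above centres on resource maps that describe aggregations of documents and data. As a rough illustration of that model (not code from the project; all URIs here are invented), an aggregation for a thesis compound object can be expressed as subject–predicate–object triples:

```python
# Minimal sketch of an OAI-ORE-style aggregation for a thesis compound
# object. The URIs are hypothetical; only the two ORE term URIs are real.
ORE_DESCRIBES = "http://www.openarchives.org/ore/terms/describes"
ORE_AGGREGATES = "http://www.openarchives.org/ore/terms/aggregates"

resource_map = "http://example.org/rem/thesis-123"   # the resource map itself
aggregation = "http://example.org/agg/thesis-123"    # the abstract aggregation
parts = [
    "http://example.org/thesis-123/thesis.html",     # web-native document
    "http://example.org/thesis-123/thesis.pdf",      # paper-ready rendition
    "http://example.org/thesis-123/data/spectra.csv" # supporting data file
]

# The resource map describes the aggregation; the aggregation aggregates parts.
triples = [(resource_map, ORE_DESCRIBES, aggregation)]
triples += [(aggregation, ORE_AGGREGATES, p) for p in parts]

for s, p, o in triples:
    print(f"<{s}> <{p}> <{o}> .")  # N-Triples-style output
```

A real resource map would additionally carry provenance (creator, modification date) and be serialized as RDF/XML or Atom per the ORE specification.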

    Fedora Goes to School: Experiences Creating a Curriculum Customization Service for K-12 Teachers

    4th International Conference on Open Repositories. This presentation was part of the session: Fedora User Group Presentations. Date: 2009-05-20, 01:30 PM – 03:00 PM.
    Educational digital libraries provide a rich array of learning resources uniquely suited to supporting teachers in customizing instruction. The problem we address is how to customize instruction to meet the learning needs of increasingly diverse student populations while ensuring that district learning goals and national and state standards are met. This tension between supporting customization and upholding standards is further complicated by the challenges of scale: large urban school districts need technology infrastructure to support teachers district-wide in tailoring curriculum, while still ensuring fidelity to learning goals. In partnership with Denver Public Schools (DPS), we are using open source digital library infrastructure available through the NSF-funded National Science Digital Library program to create a scalable Curriculum Customization Service. We are building on top of the Fedora-based NCore EduPak, which consists of the NSDL Collection System, the Digital Discovery System, and the NSDL Data Repository. DPS teachers will use this Service to (1) customize curriculum with digital library resources, formative assessments, and district-developed materials to aid student learning, (2) share their customizations as part of an online learning community and professional development program, and (3) discover, remix, and reuse other teachers' contributions. In this presentation, we will describe the Curriculum Customization Service and lessons learned from building an e-learning application supporting instructional planning and collaboration on top of Fedora. The Service uses learning goals as the central organizing concept of the interface.
Organized around these are several curricular components, including digital versions of the student textbook, digitized components of the associated teachers' guide (formative assessments, teaching tips, instructional resources, and background knowledge readings), and digital library resources. Digital library resources are further broken down into Top Picks (recommended), Images/Visuals, Animations, Additional Activities, and Working with Data. We will also present results from a 10-week pilot study with DPS middle and high school teachers (completed in Fall 2008) and plans for a large-scale, district-wide field study commencing in Fall 2009. In the pilot study, we used interviews, reflective essays, usage logs, and pop-up and email surveys to develop a detailed picture of how teachers were using the Service, and to examine how their usage changed over the course of the study. Results suggest the Service offers a powerful model for: (1) embedding digital library resources into mainstream teaching and learning practices and (2) enabling teachers to customize instruction to improve learner engagement and learning outcomes.
NSF
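As a hedged sketch of the organizing idea described above — learning goals at the centre, with curricular components and categorized digital library resources attached — the data model might look like the following (class and field names are illustrative, not drawn from the actual Service):

```python
# Illustrative data model for a learning-goal-centred curriculum service.
# All names are hypothetical, not taken from the Curriculum Customization
# Service's real implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    title: str
    category: str  # e.g. "Top Picks", "Images/Visuals", "Animations"

@dataclass
class LearningGoal:
    description: str
    textbook_sections: List[str] = field(default_factory=list)
    teacher_guide_items: List[str] = field(default_factory=list)  # tips, assessments
    library_resources: List[Resource] = field(default_factory=list)

goal = LearningGoal("Describe the stages of the water cycle")
goal.teacher_guide_items.append("Formative assessment: evaporation quiz")
goal.library_resources.append(Resource("Evaporation animation", "Animations"))

# The interface groups resources under a goal by category.
animations = [r for r in goal.library_resources if r.category == "Animations"]
```

The point of the sketch is the inversion of the usual resource-centric catalogue: every component hangs off a learning goal, which is what lets the interface present customization and standards-fidelity side by side.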

    Support for collaborative component-based software engineering

    Collaborative system composition during design has been poorly supported by traditional CASE tools (which have usually concentrated on supporting individual projects) and has focused almost exclusively on static composition. Little support has been developed for maintaining large distributed collections of heterogeneous software components across a number of projects. The CoDEEDS project addresses the collaborative determination, elaboration, and evolution of design spaces that describe both static and dynamic compositions of software components from sources such as component libraries, software service directories, and reuse repositories. The GENESIS project has focused, in the development of OSCAR, on the creation and maintenance of large software artefact repositories. The most recent extensions explicitly address the provision of cross-project global views of large software collections and historical views of individual artefacts within a collection. The long-term benefits of such support can only be realised if OSCAR and CoDEEDS are widely adopted, and steps to facilitate this are described.

    This book continues to provide a forum, begun by the recent book Software Evolution with UML and XML, where expert insights are presented on the subject. In that book, initial efforts were made to link together three current phenomena: software evolution, UML, and XML. This book focuses on the practical side of linking them, that is, how UML, XML and their related methods and tools can assist software evolution in practice. Considering that nowadays software starts evolving before it is delivered, a salient feature of software evolution is that it happens across all stages and all aspects of development; therefore, all possible techniques should be explored. This book explores techniques based on UML and XML, and combinations of them with other techniques, from theory to tools. Software evolution happens at all stages: chapters in this book describe software evolution issues present at the stages of software architecting, modelling/specifying, assessing, coding, validating, design recovery, program understanding, and reusing. Software evolution happens in all aspects: chapters illustrate that software evolution issues arise in web applications, embedded systems, software repositories, component-based development, object models, development environments, software metrics, UML use case diagrams, system models, legacy systems, safety-critical systems, user interfaces, software reuse, evolution management, and variability modelling. Software evolution needs to be facilitated with all possible techniques: chapters demonstrate techniques such as formal methods, program transformation, empirical study, tool development, standardisation, and visualisation to control system changes so that they meet organisational and business objectives in a cost-effective way. On the journey towards the grand challenge posed by software evolution, the contributory authors of this book have already made further advances.

    Big data: the potential role of research data management and research data registries

    Universities generate and hold increasingly vast quantities of research data – some in the form of large, well-structured datasets, but more often in the form of a long tail of small, distributed datasets which collectively amount to ‘Big Data’ and offer significant potential for reuse. However, unlike big data proper, these collections of small data are often less well curated and are usually very difficult to find, reducing their potential reuse value. The Digital Curation Centre (DCC) works to support UK universities in better managing and exposing their research data so that its full value may be realised. With a focus on tapping into this long tail of small data, this presentation will cover two main DCC services: DMPonline, which helps researchers to identify potentially valuable research data and to plan for its longer-term retention and reuse; and the UK pilot research data registry and discovery service (RDRDS), which will help to ensure that research data produced in UK HEIs can be found, understood, and reused. Initially we will introduce participants to the role of data management planning in opening up dialogue between researchers and library services, so that potentially valuable research data are managed appropriately and made available for reuse where feasible. Data management plans (DMPs) provide institutions with valuable insights into the scale of their data holdings, highlight any ethical and legal requirements that need to be met, and enable planning for dissemination and reuse. We will also introduce the DCC’s DMPonline, a tool to help researchers write DMPs, which can be customised by institutions and integrated with other systems to simplify and enhance the management and reuse of data. In the second part of the presentation we will focus on making selected research data more visible for reuse and explore the potential value of local and national research data registries.
In particular we will highlight the Jisc-funded RDRDS pilot to establish a UK national service that aggregates metadata relating to data collections held in research institutions and subject data centres. The session will conclude by considering some of the opportunities for collaboration in facilitating the management, aggregation and reuse of research data.
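A registry of the kind the RDRDS pilot describes works by aggregating descriptive metadata records about data collections. As a minimal illustration of the indexing step (the record below is a hand-written Dublin Core example, not output from any real endpoint or the actual RDRDS), extracting the fields a registry might index could look like this:

```python
# Sketch: pull title/creator/identifier out of a Dublin Core metadata
# record, the kind of harvesting step a data registry performs. The
# record is invented for illustration.
import xml.etree.ElementTree as ET

RECORD = """<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                       xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Example survey dataset</dc:title>
  <dc:creator>A. Researcher</dc:creator>
  <dc:identifier>http://example.org/data/42</dc:identifier>
</oai_dc:dc>"""

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core element namespace
root = ET.fromstring(RECORD)
entry = {
    "title": root.findtext(f"{DC}title"),
    "creator": root.findtext(f"{DC}creator"),
    "identifier": root.findtext(f"{DC}identifier"),
}
```

In practice such records would be harvested in bulk (for example over OAI-PMH) from institutional repositories and subject data centres, then normalised before being exposed through a discovery interface.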

    Project Hydra: Designing & Building a Reusable Framework for Multipurpose, Multifunction, Multi-institutional Repository-Powered Solutions

    4th International Conference on Open Repositories. This presentation was part of the session: Fedora User Group Presentations. Date: 2009-05-20, 03:30 PM – 05:00 PM.
    There is a clear business need in higher education for a flexible, reusable application framework that can support the rapid development of multiple systems tailored to distinct needs, but powered by a common underlying repository. Recognizing this common need, Stanford University, the University of Hull and the University of Virginia are collaborating on "Project Hydra", a three-year effort to create an application and middleware framework that, in combination with an underlying Fedora repository, will create a reusable environment for running multifunction, multipurpose repository-powered solutions. This paper details the collaborators' functional and technical design for such a framework, and will demonstrate the progress made to date on the initiative.
    JISC

    Web-based support for managing large collections of software artefacts

    There has been a long history of CASE tool development, with an underlying software repository at the heart of most systems. Usually such tools, even the more recently web-based systems, are focused on supporting individual projects within an enterprise or across a number of distributed sites. Little support has been developed for maintaining large heterogeneous collections of software artefacts across a number of projects. Within the GENESIS project, this has been a key consideration in the development of the Open Source Component Artefact Repository (OSCAR). Its most recent extensions explicitly address the provision of cross-project global views of large software collections as well as historical views of individual artefacts within a collection. The long-term benefits of such support can only be realised if OSCAR is widely adopted, and various steps to facilitate this are described.