
    Towards a service-oriented e-infrastructure for multidisciplinary environmental research

    Research e-infrastructures are considered to have generic and thematic parts. The generic part provides high-speed networks, grid (large-scale distributed computing) and database systems (digital repositories and data transfer systems) applicable to all research communities irrespective of discipline. Thematic parts are specific deployments of e-infrastructures to support diverse virtual research communities. The needs of a virtual community of multidisciplinary environmental researchers are yet to be investigated. We envisage and argue for an e-infrastructure that will enable environmental researchers to develop environmental models and software entirely out of existing components, through loose coupling of diverse digital resources based on a service-oriented architecture. We discuss four specific aspects to consider for a future e-infrastructure: 1) provision of digital resources (data, models & tools) as web services, 2) dealing with the stateless and non-transactional nature of web services using workflow management systems, 3) enabling web service discovery, composition and orchestration through semantic registries, and 4) creating synergy with existing grid infrastructures.
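
    A minimal sketch of the first of these aspects: a toy environmental model wrapped as a stateless JSON-over-HTTP web service, using only the Python standard library. The model, route and parameter names are illustrative assumptions, not anything specified in the paper; a workflow engine would then chain calls to several such services.

        # Minimal sketch (not from the paper): a toy environmental model exposed as a
        # stateless JSON-over-HTTP web service. Names here are illustrative assumptions.
        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        def runoff_model(rainfall_mm: float, runoff_coefficient: float) -> float:
            """Toy stand-in for a real environmental model component."""
            return rainfall_mm * runoff_coefficient

        class ModelService(BaseHTTPRequestHandler):
            def do_POST(self):
                # Each request is self-contained; no session state is kept, which is why
                # the paper pairs such stateless services with workflow management systems.
                length = int(self.headers.get("Content-Length", 0))
                params = json.loads(self.rfile.read(length) or b"{}")
                result = runoff_model(params.get("rainfall_mm", 0.0),
                                      params.get("runoff_coefficient", 0.5))
                body = json.dumps({"runoff_mm": result}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), ModelService).serve_forever()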

    Publishing Primary Data on the World Wide Web: Opencontext.org and an Open Future for the Past

    More scholars are exploring forms of digital dissemination, including open access (OA) systems where content is made available free of charge. These include peer-reviewed e-journals as well as traditional journals that have an online presence. Besides SHA's Technical Briefs in Historical Archaeology, the American Journal of Archaeology now offers open access to downloadable articles from their printed issues. Similarly, Evolutionary Anthropology offers many full-text articles free for download. More archaeologists are also taking advantage of easy Web publication to post copies of their publications on personal websites. Roughly 15% of all scholars participate in such "self-archiving." To encourage this practice, Science Commons (2006) and the Scholarly Publishing and Academic Resources Coalition (SPARC) recently launched the Scholar Copyright Project, an initiative that will develop standard "Author Addenda", a suite of short amendments to attach to copyright agreements from publishers (http://sciencecommons.org/projects/publishing/index.html). These addenda make it easier for paper authors to retain and clarify their rights to self-archive their papers electronically. Several studies now clearly document that self-archiving and OA publication enhance uptake and citation rates (Hajjem et al. 2005). Researchers enhance their reputations and stature by opening up their scholarship. Mounting pressure for greater public access also comes from many research stakeholders. Granting foundations interested in maximizing the return on their investment in basic research are often encouraging and sometimes even requiring some form of OA electronic dissemination. Interest in maximizing public access to publicly financed research is catching on in Congress. A new bipartisan bill, the Federal Research Public Access Act, would require OA for drafts of papers that pass peer review and result from federally funded research (U.S. Congress 2006). The bill would create government-funded digital repositories that would host and maintain these draft papers. University libraries are some of the most vocal advocates for OA research. Current publishing frameworks have seen dramatically escalated costs, sometimes four times higher than the general rate of inflation (Create Change 2003). Increasing costs have forced many libraries to cancel subscriptions and thereby hurt access and scholarship (Association for College and Research Libraries 2003; Suber 2004). This article was originally published in Technical Briefs in Historical Archaeology, 2007, 2: -11

    Digital curation and the cloud

    Digital curation involves a wide range of activities, many of which could benefit from cloud deployment to a greater or lesser extent. These range from infrequent, resource-intensive tasks which benefit from the ability to rapidly provision resources, to day-to-day collaborative activities which can be facilitated by networked cloud services. Associated benefits are offset by risks such as loss of data or service level, legal and governance incompatibilities, and transfer bottlenecks. There is considerable variability across both risks and benefits according to the service and deployment models being adopted and the context in which activities are performed. Some risks, such as legal liabilities, are mitigated by the use of alternative models, e.g. private clouds, but this is typically at the expense of benefits such as resource elasticity and economies of scale. The Infrastructure as a Service model may provide a basis on which more specialised software services may be provided. There is considerable work to be done in helping institutions understand the cloud and its associated costs, risks and benefits, and how these compare to their current working methods, so that the most beneficial uses of cloud technologies may be identified. Specific proposals, echoing recent work coordinated by EPSRC and JISC, are the development of advisory, costing and brokering services to facilitate appropriate cloud deployments, the exploration of opportunities for certifying or accrediting cloud preservation providers, and the targeted publicity of outputs from pilot studies to the full range of stakeholders within the curation lifecycle, including data creators and owners, repositories, institutional IT support professionals and senior managers.

    LORE: A Compound Object Authoring and Publishing Tool for Literary Scholars based on the FRBR

    4th International Conference on Open Repositories. This presentation was part of the session: Conference Presentations. Date: 2009-06-04, 10:30 AM – 12:00 PM. This paper presents LORE (Literature Object Re-use and Exchange), a light-weight tool designed to enable scholars and teachers of literature to author, edit and publish OAI-ORE-compliant compound information objects that encapsulate related digital resources and bibliographic records. LORE provides a graphical user interface for creating, labelling and visualizing typed relationships between individual objects using terms from a bibliographic ontology based on the IFLA FRBR. After creating a compound object, users can attach metadata and publish it to a Fedora repository (as an RDF graph) where it can be searched, retrieved, edited and re-used by others. LORE has been developed in the context of the Australian Literature Resource project (AustLit) and hence focuses on compound objects for teaching and research within the Australian literature studies community. NCRIS National eResearch Architecture Taskforce (NeAT).
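
    A minimal sketch of the kind of OAI-ORE compound object such a tool publishes, built as an RDF graph with rdflib. The example URIs and the FRBR vocabulary choice are assumptions for illustration, not taken from the LORE implementation.

        # Minimal sketch (not LORE's own code): an OAI-ORE aggregation as an RDF graph.
        from rdflib import Graph, Literal, Namespace, RDF, URIRef
        from rdflib.namespace import DCTERMS

        ORE = Namespace("http://www.openarchives.org/ore/terms/")
        FRBR = Namespace("http://purl.org/vocab/frbr/core#")  # assumed FRBR vocabulary

        g = Graph()
        g.bind("ore", ORE)
        g.bind("frbr", FRBR)
        g.bind("dcterms", DCTERMS)

        rem = URIRef("http://example.org/rem/novel-study")          # ORE resource map
        agg = URIRef("http://example.org/aggregation/novel-study")  # the compound object
        work = URIRef("http://example.org/work/the-novel")
        review = URIRef("http://example.org/article/a-review")

        g.add((rem, RDF.type, ORE.ResourceMap))
        g.add((rem, ORE.describes, agg))
        g.add((agg, RDF.type, ORE.Aggregation))
        g.add((agg, DCTERMS.title, Literal("Teaching object: a novel and its criticism")))
        g.add((agg, ORE.aggregates, work))
        g.add((agg, ORE.aggregates, review))
        g.add((review, FRBR.subject, work))  # typed relationship between aggregated resources

        print(g.serialize(format="turtle"))  # the serialisation a repository could ingest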

    Interoperability and FAIRness through a novel combination of Web technologies

    Data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories ranging from those designed for particular data types (such as KEGG for pathway data or UniProt for protein data) to those that are general-purpose (such as FigShare, Zenodo, Dataverse or EUDAT). These data have widely different levels of sensitivity and security considerations. For example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not. The lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task with no scalability. Here we explore a set of resource-oriented Web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing their data holdings. We show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. We note that the behaviours of this architecture compare favourably to the desiderata defined by the FAIR Data Principles, and can therefore represent an exemplar implementation of those principles. The proposed interoperability design patterns may be used to improve discovery and integration of both new and legacy data, maximizing the utility of all scholarly outputs.
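
    One resource-oriented pattern of this general kind is to request machine-readable metadata for a repository record via HTTP content negotiation and load it as RDF for integration. The sketch below shows that pattern with a placeholder record URI standing in for a real repository identifier; it is an illustration of the idea, not the paper's reference implementation.

        # Minimal sketch of the pattern (placeholder URI, not a real repository record):
        # content negotiation for machine-readable metadata, loaded into an RDF graph.
        import urllib.request
        from rdflib import Graph

        RESOURCE = "https://example.org/dataset/42"  # hypothetical record identifier

        req = urllib.request.Request(RESOURCE, headers={"Accept": "text/turtle"})
        with urllib.request.urlopen(req) as resp:
            data = resp.read().decode("utf-8")

        g = Graph()
        g.parse(data=data, format="turtle")

        # Metadata harvested this way from different repositories can be merged into one
        # graph and queried together, without a shared schema agreed in advance.
        for subject, predicate, obj in g:
            print(subject, predicate, obj)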

    The NorduGrid architecture and tools

    The NorduGrid project designed a Grid architecture with the primary goal to meet the requirements of production tasks of the LHC experiments. While it is meant to be a rather generic Grid system, it puts emphasis on batch processing suitable for problems encountered in High Energy Physics. The NorduGrid architecture implementation uses the Globus Toolkit as the foundation for various components developed by the project. While introducing new services, the NorduGrid does not modify the Globus tools, so that the two can co-exist. The NorduGrid topology is decentralized, avoiding a single point of failure. The NorduGrid architecture is thus a light-weight, non-invasive and dynamic one, while robust and scalable, capable of meeting the most challenging tasks of High Energy Physics. Comment: Talk from the 2003 Computing in High Energy Physics and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 9 pages, LaTeX, 4 figures. PSN MOAT00

    A Virtual Observatory Vision based on Publishing and Virtual Data

    We would like to propose a vision of the Virtual Observatory where the "killer app" is seen to be generalizing and extending the idea of "publication" beyond the narrow meaning of peer-reviewed journals. Here, publication ranges from private temporary storage, to group access, to public access, through to data that supports peer-reviewed journal papers in perpetuity. The publication model is further extended by the possibility of Virtual Data, where only the method of computation is stored, not necessarily the data itself. Furthermore, virtual data products may depend on other virtual data products, creating an implicit network of on-demand computation. This computation may take huge resources, or it may run entirely within a laptop.
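
    The virtual-data idea can be illustrated with a small sketch in which a product stores only its computation method and its inputs, and is materialised on demand, so products can depend on other products that are themselves computed only when needed. The class and function names below are illustrative, not taken from the paper.

        # Minimal sketch of on-demand "virtual data" products with dependencies.
        from typing import Callable

        class VirtualProduct:
            def __init__(self, name: str, recipe: Callable[..., object],
                         *inputs: "VirtualProduct"):
                self.name = name
                self.recipe = recipe    # how to compute the data, stored instead of the data
                self.inputs = inputs    # other (possibly virtual) products it depends on
                self._cache = None

            def materialise(self):
                """Compute this product, recursively materialising its inputs first."""
                if self._cache is None:
                    args = [p.materialise() for p in self.inputs]
                    self._cache = self.recipe(*args)
                return self._cache

        # A tiny on-demand network: raw image -> calibrated image -> source catalogue.
        raw = VirtualProduct("raw_image", lambda: [1.0, 2.0, 3.0])
        calibrated = VirtualProduct("calibrated", lambda img: [x * 0.9 for x in img], raw)
        catalogue = VirtualProduct("catalogue", lambda img: {"n_sources": len(img)}, calibrated)

        print(catalogue.materialise())  # computes raw and calibrated along the way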

    cRIsp: Crowdsourcing Representation Information to Support Preservation

    In this paper, we describe a new collaborative approach to the collection of representation information to ensure long-term access to digital content. Representation information is essential for successful rendering of digital content in the future. Manual collection and maintenance of representation information has so far proven to be highly resource intensive and is compounded by the massive scale of the challenge, especially for repositories with no format limitations. The proposed solution addresses these challenges by drawing upon the wisdom and knowledge of the crowd to identify online sources of representation information, which are then collected, classified, and managed using existing tools. We suggest that nominations can be harvested and preserved by participating established web archives, which could themselves benefit from such extensive collections. This is a low-cost, low-resource approach to collecting essential representation information of widespread relevance.
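
    A rough sketch of the kind of record such a crowdsourcing workflow might pass to a harvesting web archive: a nomination structure and a simple vote threshold for harvesting. The fields and the threshold rule are assumptions for illustration, not the cRIsp data model.

        # Minimal sketch (fields and threshold are assumptions, not the cRIsp model).
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Nomination:
            url: str                  # online source of representation information
            file_format: str          # the format the source documents
            nominated_by: str
            votes: int = 0            # crowd agreement that the source is relevant
            tags: List[str] = field(default_factory=list)

        def harvest_queue(nominations: List[Nomination], min_votes: int = 3) -> List[str]:
            """Return URLs with enough crowd support to send to a participating web archive."""
            return [n.url for n in nominations if n.votes >= min_votes]

        nominations = [
            Nomination("https://example.org/tiff-6-spec", "TIFF", "curator1", votes=5),
            Nomination("https://example.org/unverified-notes", "XYZ", "curator2", votes=1),
        ]
        print(harvest_queue(nominations))  # only well-supported nominations are harvested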

    Software Citation Implementation Challenges

    The main output of the FORCE11 Software Citation working group (https://www.force11.org/group/software-citation-working-group) was a paper on software citation principles (https://doi.org/10.7717/peerj-cs.86) published in September 2016. This paper laid out a set of six high-level principles for software citation (importance, credit and attribution, unique identification, persistence, accessibility, and specificity) and discussed how they could be used to implement software citation in the scholarly community. In a series of talks and other activities, we have promoted software citation using these increasingly accepted principles. At the time the initial paper was published, we also provided guidance and examples on how to make software citable, though we now realize there are unresolved problems with that guidance. The purpose of this document is to provide an explanation of current issues impacting scholarly attribution of research software, organize updated implementation guidance, and identify where best practices and solutions are still needed.
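
    One hypothetical way a package could surface metadata covering these principles is to keep a citation record alongside the code and render it as a human-readable citation string. The field names, helper function and identifiers below are illustrative placeholders, not guidance from the working group.

        # Hypothetical sketch: citation metadata kept with a package, touching credit,
        # unique identification, persistence, accessibility and specificity.
        CITATION = {
            "title": "examplepkg",                            # hypothetical package name
            "authors": ["A. Researcher", "B. Developer"],     # credit and attribution
            "doi": "10.5281/zenodo.0000000",                  # persistent identifier (placeholder)
            "version": "1.4.2",                               # specificity: the exact version used
            "repository": "https://example.org/examplepkg",   # accessibility
        }

        def format_citation(meta: dict) -> str:
            """Render a human-readable citation string from the metadata record."""
            return "{authors} ({version}). {title}. https://doi.org/{doi}".format(
                authors=", ".join(meta["authors"]),
                version=meta["version"],
                title=meta["title"],
                doi=meta["doi"],
            )

        if __name__ == "__main__":
            print(format_citation(CITATION))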