6,294 research outputs found

    Interoperability and FAIRness through a novel combination of Web technologies

    Data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories, ranging from those designed for particular data types (such as KEGG for pathway data or UniProt for protein data) to those that are general-purpose (such as FigShare, Zenodo, Dataverse or EUDAT). These data have widely different levels of sensitivity and security considerations. For example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not. The lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task that does not scale. Here we explore a set of resource-oriented Web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing its data holdings. We show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. We note that the behaviours of this architecture compare favourably to the desiderata defined by the FAIR Data Principles, and can therefore represent an exemplar implementation of those principles. The proposed interoperability design patterns may be used to improve discovery and integration of both new and legacy data, maximizing the utility of all scholarly outputs.
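The resource-oriented patterns this abstract describes lean on standard Web machinery such as HTTP content negotiation, where a single resource URL can serve multiple representations. A minimal sketch of building such a request header (the helper function and the media-type preferences are illustrative assumptions, not taken from the paper):

```python
def negotiation_headers(media_types):
    """Build an HTTP Accept header preferring the listed media types in order."""
    # Assign descending quality values so a server picks the first
    # representation it supports; the leading type gets implicit q=1.0.
    parts = []
    for i, mt in enumerate(media_types):
        q = round(1.0 - 0.1 * i, 1)
        parts.append(mt if q == 1.0 else f"{mt};q={q}")
    return {"Accept": ", ".join(parts)}
```

A client asking a repository for Linked Data first, with JSON-LD as a fallback, would call `negotiation_headers(["text/turtle", "application/ld+json"])` and pass the result to any HTTP library.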

    Pathways: Augmenting interoperability across scholarly repositories

    In the emerging eScience environment, repositories of papers, datasets, software, etc., should be the foundation of a global and natively digital scholarly communications system. The current infrastructure falls far short of this goal. Cross-repository interoperability must be augmented to support the many workflows and value chains involved in scholarly communication. This will not be achieved through the promotion of a single repository architecture or content representation, but instead requires an interoperability framework to connect the many heterogeneous systems that will exist. We present a simple data model and service architecture that augments repository interoperability to enable scholarly value chains to be implemented. We describe an experiment that demonstrates how the proposed infrastructure can be deployed to implement the workflow involved in the creation of an overlay journal over several different repository systems (Fedora, aDORe, DSpace and arXiv). Comment: 18 pages. Accepted for the International Journal on Digital Libraries special issue on Digital Libraries and eScience.
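The paper's own data model is not reproduced here; as a rough illustration of the overlay-journal idea it describes, an aggregation record might simply reference items that live in several heterogeneous repositories. All class names and identifiers below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class RepositoryItem:
    repository: str   # home system of the item, e.g. "arXiv" or "DSpace"
    identifier: str   # the item's identifier within that repository

@dataclass
class OverlayIssue:
    """An overlay-journal issue: no content of its own, only references."""
    journal: str
    items: list = field(default_factory=list)

    def add(self, repository, identifier):
        self.items.append(RepositoryItem(repository, identifier))
```

The point of the sketch is the separation of concerns: the overlay journal holds value-chain metadata, while the referenced repositories keep custody of the actual content.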

    Issues in digital preservation: towards a new research agenda

    Digital preservation has evolved into a specialized, interdisciplinary research discipline of its own, seeing significant growth in research capacity and results, but also in challenges. However, with this specialization and the subsequent formation of a dedicated subgroup of researchers active in this field, limitations in the range of challenges addressed can be observed. Digital preservation research may seem to react to problems as they arise, fixing problems that exist now, rather than proactively researching new solutions that may become applicable only after a few years of maturing. Recognising the benefits of bringing together researchers and practitioners with various professional backgrounds related to digital preservation, a seminar was organized at Schloss Dagstuhl, the Leibniz Center for Informatics (18-23 July 2010), with the aim of addressing current digital preservation challenges, with a specific focus on automation in this field. The main goal of the seminar was to outline research challenges in digital preservation, providing a number of "research questions" that could be tackled immediately, e.g. in doctoral theses. The seminar also intended to highlight the need for the digital preservation community to reach out to IT research and other research communities outside the immediate digital preservation domain, in order to jointly develop solutions.

    Geospatial information infrastructures

    Manual of Digital Earth / Editors: Huadong Guo, Michael F. Goodchild, Alessandro Annoni. Springer, 2020. ISBN: 978-981-32-9915-3
    Geospatial information infrastructures (GIIs) provide the technological, semantic, organizational and legal structure that allows for the discovery, sharing, and use of geospatial information (GI). In this chapter, we introduce the overall concept and surrounding notions such as geographic information systems (GIS) and spatial data infrastructures (SDI). We outline the history of GIIs in terms of the organizational and technological developments as well as the current state of the art, and reflect on some of the central challenges and possible future trajectories. We focus on the tension between increased needs for standardization and the ever-accelerating technological changes. We conclude that GIIs evolved as a strong underpinning contribution to the implementation of the Digital Earth vision. In the future, these infrastructures are challenged to become flexible and robust enough to absorb and embrace technological transformations and the accompanying societal and organizational implications. With this contribution, we present the reader a comprehensive overview of the field and a solid basis for reflections about future developments.

    Revising the ISSN: Involving stakeholders to adapt a bibliographic standard to its ever-changing environment

    The International Standard Serial Number (ISSN) is one of the oldest identifiers in the bibliographic domain, and also one of the most widely used and known. It was first established as an ISO standard in 1975, as ISO 3297. Originally intended for printed serials, the ISSN standard has been able to evolve over time to meet the needs of its users. It has undergone four revisions since its first release, the latest in 2007. ISSNs are now applicable to serials and to other continuing resources, whether past, present or to be published or produced in the foreseeable future, whatever the medium of publication or production. Each ISO standard regularly undergoes systematic review. In April 2016, a ballot on whether to undertake a systematic revision was issued by TC46/SC9 to all ISO member bodies. From April to September 2016, they will vote on whether to support a complete review of the ISSN standard. The evolution of publishing models, forms and formats; the emergence of new concepts and description standards in the library and publishing domains; the appearance of new identifiers; and the development of linked data are the most important factors justifying a revision of the ISSN standard.
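For context on the standard under revision: ISO 3297 defines an ISSN as eight digits, the last being a modulus-11 check digit computed over the first seven with weights running from 8 down to 2, with a remainder of 10 written as "X". A short sketch of that published algorithm:

```python
def issn_check_digit(first_seven):
    """Compute the ISSN check digit for the first seven digits (ISO 3297)."""
    # Weighted sum: weights 8, 7, ..., 2 over the seven leading digits.
    total = sum(int(d) * w for d, w in zip(first_seven, range(8, 1, -1)))
    check = (11 - total % 11) % 11
    # A check value of 10 is represented by the Roman numeral X.
    return "X" if check == 10 else str(check)
```

For example, the leading digits 0378595 yield check digit 5, giving ISSN 0378-5955.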

    The Analysis of Big Data on Cities and Regions - Some Computational and Statistical Challenges

    Big Data on cities and regions bring new opportunities and challenges to data analysts and city planners. On the one hand, they hold great promise: combining increasingly detailed data on each citizen with critical infrastructures to plan, govern and manage cities and regions, improve their sustainability, optimize processes and maximize the provision of public and private services. On the other hand, the massive sample size and high dimensionality of Big Data, together with their geo-temporal character, introduce unique computational and statistical challenges. This chapter provides an overview of the salient characteristics of Big Data and how these features drive a paradigm change in data management and analysis, as well as in the computing environment.
    Series: Working Papers in Regional Science
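One standard response to the massive-sample-size problem this abstract raises is single-pass, constant-memory computation of summary statistics, so a dataset never needs to fit in memory. Welford's online algorithm for mean and variance is a classic example (a generic sketch, not drawn from the chapter itself):

```python
def online_mean_var(stream):
    """Welford's online algorithm: one pass over the data, O(1) memory."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n          # update the running mean
        m2 += delta * (x - mean)   # accumulate sum of squared deviations
    # Sample variance uses the n-1 (Bessel-corrected) denominator.
    var = m2 / (n - 1) if n > 1 else 0.0
    return mean, var
```

Because `stream` can be any iterator (a file reader, a database cursor, a sensor feed), the same code handles a few values or billions of geo-temporal observations without change.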