50 research outputs found

    Applying the Canonical Text Services Model to the Coptic SCRIPTORIUM

    Coptic SCRIPTORIUM is a platform for interdisciplinary and computational research in Coptic texts and linguistics. The purpose of this project was to research and implement a system of stable identification for the texts and linguistic data objects in Coptic SCRIPTORIUM to facilitate their citation and reuse. We began the project with a preferred solution, the Canonical Text Services URN model, which we validated for suitability for the corpus and compared to other approaches, including HTTP URLs and Handles. The process of applying the CTS model to Coptic SCRIPTORIUM required an in-depth analysis that took into account the domain-specific scholarly research and citation practices, the structure of the textual data, and the data management workflow.
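    A CTS URN packs the namespace, the work hierarchy, and the passage reference into a single stable identifier. The Python sketch below shows how such a URN decomposes into its parts; the example URN is illustrative only, not an actual Coptic SCRIPTORIUM identifier.

        # Minimal sketch: decomposing a CTS URN of the form
        # urn:cts:{namespace}:{textgroup}.{work}.{version}:{passage}
        # The example URN is illustrative, not a real Coptic SCRIPTORIUM identifier.
        def parse_cts_urn(urn):
            parts = urn.split(":")
            if parts[:2] != ["urn", "cts"]:
                raise ValueError("not a CTS URN: %s" % urn)
            namespace, work_id = parts[2], parts[3]
            passage = parts[4] if len(parts) > 4 else None
            work_parts = work_id.split(".")
            return {
                "namespace": namespace,
                "textgroup": work_parts[0],
                "work": work_parts[1] if len(work_parts) > 1 else None,
                "version": work_parts[2] if len(work_parts) > 2 else None,
                "passage": passage,
            }

        print(parse_cts_urn("urn:cts:copticLit:authorgroup.work.version:1.2"))
        # -> {'namespace': 'copticLit', 'textgroup': 'authorgroup',
        #     'work': 'work', 'version': 'version', 'passage': '1.2'}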

    Integrating Stakeholder Input into Water Policy Development and Analysis

    Agricultural water use is becoming an issue in much of the South due to population growth. Results of projects evaluating the impacts of conservation strategies aimed at reallocating or extending the life of water supplies are being met with great skepticism by stakeholder groups. In order to gain acceptance of results, it is essential that stakeholder groups be involved from the beginning in the identification of potential water conservation strategies and be kept informed throughout the project. The objective of this paper is to review previous attempts at involving stakeholders and the methodology currently being employed in the Ogallala Aquifer Project.
    Keywords: conservation, Ogallala Aquifer, stakeholder, water policy, Agribusiness, Agricultural and Food Policy, Consumer/Household Economics, Q250, Q280

    Building a Disciplinary, World-Wide Data Infrastructure

    Sharing scientific data, with the objective of making it fully discoverable, accessible, assessable, intelligible, usable, and interoperable, requires work at the disciplinary level to define in particular how the data should be formatted and described. Each discipline has its own organization and history as a starting point, and this paper explores the way a range of disciplines, namely materials science, crystallography, astronomy, earth sciences, humanities, and linguistics, organize themselves at the international level to tackle this question. In each case, the disciplinary culture with respect to data sharing, the science drivers, the organization, and the lessons learnt are briefly described, as well as the elements of the specific data infrastructure which are or could be shared with others. Commonalities and differences are assessed. Common key elements for success are identified: data sharing should be science driven; defining the disciplinary part of interdisciplinary standards is mandatory but challenging; and sharing of applications should accompany data sharing. Incentives such as journal and funding agency requirements are also similar. For all disciplines, it also appears that social aspects are more challenging than technological ones. Governance is more diverse and linked to the organization of each discipline. CODATA, the RDA, and the WDS can facilitate the establishment of disciplinary interoperability frameworks. Being problem-driven is also a key factor of success for building bridges to enable interdisciplinary research.
    Comment: Proceedings of the session "Building a disciplinary, world-wide data infrastructure" of SciDataCon 2016, held in Denver, CO, USA, 12-14 September 2016, to be published in the ICSU CODATA Data Science Journal in 201

    Distributed Text Services (DTS): A Community-Built API to Publish and Consume Text Collections as Linked Data

    This paper presents the Distributed Text Services (DTS) API Specification, a community-built effort to facilitate the publication and consumption of texts and their structures as Linked Data. DTS was designed to be as generic as possible, providing simple operations for navigating collections, navigating within a text, and retrieving textual content. While the DTS API uses JSON-LD as the serialization format for non-textual data (e.g., descriptive metadata), TEI XML was chosen as the minimum required format for textual data served by the API in order to guarantee the interoperability of data published by DTS-compliant repositories. This paper describes the DTS API specification by means of real-world examples, discusses the key design choices that were made, and concludes by providing a list of existing repositories and libraries that support DTS.
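    As a minimal sketch of how a client might consume such an API (Python, using requests): the base URL below is a hypothetical DTS-compliant server, and the endpoint paths and query parameters follow the draft DTS specification, so they may differ between implementations and spec versions.

        # Minimal sketch of a DTS client, assuming a hypothetical DTS-compliant
        # server; endpoint paths and parameter names follow the draft DTS spec
        # and may vary between implementations and spec versions.
        import requests

        BASE = "https://example.org/dts"   # hypothetical entry point

        # The entry point advertises the collection, navigation and document
        # endpoints as JSON-LD.
        entry = requests.get(BASE).json()

        # Collection metadata (non-textual data) is served as JSON-LD.
        coll = requests.get(BASE + "/collections",
                            params={"id": "urn:example:corpus"}).json()

        # Textual content is served as TEI XML; "ref" selects a passage.
        doc = requests.get(BASE + "/document",
                           params={"id": "urn:example:corpus.text1", "ref": "1.1"})
        tei_fragment = doc.text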

    Perseids: Experimenting with Infrastructure for Creating and Sharing Research Data in the Digital Humanities

    The Perseids project provides a platform for creating, publishing, and sharing research data, in the form of textual transcriptions, annotations, and analyses. An offshoot and collaborator of the Perseus Digital Library (PDL), Perseids is also an experiment in reusing and extending existing infrastructure, tools, and services. This paper discusses infrastructure in the domain of digital humanities (DH). It outlines some general approaches to facilitating data sharing in this domain, and the specific choices we made in developing Perseids to serve that goal. It concludes by identifying lessons we have learned about sustainability in the process of building Perseids, noting some critical gaps in infrastructure for the digital humanities, and suggesting some implications for the wider community.

    Using the RDA Collections API to Shape Humanities Data


    Capitains/Nautilus: 1.0.0

    Implementation of a local CTS5 endpoint for MyCapytain
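    As a minimal sketch of what querying such an endpoint looks like: the local URL and route below are assumptions, while the request and urn parameters are the standard CTS 5 protocol.

        # Minimal sketch of querying a CTS 5 endpoint such as a local Nautilus
        # instance. The base URL and route are assumptions; "request" and "urn"
        # are the standard CTS 5 query parameters.
        import requests

        ENDPOINT = "http://localhost:5000/cts"   # assumed local Nautilus route

        # List the texts the endpoint can serve.
        capabilities = requests.get(ENDPOINT,
                                    params={"request": "GetCapabilities"}).text

        # Retrieve one passage by CTS URN (a well-known Perseus URN, for illustration).
        passage = requests.get(ENDPOINT, params={
            "request": "GetPassage",
            "urn": "urn:cts:latinLit:phi1294.phi002.perseus-lat2:1.1",
        }).text   # CTS responses are XML documents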

    flask-capitains-nemo: 0.0.2

    CTS UI for Flask.

    Capitains/Sparrow: Release 1.2.2

    Adds support for the semicolon_delimiter option to the llt segtok service; see https://github.com/perseids-project/llt-segmenter/issues/