43 research outputs found

    Applying the Canonical Text Services Model to the Coptic SCRIPTORIUM

    Get PDF
    Coptic SCRIPTORIUM is a platform for interdisciplinary and computational research in Coptic texts and linguistics. The purpose of this project was to research and implement a system of stable identification for the texts and linguistic data objects in Coptic SCRIPTORIUM to facilitate their citation and reuse. We began the project with a preferred solution, the Canonical Text Services URN model, which we validated for suitability for the corpus and compared it to other approaches, including HTTP URLs and Handles. The process of applying the CTS model to Coptic SCRIPTORIUM required an in-depth analysis that took into account the domain-specific scholarly research and citation practices, the structure of the textual data, and the data management workflow.
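
    A CTS URN identifies a text (and optionally a passage within it) through a fixed sequence of colon-separated components. The sketch below parses that generic structure; the layout follows the CTS URN model described above, but the example URN itself is invented for illustration and is not an actual Coptic SCRIPTORIUM identifier.

```python
# Minimal sketch of CTS URN parsing, assuming the generic layout
# urn:cts:<namespace>:<textgroup>.<work>.<version>:<passage>.
# The URN used below is illustrative only.

def parse_cts_urn(urn: str) -> dict:
    """Split a CTS URN into namespace, work identifier, and passage."""
    parts = urn.split(":")
    if parts[:2] != ["urn", "cts"]:
        raise ValueError(f"not a CTS URN: {urn}")
    result = {"namespace": parts[2], "work": parts[3]}
    # The passage component (e.g. a chapter or verse range) is optional.
    result["passage"] = parts[4] if len(parts) > 4 else None
    return result

urn = "urn:cts:copticLit:shenoute.abraham.monbya:1-5"
print(parse_cts_urn(urn))
# {'namespace': 'copticLit', 'work': 'shenoute.abraham.monbya', 'passage': '1-5'}
```

    Because every component is machine-separable, a URN of this shape can serve both as a stable citation string in scholarship and as a retrieval key in a repository.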

    Integrating Stakeholder Input into Water Policy Development and Analysis

    Get PDF
    Agricultural water use is becoming an issue in much of the South due to population growth. Results of projects evaluating the impacts of conservation strategies aimed at reallocating or extending the life of water supplies are being met with great skepticism by stakeholder groups. In order to gain acceptance of results, it is essential that stakeholder groups be involved from the beginning in the identification of potential water conservation strategies and be kept informed throughout the project. The objective of this paper is to review previous attempts at involving stakeholders and the methodology currently being employed in the Ogallala Aquifer Project.
    Keywords: conservation, Ogallala Aquifer, stakeholder, water policy, Agribusiness, Agricultural and Food Policy, Consumer/Household Economics, Q250, Q280

    Building a Disciplinary, World-Wide Data Infrastructure

    Full text link
    Sharing scientific data, with the objective of making it fully discoverable, accessible, assessable, intelligible, usable, and interoperable, requires work at the disciplinary level to define in particular how the data should be formatted and described. Each discipline has its own organization and history as a starting point, and this paper explores the way a range of disciplines, namely materials science, crystallography, astronomy, earth sciences, humanities and linguistics, get organized at the international level to tackle this question. In each case, the disciplinary culture with respect to data sharing, science drivers, organization and lessons learnt are briefly described, as well as the elements of the specific data infrastructure which are or could be shared with others. Commonalities and differences are assessed. Common key elements for success are identified: data sharing should be science driven; defining the disciplinary part of the interdisciplinary standards is mandatory but challenging; sharing of applications should accompany data sharing. Incentives such as journal and funding agency requirements are also similar. For all, it also appears that social aspects are more challenging than technological ones. Governance is more diverse, and linked to the discipline organization. CODATA, the RDA and the WDS can facilitate the establishment of disciplinary interoperability frameworks. Being problem-driven is also a key factor of success for building bridges to enable interdisciplinary research.
    Comment: Proceedings of the session "Building a disciplinary, world-wide data infrastructure" of SciDataCon 2016, held in Denver, CO, USA, 12-14 September 2016, to be published in ICSU CODATA Data Science Journal in 201

    The Linked Fragment: TEI and the encoding of text reuses of lost authors

    Get PDF
    This paper presents a joint project of the Humboldt Chair of Digital Humanities at the University of Leipzig, the Perseus Digital Library at Tufts University, and the Harvard Center for Hellenic Studies to produce a new open series of Greek and Latin fragmentary authors. Such authors are lost and their works are preserved only thanks to quotations and text reuses in later texts. The project is undertaking two tasks: (1) the digitization of paper editions of fragmentary works with links to the source texts from which the fragments have been extracted; (2) the production of born-digital editions of fragmentary works. The ultimate goals are the creation of open, linked, machine-actionable texts for the study and advancement of the field of Classical textual fragmentary heritage and the development of a collaborative environment for crowdsourced annotations. These goals are being achieved by implementing the Perseids Platform and by encoding the Fragmenta Historicorum Graecorum, one of the most important and comprehensive collections of fragmentary authors.

    Distributed Text Services (DTS): A Community-Built API to Publish and Consume Text Collections as Linked Data

    Get PDF
    This paper presents the Distributed Text Services (DTS) API specification, a community-built effort to facilitate the publication and consumption of texts and their structures as Linked Data. DTS was designed to be as generic as possible, providing simple operations for navigating collections, navigating within a text, and retrieving textual content. While the DTS API uses JSON-LD as the serialization format for non-textual data (e.g., descriptive metadata), TEI XML was chosen as the minimum required format for textual data served by the API in order to guarantee the interoperability of data published by DTS-compliant repositories. This paper describes the DTS API specification by means of real-world examples, discusses the key design choices that were made, and concludes by providing a list of existing repositories and libraries that support DTS.
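
    A DTS Collections response is plain JSON-LD, so a client can consume it with a standard JSON parser. The sketch below walks a hand-written response shaped after the collection model described above; the field names ("@id", "title", "member", "totalItems") follow the DTS pattern, but the context URL, identifiers, and titles are invented placeholders, not records from a real repository.

```python
import json

# Hand-written example of a DTS-style Collections response.
# All identifiers and the @context URL below are placeholders for illustration.
sample_response = """
{
  "@context": "https://example.org/dts/context.json",
  "@id": "default",
  "@type": "Collection",
  "title": "Example Text Repository",
  "totalItems": 2,
  "member": [
    {"@id": "urn:cts:greekLit:tlg0012", "@type": "Collection", "title": "Homer"},
    {"@id": "urn:cts:latinLit:phi0690", "@type": "Collection", "title": "Vergil"}
  ]
}
"""

collection = json.loads(sample_response)
# Each member is itself a collection that can be navigated further.
titles = [member["title"] for member in collection["member"]]
print(collection["title"], titles)
# Example Text Repository ['Homer', 'Vergil']
```

    Keeping the non-textual layer in JSON-LD means generic Linked Data tooling works out of the box, while the textual payload itself is delivered separately as TEI XML.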

    Perseids: Experimenting with Infrastructure for Creating and Sharing Research Data in the Digital Humanities

    No full text
    The Perseids project provides a platform for creating, publishing, and sharing research data, in the form of textual transcriptions, annotations and analyses. An offshoot and collaborator of the Perseus Digital Library (PDL), Perseids is also an experiment in reusing and extending existing infrastructure, tools, and services. This paper discusses infrastructure in the domain of digital humanities (DH). It outlines some general approaches to facilitating data sharing in this domain, and the specific choices we made in developing Perseids to serve that goal. It concludes by identifying lessons we have learned about sustainability in the process of building Perseids, noting some critical gaps in infrastructure for the digital humanities, and suggesting some implications for the wider community.

    Capitains/Nautilus: 1.0.0

    No full text
    Implementation of a local CTS5 endpoint for MyCapytain
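
    A CTS5 endpoint such as the one Nautilus provides is queried through URL parameters like request=GetPassage and a target URN. The sketch below only builds such a request URL; the base address is a hypothetical local deployment, and the URN is a standard Perseus-style example, not a guarantee of what a given Nautilus instance serves.

```python
from urllib.parse import urlencode

# Hypothetical local Nautilus deployment; adjust host, port, and path
# to match your own configuration.
base = "http://localhost:5000/api/cts"

# GetPassage is one of the standard CTS request types; the URN identifies
# the passage to retrieve (here, an illustrative Perseus-style URN).
params = {
    "request": "GetPassage",
    "urn": "urn:cts:latinLit:phi1294.phi002.perseus-lat2:1.1",
}
url = f"{base}?{urlencode(params)}"
print(url)
```

    The response to such a request is XML wrapping the requested passage, which a client like MyCapytain can parse into passage objects.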

    flask-capitains-nemo: 0.0.2

    No full text
    CTS UI for Flask

    Capitains/Sparrow: Release 1.2.2

    No full text
    Adds support for the semicolon_delimiter option in the llt segtok service; see https://github.com/perseids-project/llt-segmenter/issues/
