30 research outputs found

    What Are You Trying to Say? The Interface as an Integral Element of Argument

    Graphical interfaces to digital scholarly editions are usually regarded as disconnected from the content of the edition, enough so that an argument has developed against the use of interfaces at all. We argue in this paper that the indifference and even hostility to interfaces is caused by a widespread incomprehension of their argumentative utility. In a pair of case studies of published digital editions, we conduct a detailed examination of the arguments their interfaces make, and compare these interface rhetorics with the stated intentions of the editors, exposing a number of contradictions between ‘word’ and ‘deed’ in the interface designs. We end by advocating for an explicit consideration of the semiotic significance of the elements of a user interface: that editors reflect on what aspect of their argument the interface expresses, and how that adds to, or perhaps subtracts from, the points they wish to make.

    Teacher's Corner: Evaluating informative hypotheses using the Bayes factor in structural equation models

    This Teacher's Corner paper introduces Bayesian evaluation of informative hypotheses for structural equation models, using the free open-source R packages bain, for Bayesian informative hypothesis testing, and lavaan, a widely used SEM package. The introduction provides a brief non-technical explanation of informative hypotheses, the statistical underpinnings of Bayesian hypothesis evaluation, and the bain algorithm. Three tutorial examples demonstrate informative hypothesis evaluation in the context of common types of structural equation models: 1) confirmatory factor analysis, 2) latent variable regression, and 3) multiple group analysis. We discuss hypothesis formulation, the interpretation of Bayes factors and posterior model probabilities, and sensitivity analysis.
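
    A minimal sketch of the kind of workflow the paper describes, assuming recent versions of lavaan and bain in which bain() accepts a fitted lavaan object directly; the model, the dataset (HolzingerSwineford1939, which ships with lavaan), the hypothesis, and the standardize argument are illustrative assumptions rather than the paper's own tutorial examples:

        # Illustrative sketch: fit a small confirmatory factor analysis in lavaan,
        # then evaluate an informative (order-constrained) hypothesis with bain.
        library(lavaan)
        library(bain)

        model <- 'visual  =~ x1 + x2 + x3
                  textual =~ x4 + x5 + x6'
        fit <- cfa(model, data = HolzingerSwineford1939, std.lv = TRUE)

        # Informative hypothesis: the loading of x1 on the visual factor exceeds
        # that of x2, which in turn exceeds that of x3 (illustrative only;
        # parameter labels follow lavaan's coef() naming).
        hypothesis <- "visual=~x1 > visual=~x2 > visual=~x3"

        # bain reports Bayes factors for the hypothesis against its complement and
        # the unconstrained model, plus posterior model probabilities.
        result <- bain(fit, hypothesis, standardize = TRUE)
        print(result)

    Roughly, the Bayes factor reported for such an order constraint weighs its fit (the posterior probability that the ordering holds) against its complexity (the corresponding prior probability), relative to an unconstrained model; the posterior model probabilities are then derived from these Bayes factors.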

    Exploring data provenance in handwritten text recognition infrastructure: Sharing and reusing ground truth data, referencing models, and acknowledging contributions. Starting the conversation on how we could get it done

    This paper discusses best practices for sharing and reusing Ground Truth in Handwritten Text Recognition infrastructures, and ways to reference and acknowledge contributions to the creation and enrichment of data within these Machine Learning systems. We discuss how one can publish Ground Truth data in a repository and, subsequently, inform others. Furthermore, we suggest appropriate citation methods for HTR data, models, and contributions made by volunteers. Moreover, when using digitised sources (digital facsimiles), it becomes increasingly important to distinguish between the physical object and the digital collection. These topics all relate to the proper acknowledgement of labour put into digitising, transcribing, and sharing Ground Truth HTR data. This also points to broader issues surrounding the use of Machine Learning in archival and library contexts, and how the community should begin to acknowledge and record both contributions and data provenance.

    Exploring Data Provenance in Handwritten Text Recognition Infrastructure: Sharing and Reusing Ground Truth Data, Referencing Models, and Acknowledging Contributions. Starting the Conversation on How We Could Get It Done

    This paper discusses best practices for sharing and reusing Ground Truth in Handwritten Text Recognition infrastructures, as well as ways to reference and acknowledge contributions to the creation and enrichment of data within these systems. We discuss how one can place Ground Truth data in a repository and, subsequently, inform others through HTR-United. Furthermore, we want to suggest appropriate citation methods for ATR data, models, and contributions made by volunteers. Moreover, when using digitised sources (digital facsimiles), it becomes increasingly important to distinguish between the physical object and the digital collection. These topics all relate to the proper acknowledgement of labour put into digitising, transcribing, and sharing Ground Truth HTR data. This also points to broader issues surrounding the use of machine learning in archival and library contexts, and how the community should begin to acknowledge and record both contributions and data provenance.

    The case of the bold button: Social shaping of technology and the digital scholarly edition

    The role and usage of a certain technology are not imparted wholesale to the intended user community—technology is not deterministic. Rather, a negotiation between users and the designers of the technology will result in its particular form and function. This article considers a side effect of these negotiations. When a certain known technology is used to convey a new technological concept or model, there is a risk that the paradigm associated by the users with the known technology will eclipse the new model and its affordances in part or in whole. The article presents a case study of this ‘paradigmatic regression’ centering on a transcription tool of the Huygens Institute in the Netherlands. It is argued that similar effects also come into play at a larger scale within the field of textual scholarship, inhibiting the exploration of the affordances of new models that do not adhere to the pervasive digital metaphor of the codex. An example of such an innovative model, the knowledge graph model, is briefly introduced to illustrate the point.

    On Not Writing a Review About Mirador: Mirador, IIIF, and the Epistemological Gains of Distributed Digital Scholarly Resources

    This piece mushroomed from a simple-enough-looking suggestion to write a review about Mirador, a viewer component for web-based image resources. While playing around with and testing Mirador, however, a lot of questions started to emerge: questions that in a scholarly sense were more significant than just the functional requirements that textual scholars and researchers of medieval sources have for an image viewer. These questions are forced upon us by the way Mirador is built, and by the assumptions it thereby makes (or that its developers make) about its role and about the larger infrastructure for scholarly resources that it is supposed to be a part of. This in turn led to a number of epistemological issues in the realm of digital textual scholarship. And so, what was intended as a simple review resulted in a long read about Mirador, about its technological context, and about digital scholarly editions as distributed resources. The first part of my story gives a straightforward review-like overview of Mirador. I then delve into the reasons that I think exist for the architectural nature of the majority of current digital scholarly editions, which are still mostly monolithic data silos. This leads to some epistemological questions about digital scholarly editions. Subsequently, I return to Mirador to investigate whether its architectural assumptions provide an answer to these epistemological issues. To estimate whether the epistemological “promise” that Mirador’s architecture holds may be easily attained, I gauge what (technical) effort is associated with building a digital edition that actually utilizes Mirador. Integrating Mirador also implies adopting the emerging International Image Interoperability Framework (IIIF) standard; a discussion of this “standard-to-be” is therefore in order. Finally, the article considers the prospects of aligning the IIIF and TEI “standards” to further the creation of distributed digital scholarly editions.

    What is Textual Scholarship and What is Not?


    Author, Editor, Engineer — Code & the Rewriting of Authorship in Scholarly Editing

    This article examines the relation of software creation to scholarship, particularly within the domain of textual scholarship and the creation of (digital) scholarly editions. To this end, both scholarly editing and the creation of software are considered with regard to the individual relationship each has to the concept of authorship. I argue that both are in fact forms of revisionary authorship, and that they are scholarly in so far as they serve to present an expression of a text that can be taken as an argument about the interpretation of that text. In addition, software's performative aspect allows it to rewrite itself and other textual expressions; its application rewrites the very process of textual scholarship. Because of these scholarly ramifications, the creation of scholarly arguments and expressions of editions by means of code should be claimed as scholarly work by its authors, i.e. the programmers. Without such proper appropriation, accountability for the scholarly process becomes problematic.

    Apparatus vs. Graph: New Models and Interfaces for Text
