
    Toward a model for digital tool criticism: Reflection as integrative practice

    In the past decade, an increasing set of digital tools has been developed with which digital sources can be selected, analyzed, and presented. Many tools go beyond keyword search and perform different types of analysis, aggregation, mapping, and linking of data selections, which transforms materials and creates new perspectives, thereby changing the way scholars interact with and perceive their materials. These tools, together with the massive amount of digital and digitized data available for humanities research, put a strain on traditional humanities research methods. Currently, there is no established method of assessing the role of digital tools in the research trajectory of humanities scholars. There is no consensus on what questions researchers should ask themselves to evaluate digital sources beyond those of traditional analogue source criticism. This article aims to contribute to a better understanding of digital tools and the discussion of how to evaluate and incorporate them in research, based on findings from a digital tool criticism workshop held at the 2017 Digital Humanities Benelux conference. The overall goal of this article is to provide insight into the actual use and practice of digital tool criticism, offer a ready-made format for a workshop on digital tool criticism, give insight into aspects that play a role in digital tool criticism, propose an elaborate model for digital tool criticism that can be used as common ground for further conversations in the field, and finally, provide recommendations for future workshops, researchers, data custodians, and tool builders.

    Interstitial Data: Tracing Metadata in Archival Search Systems

    Metadata do not merely give explicit information about records in the archive but can also be considered a source of information about the (historical) context in which they are created. This chapter combines the insights of critical data studies and archival studies to formulate a hands-on approach to tracing metadata in archival search systems. The approach, which builds further on Loukissas’s local reading strategies, consists of two distinct phases: an exploration phase to trace and select and an analysis phase to trace and compare. The author concludes that a lot of data necessary to understanding metadata in search systems is hidden—different forms of what can be considered “interstitial data.”

    Developing Data Stories as Enhanced Publications in Digital Humanities

    This paper discusses the development of data-driven stories and the editorial processes underlying their production. Such ‘data stories’ have proliferated in journalism but are also increasingly developed within academia. Although ‘data stories’ lack a clear definition, there are similarities between the processes that underlie journalistic and academic data stories. However, there are also differences, specifically when it comes to epistemological claims. In this paper, data stories as a phenomenon and their use in journalism and in the Humanities form the context for the editorial protocol developed for CLARIAH Media Suite Data Stories.

    Data Stories in CLARIAH: Developing a Research Infrastructure for Storytelling with Heritage and Culture Data

    Online stories, from blog posts to journalistic articles to scientific publications, are commonly illustrated with media (e.g. images, audio clips) or statistical summaries (e.g. tables and graphs). Such “illustrations” are the result of a process of acquiring, parsing, filtering, mining, representing, refining and interacting with data [3]. Unfortunately, such processes are typically taken for granted and seldom mentioned in the story itself. Although recently a wide variety of interactive data visualisation techniques have been developed (see e.g., [6]), in many cases the illustrations in such publications are static; this prevents different audiences from engaging with the data and analyses as they desire. In this paper, we share our experiences with the concept of “data stories”, which tackles both issues, enhancing opportunities for outreach, reporting on scientific inquiry, and FAIR data representation [9]. In journalism, data stories are becoming widely accepted as the output of a process that is in many aspects similar to that of a computational scholar: gaining insights by analyzing data sets using (semi-)automatized methods and presenting these insights using (interactive) visualizations and other textual outputs based on data [4] [7] [5] [6]. In the context of scientific output, data stories can be regarded as digital “publications enriched with or linking to related research results, such as research data, workflows, software, and possibly connections among them” [1]. However, as infrastructure for (peer-reviewed) enhanced publications is in an early stage of development (see e.g., [2]), scholarly data stories are currently often produced as blog posts, discussing a relevant topic. These may be accompanied by illustrations not limited to a single graph or image but characterized by different forms of interactivity: readers can, for instance, change the perspective or zoom level of graphs, or cycle through images or audio clips.
Having experimented successfully with various types and uses of data stories in the CLARIAH project, we are working towards a more generic, stable and sustainable infrastructure to create, publish, and archive data stories. This includes providing environments for reproduction of data stories and verification of data via “close reading”. From an infrastructure perspective, this involves the provisioning of services for persistent storage of data (e.g. triple stores), data registration and search (registries), data publication (SPARQL endpoints, search APIs), data visualization, and (versioned) query creation. These services can be used by environments to develop data stories, whether or not they facilitate additional data analysis steps. For data stories that make use of data analysis, for example via Jupyter Notebooks [8], the infrastructure also needs to take computational requirements (load balancing) and restrictions (security) into account. Also, when data sets are restricted for copyright or privacy reasons, authentication and authorization infrastructure (AAI) is required. The large and rich data sets in (European) heritage archives, which are increasingly made interoperable using FAIR principles, are fertile ground for data stories. We therefore hope to present our experiences with data stories, share our strategy for a more generic solution, and receive feedback on shared challenges.
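The service stack sketched in this abstract (triple stores exposed via SPARQL endpoints, feeding interactive visualizations) can be illustrated with a minimal example: building a standard SPARQL-over-HTTP request and flattening the JSON results for charting. The endpoint URL and query below are hypothetical placeholders for illustration, not the actual CLARIAH services or data model.

```python
import urllib.parse
import urllib.request

# Hypothetical SPARQL endpoint; the real CLARIAH services differ.
ENDPOINT = "https://example.org/sparql"

# An illustrative aggregation query: count catalogue items per year.
QUERY = """
PREFIX dct: <http://purl.org/dc/terms/>
SELECT ?year (COUNT(?item) AS ?n)
WHERE { ?item dct:date ?year . }
GROUP BY ?year
ORDER BY ?year
"""

def build_request(endpoint: str, query: str) -> urllib.request.Request:
    """Build a SPARQL-over-HTTP GET request asking for JSON results."""
    params = urllib.parse.urlencode({"query": query})
    return urllib.request.Request(
        f"{endpoint}?{params}",
        headers={"Accept": "application/sparql-results+json"},
    )

def rows(result: dict) -> list[dict]:
    """Flatten the SPARQL JSON results format into plain dicts,
    ready to feed an interactive chart in a data story."""
    return [
        {var: binding[var]["value"] for var in binding}
        for binding in result["results"]["bindings"]
    ]

req = build_request(ENDPOINT, QUERY)
# urllib.request.urlopen(req) would execute the query against a live endpoint;
# the decoded JSON response would then be passed through rows() for plotting.
```

A versioned copy of such a query, stored alongside the story, is what makes the “reproduction and verification via close reading” described above possible: readers can re-run the exact query behind each illustration.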

    Febrile seizures


    Tutorial: Reconstructing the Genealogy of a TV-Clip

    A detailed tutorial by Jasmijn van Gorp (Utrecht University) about reconstructing the genealogy of a TV-clip in the CLARIAH Media Suite. The steps in this tutorial focus on rebroadcasts and reuse, as well as on archival transformations of the record, and will therefore help the reader obtain basic ‘video forensic’ skills for assessing a video’s origins.