9 research outputs found

    Panopticism and Complicity: The State of Surveillance and Everyday Oppression in Libraries, Archives, and Museums

    Get PDF
    Historically, libraries, archives, and museums—or LAM institutions—have been complicit in enacting state power by surveilling and policing communities. This article broadens previous scholars’ critiques of individual institutions to LAM institutions writ large, drawing connections between these sites and ongoing racist, classist, and oppressive designs. We do so by focusing on utilitarianism, the ethical premise used to justify panoptic systems, and on how the glorification of pragmatism reifies systems of control and oppression. First, we revisit LIS applications of Benthamian and Foucauldian ideas of panoptic power to examine the role of LAM institutions as sites of social enmity. We then describe examples of surveillance and state power as they manifest in contemporary data infrastructure and information practices, showing how LAM institutional fixations with utilitarianism reify the U.S. carceral state through norms such as the aggregation and weaponization of user data and the overreliance on metrics. We argue that such practices are akin to widespread systems of surveillance and criminalization. Finally, we reflect on how LAM workers can combat structures that rely on oppressive assumptions and claims to information authority. Pre-print first published online February 10, 202

    Moving Beyond Text Digitization in Archives Using Both Human and Technological Resources

    Get PDF
    In the past, the digitization of archival collections focused on capturing, and providing access to, plain images of textual material. In the current cultural heritage environment, particularly with the shift of many archives workers and patrons to telework during the COVID-19 pandemic, an image of the archival object alone is not enough: today, archival collections need to be searchable and transcribable. This session discusses the power of both technological developments and more traditional humanistic "people power" to enhance digitized archival collections at scale. Come and hear how archives are crowdsourcing transcriptions of digitized texts, both through "volunpeers" from the public and teleworking staff, and how they are using informatics tools such as OCR and artificial intelligence (AI) for new ways of seeing and understanding collections.
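
    As a purely illustrative sketch, not drawn from the session itself: the OCR step mentioned above can be as simple as running the open-source Tesseract engine over a scanned page. The example below assumes the pytesseract and Pillow Python packages and a hypothetical filename, page_001.jpg.

        # Minimal OCR sketch (illustrative only): turn a digitized archival page
        # into searchable plain text.
        # Assumes the Tesseract engine plus the pytesseract and Pillow packages;
        # "page_001.jpg" is a hypothetical scan filename.
        from PIL import Image
        import pytesseract

        scan = Image.open("page_001.jpg")           # one digitized page image
        text = pytesseract.image_to_string(scan)    # OCR output as plain text
        print(text)

    In practice, output like this would feed a search index or serve as a starting point for correction by staff and volunteer transcribers.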

    Cloistered Voices: English Nuns in Exile, 1550-1800

    Get PDF

    Expressing selfhood in the convent: anonymous chronicling and subsumed autobiography

    No full text
    Convent autobiography took many forms. We find it in conversion narratives and vidas por mandato, as well as in less obvious places, including chronicles, translations, poetry, saints’ lives and the myriad forms of governance documents that structured convent life. Sometimes nuns wrote under their own names, but frequently they composed anonymously. How do we locate autobiographical acts within anonymous texts? This article proposes a new genre called ‘subsumed autobiography’ to describe anonymously composed texts whose authors shape and influence their work around themes that grow out of their personal interests, theology, politics and so on. It analyses the authorial strategies deployed by the first chronicler of the English Augustinian community of St Monica's (Louvain), and pays particular attention to the themes of Catholic education, Latinity, and the legacy of Sir Thomas More. This work is predicated on an earlier article in which the anonymous author of the chronicle was identified as Mary Copley (1591/2–1669).

    Individual vs. Collaborative Methods of Crowdsourced Transcription

    Get PDF
    While online crowdsourced text transcription projects have proliferated in the last decade, there is a need within the broader field to understand differences in project outcomes as they relate to task design, as well as to experiment with different models of online crowdsourced transcription that have not yet been explored. The experiment discussed in this paper involves the evaluation of newly built tools on the Zooniverse.org crowdsourcing platform, attempting to answer the research questions: “Does the current Zooniverse methodology of multiple independent transcribers and aggregation of results render higher-quality outcomes than allowing volunteers to see previous transcriptions and/or markings by other users? How does each methodology impact the quality and depth of analysis and participation?” To answer these questions, the Zooniverse team ran an A/B experiment on the project Anti-Slavery Manuscripts at the Boston Public Library. This paper will share the results of this study and describe the process of designing the experiment and the metrics used to evaluate each transcription method. These include the comparison of aggregate transcription results with ground truth data; the evaluation of annotation methods; the time it took for volunteers to complete transcribing each dataset; and the level of engagement with other project elements, such as posting on the message board or reading supporting documentation. Particular focus will be given to the (at times) competing goals of data quality, efficiency, volunteer engagement, and user retention, all of which are of high importance for projects that focus on data from galleries, libraries, archives and museums. Ultimately, this paper aims to provide a model for impactful, intentional design and study of online crowdsourced transcription methods, as well as to shed light on the associations between project design, methodology, and outcomes.
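
    As a simplified, hypothetical illustration of the ground-truth comparison described above (not the Zooniverse team's actual aggregation pipeline): one common measure is character error rate, the edit distance between a transcription and the known-correct text divided by the length of the correct text. The strings below are placeholders.

        # Character-error-rate sketch (illustrative only, with placeholder text):
        # score a transcription against a known-correct "ground truth" line.

        def edit_distance(a: str, b: str) -> int:
            """Levenshtein distance computed by dynamic programming."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, start=1):
                curr = [i]
                for j, cb in enumerate(b, start=1):
                    curr.append(min(prev[j] + 1,                 # deletion
                                    curr[j - 1] + 1,             # insertion
                                    prev[j - 1] + (ca != cb)))   # substitution
                prev = curr
            return prev[-1]

        def character_error_rate(transcription: str, ground_truth: str) -> float:
            """Edit distance normalized by the length of the ground truth."""
            return edit_distance(transcription, ground_truth) / max(len(ground_truth), 1)

        ground_truth  = "I received your letter of the 12th instant"
        independent   = "I recieved your letter of the 12th instant"   # hypothetical aggregated result
        collaborative = "I received your letter of the 12 instant"     # hypothetical collaborative result

        print(f"independent CER:   {character_error_rate(independent, ground_truth):.3f}")
        print(f"collaborative CER: {character_error_rate(collaborative, ground_truth):.3f}")

    A full evaluation of the kind the abstract describes would also weigh the other metrics listed (annotation methods, time to completion, and volunteer engagement), which a single text-accuracy score cannot capture.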
