12 research outputs found

    Using Web Archives to Enrich the Live Web Experience Through Storytelling

    Get PDF
    Much of our cultural discourse occurs primarily on the Web. Thus, Web preservation is a fundamental precondition for multiple disciplines. Archiving Web pages into themed collections is a method for ensuring these resources are available for posterity. Services such as Archive-It exist to allow institutions to develop, curate, and preserve collections of Web resources. Understanding the contents and boundaries of these archived collections is a challenge for most people, resulting in the paradox that the larger the collection, the harder it is to understand. Meanwhile, as the sheer volume of data on the Web grows, storytelling is becoming a popular technique in social media for selecting Web resources to support a particular narrative or story. In this dissertation, we address the problem of understanding archived collections by proposing the Dark and Stormy Archive (DSA) framework, in which we integrate storytelling, social media, and Web archives. In the DSA framework, we identify, evaluate, and select candidate Web pages from archived collections that summarize the holdings of these collections, arrange them in chronological order, and then visualize these pages using tools that users are already familiar with, such as Storify. To inform our work on generating stories from archived collections, we start by building a baseline for the structural characteristics of popular (i.e., most-viewed) human-generated stories by investigating stories from Storify. Furthermore, we examined the entire population of Archive-It collections to better understand the characteristics of the collections we intend to summarize. We then filter off-topic pages from the collections using different methods to detect when an archived page in a collection has gone off-topic. We created a gold standard dataset from three Archive-It collections to evaluate the proposed methods at different thresholds. From the gold standard dataset, we identified five behaviors for the TimeMaps (lists of archived copies of a page) based on the page's aboutness. Based on a dynamic slicing algorithm, we divide the collection and cluster the pages in each slice. We then select the best representative page from each cluster based on different quality metrics (e.g., the replay quality and the quality of the snippet generated from the page). Finally, we put the selected pages in chronological order and visualize them using Storify. To evaluate the DSA framework, we obtained a ground truth dataset of hand-crafted stories from Archive-It collections generated by expert archivists. We used Amazon's Mechanical Turk to evaluate the automatically generated stories against the stories created by domain experts. The results show that the stories automatically generated by the DSA are indistinguishable from those created by human subject-domain experts, while at the same time both kinds of stories (automatic and human) are easily distinguished from randomly generated stories.
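    The selection pipeline above is described only in prose, so a minimal Python sketch of its slice/cluster/select steps follows. Everything in it is a simplifying assumption: fixed-width time slices stand in for the dissertation's dynamic slicing algorithm, grouping by host stands in for content clustering, and a single scalar score stands in for the replay and snippet quality metrics.

        # Sketch of the DSA selection steps: slice the collection by time,
        # cluster pages within each slice, keep the best page per cluster,
        # and order the picks chronologically. The slicing, "clustering",
        # and quality score are all simplifications, not the actual methods.
        from collections import defaultdict
        from dataclasses import dataclass
        from datetime import datetime
        from urllib.parse import urlparse

        @dataclass
        class Page:
            url: str
            captured: datetime
            quality: float  # stand-in for combined replay/snippet quality

        def build_story(pages: list[Page], num_slices: int = 5) -> list[Page]:
            if not pages:
                return []
            pages = sorted(pages, key=lambda p: p.captured)
            start = pages[0].captured
            span = (pages[-1].captured - start).total_seconds() or 1.0

            # 1. Slice: assign each page to a fixed-width time slice.
            slices: dict[int, list[Page]] = defaultdict(list)
            for p in pages:
                i = min(int(num_slices * (p.captured - start).total_seconds() / span),
                        num_slices - 1)
                slices[i].append(p)

            story = []
            for i in sorted(slices):
                # 2. Cluster: group the slice's pages (naively, by host).
                clusters: dict[str, list[Page]] = defaultdict(list)
                for p in slices[i]:
                    clusters[urlparse(p.url).netloc].append(p)
                # 3. Select: keep the highest-scoring page per cluster.
                story.extend(max(c, key=lambda p: p.quality)
                             for c in clusters.values())

            # 4. Order the selected pages chronologically for the story.
            return sorted(story, key=lambda p: p.captured)

    The design point the sketch preserves is that selection happens per time slice, so every period of the collection contributes pages to the story rather than the most-captured period dominating it.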

    Tools Managing Seed URLs (Detecting Off-Topic Pages)

    Get PDF
    PDF of a PowerPoint presentation from the Columbia University Web Archiving Collaboration: New Tools and Models Conference in New York, New York, June 4-5, 2015. Also available on Slideshare.

    Towards computational reproducibility: researcher perspectives on the use and sharing of software

    Get PDF
    Research software, which includes both source code and executables used as part of the research process, presents a significant challenge for efforts aimed at ensuring reproducibility. In order to inform such efforts, we conducted a survey to better understand the characteristics of research software as well as how it is created, used, and shared by researchers. Based on the responses of 215 participants, representing a range of research disciplines, we found that researchers create, use, and share software in a wide variety of forms and for a wide variety of purposes, including data collection, data analysis, data visualization, data cleaning and organization, and automation. More participants indicated that they use open source software than commercial software. While a relatively small number of programming languages (e.g., Python, R, JavaScript, C++, MATLAB) are used by a large number of researchers, there is a long tail of languages used by relatively few. Between-group comparisons revealed that significantly more participants from computer science write source code and create executables than participants from other disciplines. Differences between researchers from computer science and those from other disciplines in knowledge of best practices for software creation and sharing were not statistically significant. While many participants indicated that they draw a distinction between the sharing and preservation of software, related practices and perceptions were often not aligned with those of the broader scholarly communications community.

    Storytelling for Summarizing Collections in Web Archives

    Get PDF
    PDF of a PowerPoint presentation from the Coalition for Networked Information (CNI) Spring 2016 Membership Meeting in San Antonio, Texas, April 5, 2016. Also available on Slideshare.

    Tools for Managing the Past Web

    Get PDF
    PDF of a PowerPoint presentation from the Archive-It Partners Meeting in Montgomery, Alabama, November 18, 2014. Also available on Slideshare.

    Data: Researcher Perspectives on the Use and Sharing of Software

    No full text

    yasmina85/DSA-stories: Release v1.0 of the software for generating stories from archived collections

    No full text
    The stories that were generated using the DSA framework.

    Access Patterns for Robots and Humans in Web Archives

    No full text
    Although user access patterns on the live web are well understood, there has been no corresponding study of how users, both humans and robots, access web archives. Based on samples from the Internet Archive's public Wayback Machine, we propose a set of basic usage patterns: Dip (a single access), Slide (the same page at different archive times), Dive (different pages at approximately the same archive time), and Skim (lists of what pages are archived, i.e., TimeMaps). Robots are limited almost exclusively to Dips and Skims, but human accesses are more varied across all four types. Robots outnumber humans 10:1 in terms of sessions, 5:4 in terms of raw HTTP accesses, and 4:1 in terms of megabytes transferred. Robots almost always access TimeMaps (95% of accesses), but humans predominantly access the archived web pages themselves (82% of accesses). In terms of unique archived web pages, there is no overall preference for a particular time, but the recent past (within the last year) shows significant repeat accesses.
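    Because the four patterns are defined operationally, a small classifier makes them concrete. The session model below (a list of (page_url, memento_datetime, is_timemap) requests) and the "approximately the same archive time" window are assumptions for illustration, not the paper's session-building rules.

        # Rough classification of one archive session into the patterns
        # named above. Real sessions can mix patterns; this picks a single
        # best label, with "Mixed" as a fallback.
        from datetime import datetime, timedelta

        def classify_session(requests, window=timedelta(days=30)):
            """requests: non-empty list of (page_url, memento_datetime, is_timemap)."""
            if all(is_tm for _, _, is_tm in requests):
                return "Skim"              # browsing TimeMaps (capture lists)
            pages = [(url, dt) for url, dt, is_tm in requests if not is_tm]
            if len(pages) == 1:
                return "Dip"               # a single archived-page access
            urls = {url for url, _ in pages}
            times = [dt for _, dt in pages]
            if len(urls) == 1:
                return "Slide"             # same page at different archive times
            if max(times) - min(times) <= window:
                return "Dive"              # different pages, ~same archive time
            return "Mixed"                 # combination of the basic patterns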

    Who and What Links to the Internet Archive

    Get PDF
    The Internet Archive's (IA) Wayback Machine is the largest and oldest public web archive and has become a significant repository of our recent history and cultural heritage. Despite its importance, there has been little research about how it is discovered and used. Based on web access logs, we analyze what users are looking for, why they come to IA, where they come from, and how pages link to IA. We find that users request English pages the most, followed by European languages. Most human users come to web archives because they do not find the requested pages on the live web. About 65% of the requested archived pages no longer exist on the live web. We find that more than 82% of human sessions connect to the Wayback Machine via referrals from other web sites, while only 15% of robots have referrers. Most of the links (86%) from websites are to individual archived pages at specific points in time, and of those, 83% no longer exist on the live web.
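    As a concrete (and much simplified) example of this kind of log analysis, the sketch below pulls the referrer and user agent out of access log lines and tallies referred versus direct traffic for humans and robots. The Apache combined log format and the substring-based robot test are assumptions, not the paper's classification method.

        # Tally referred vs. direct requests for humans and robots from
        # combined-format access logs (a simplified stand-in for the
        # paper's referrer analysis).
        import re

        LOG_RE = re.compile(
            r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
            r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')

        def tally(lines):
            counts = {"human": {"referred": 0, "direct": 0},
                      "robot": {"referred": 0, "direct": 0}}
            for line in lines:
                m = LOG_RE.match(line)
                if not m:
                    continue
                kind = ("robot" if re.search(r"bot|crawl|spider", m["agent"], re.I)
                        else "human")
                counts[kind]["referred" if m["referrer"] not in ("", "-")
                             else "direct"] += 1
            return counts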