
    Building and Using Digital Libraries for ETDs

    Despite the high value of electronic theses and dissertations (ETDs), the global collection has seen limited use. To extend such use, a new approach to building digital libraries (DLs) is needed. Fortunately, in recent decades a vast amount of “gray literature” has become available through a diverse set of institutional repositories as well as regional and national libraries and archives. Many of the works in those collections are ETDs, often freely available in keeping with the open-access movement, but such access is limited by the services of the supporting information systems. As explained through a set of scenarios, ETDs can better meet the needs of diverse stakeholders if customer discovery methods are used to identify personas and user roles, along with their goals and tasks. Hence, DLs with a rich collection of services, including newer, more advanced ones, can be organized so that those services, and expanded workflows building on them, can be adapted to meet personalized goals as well as traditional ones such as discovery and exploration.

    WARCreate: Create Wayback-Consumable WARC Files From Any Webpage

    The Internet Archive's Wayback Machine is the most common way that typical users interact with web archives. The Internet Archive uses the Heritrix web crawler to transform pages on the publicly available web into Web ARChive (WARC) files, which can then be accessed using the Wayback Machine. Because Heritrix can only access the publicly available web, many personal pages (e.g., password-protected pages and social media pages) cannot be easily archived into the standard WARC format. We have created a Google Chrome extension, WARCreate, that allows a user to create a WARC file from any webpage. Using this tool, content that might otherwise have been lost in time can be archived in a standard format by any user. This tool provides a way for casual users to easily create archives of personal online content. This is one of the first steps in resolving issues of long-term storage, maintenance, and access of personal digital assets that have emotional, intellectual, and historical value to individuals.
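WARCreate itself is a browser extension written in JavaScript, but the WARC files it emits follow the same open standard that any tool can write. As a minimal illustrative sketch of that output format, not the extension's code, the snippet below uses Python with the warcio and requests libraries (an assumption of this sketch; the URL and output filename are placeholders) to capture a single public page into a Wayback-consumable WARC file.

```python
from io import BytesIO

import requests
from warcio.warcwriter import WARCWriter
from warcio.statusandheaders import StatusAndHeaders

def archive_page(url, out_path="page.warc.gz"):
    # 'identity' keeps the payload byte-for-byte, so the recorded
    # Content-Encoding header stays truthful inside the WARC record
    resp = requests.get(url, headers={"Accept-Encoding": "identity"})
    with open(out_path, "ab") as out:
        writer = WARCWriter(out, gzip=True)
        # Rebuild the HTTP status line and headers for the 'response' record
        http_headers = StatusAndHeaders(
            "{} {}".format(resp.status_code, resp.reason),
            list(resp.raw.headers.items()),
            protocol="HTTP/1.1",
        )
        record = writer.create_warc_record(
            url, "response",
            payload=BytesIO(resp.content),
            http_headers=http_headers,
        )
        writer.write_record(record)

archive_page("https://example.com/")  # placeholder URL
```

A crawler like Heritrix writes the same record layout at scale; the point of WARCreate is producing it from inside an authenticated browser session, where a crawler cannot reach.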

    Detecting, Modeling, and Predicting User Temporal Intention

    The content of social media has grown exponentially in recent years, and its role has evolved from narrating life events to actually shaping them. Unfortunately, content posted and shared in social networks is vulnerable and prone to loss or change, rendering the context associated with it (a tweet, post, status, or the like) meaningless. There is an inherent value in maintaining the consistency of such social records, as in some cases they take over the task of being the first draft of history: collections of these social posts narrate the pulse of the street during historic events, protests, riots, elections, wars, disasters, and more, as shown in this work. The user sharing the resource has an implicit temporal intent: either the state of the resource at the time of sharing, or the current state of the resource at the time of the reader “clicking.” In this research, we propose a model to detect and predict temporal intention: that of the author upon sharing content in the social network, and that of the reader upon resolving this content. To build this model, we first examine the three aspects of the problem: the resource, time, and the user. For the resource, we start by analyzing the content on the live web and its persistence. We noticed that a portion of the resources shared in social media disappear, and with further analysis we uncovered a relationship between this disappearance and time: we lose around 11% of the resources after one year of sharing and a steady 7% every following year. We then turn to the public archives, and our analysis reveals that not all posted resources are archived; that, even among those that are, an average of 8% per year disappears from the archives; and that in some cases the archived content is heavily damaged. These observations show that the archives are not well enough populated to consistently and reliably reconstruct a missing resource as it existed at the time of sharing. To analyze the concept of time, we devised several experiments to estimate the creation dates of the shared resources, and we developed Carbon Date, a tool that successfully estimated the correct creation dates for 76% of the test sets. Beyond creation, we wanted to measure if and how resources change with time, so we conducted a longitudinal study on a data set of very recently published tweet-resource pairs, recording observations hourly. We found that after just one hour, ~4% of the resources had changed by ≥30%, while after a day the rate slowed, with ~12% of the resources having changed by ≥40%. For the third and final component of the problem, the user, we conducted behavioral analysis experiments and built a data set of 1,124 instances manually labeled by test subjects. Temporal intention proved to be a difficult concept for average users to understand, so we developed our Temporal Intention Relevancy Model (TIRM) to transform the highly subjective temporal intention problem into the more easily understood idea of relevancy between a tweet and the resource it links to, and of change in the resource through time. On our collected data set, TIRM produced a 90.27% success rate. Furthermore, we extended TIRM to build a time-based model that predicts, at the time of posting, whether temporal intention will change or remain steady, with 77% accuracy, and we built a service API around this model to provide predictions, along with a few prototypes.
Future tools could implement TIRM to assist users in pushing copies of shared resources into public web archives to ensure the integrity of the historical record. Additional tools could assist the mining of the existing social media corpus by dereferencing the intended version of the shared resource based on the intention strength and the time between tweeting and mining.
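The ≥30% and ≥40% change levels reported above imply some similarity measure between a resource's state at share time and a later observation. The abstract does not specify which measure was used, so the toy sketch below uses k-word-shingle Jaccard distance purely as an illustration of how such drift could be scored; the function names and the 30% threshold wiring are this sketch's own.

```python
def shingles(text, k=5):
    """Set of k-word shingles; a crude fingerprint of page content."""
    tokens = text.split()
    if len(tokens) < k:
        return {tuple(tokens)} if tokens else set()
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def change_ratio(text_at_share, text_now, k=5):
    """1 - Jaccard similarity of shingle sets: 0.0 = unchanged, 1.0 = fully replaced."""
    a, b = shingles(text_at_share, k), shingles(text_now, k)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Flag resources that drifted past the 30% level discussed above
if change_ratio("breaking news about the protest downtown", "page not found") >= 0.30:
    print("resource has likely drifted from its state at share time")
```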

    A review of the role of sensors in mobile context-aware recommendation systems

    Recommendation systems specialize in offering suggestions about specific items of different types (e.g., books, movies, restaurants, and hotels) that could be interesting for the user. They have attracted considerable research attention due to their benefits as well as their commercial interest. In recent years in particular, the concept of the context-aware recommendation system has emerged, emphasizing the importance of considering the context of the situations in which the user is involved in order to provide more accurate recommendations. Detecting that context requires sensors of different types, which measure different context variables. Despite the relevant role played by sensors in the development of context-aware recommendation systems, sensors and recommendation approaches are two fields usually studied independently. In this paper, we provide a survey on the use of sensors for recommendation systems. Our contribution can be seen from a double perspective. On the one hand, we give an overview of existing techniques used to detect context factors that could be relevant for recommendation. On the other hand, we illustrate the usefulness of sensors by considering different recommendation use cases and scenarios.
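To make the survey's central idea concrete, here is a minimal hypothetical sketch of contextual pre-filtering, one common way sensor-derived context (e.g., location from GPS, time of day from the system clock) can feed a recommender: the sensed context selects the slice of past ratings to score against. The data, item names, and functions are invented for illustration and are not from the paper.

```python
from collections import defaultdict

# Toy ratings: (user, item, rating, sensed context at rating time)
RATINGS = [
    ("u1", "cafe_a", 5, {"time": "morning", "weather": "sunny"}),
    ("u2", "cafe_a", 4, {"time": "morning", "weather": "rainy"}),
    ("u1", "bar_b",  4, {"time": "evening", "weather": "rainy"}),
    ("u2", "park_c", 5, {"time": "morning", "weather": "sunny"}),
]

def recommend(user, context, top_n=3):
    """Rank items by mean rating among ratings whose context matches the
    sensed context, skipping items the target user has already rated."""
    seen = {item for u, item, _, _ in RATINGS if u == user}
    scores = defaultdict(list)
    for u, item, rating, ctx in RATINGS:
        if item not in seen and all(ctx.get(k) == v for k, v in context.items()):
            scores[item].append(rating)
    ranked = sorted(scores, key=lambda i: sum(scores[i]) / len(scores[i]), reverse=True)
    return ranked[:top_n]

# Context as a deployed system might derive it from its sensors
print(recommend("u1", {"time": "morning"}))  # -> ['park_c']
```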

    Diving in at the deep end: the value of alternative in-situ approaches for systematic library search

    OPAC interfaces, still the dominant access point to library catalogs, support systematic search but are problematic for open-ended exploration and generally unpopular with visitors. As a result, libraries are starting to adopt simplified search paradigms as exemplified by web-search systems. This is a problem, considering that systematic search is a crucial skill in light of today’s abundance of digital information. Inspired by novel approaches to facilitating search, we designed CollectionDiver, an installation for supporting systematic search in public libraries. The CollectionDiver combines tangible and large-display direct-touch interaction with a visual representation of search criteria and filters. We conducted an in-situ qualitative study to compare participants’ search approaches on the CollectionDiver with those on the OPAC interface. Our findings show that while both systems support a similar search process, the CollectionDiver (1) makes systematic search more accessible, (2) motivates proactive search approaches by (3) adding transparency to the search process, and (4) facilitates shared search experiences. We discuss the CollectionDiver’s design concepts to stimulate new ideas toward supporting engaging approaches to systematic search in the library context and beyond.

    Co-reading: investigating collaborative group reading.

    Collaborative reading, or co-reading as we call it, is ubiquitous: it occurs, for instance, in classrooms, book clubs, and in less coordinated ways through mass media. While individual digital reading has been the subject of much investigation, research into co-reading is scarce. We report a two-phase field study of group reading to identify an initial set of user requirements. A co-reading interface is then designed that facilitates the coordination of group reading by providing temporary ‘Point-out’ markers to indicate specific locations within documents. A user study compared this new system with collaborative reading on paper, with a positive outcome; the differences in user behavior between paper and the new interface reveal intriguing insights into user needs and the potential benefits of digital media for co-reading.

    Using Web Archives to Enrich the Live Web Experience Through Storytelling

    Much of our cultural discourse occurs primarily on the Web. Thus, Web preservation is a fundamental precondition for multiple disciplines. Archiving Web pages into themed collections is a method for ensuring these resources are available for posterity; services such as Archive-It exist to allow institutions to develop, curate, and preserve collections of Web resources. Understanding the contents and boundaries of these archived collections is a challenge for most people, resulting in a paradox: the larger the collection, the harder it is to understand. Meanwhile, as the sheer volume of data on the Web grows, storytelling has become a popular technique in social media for selecting Web resources to support a particular narrative or story. In this dissertation, we address the problem of understanding archived collections by proposing the Dark and Stormy Archive (DSA) framework, in which we integrate storytelling social media and Web archives. In the DSA framework, we identify, evaluate, and select candidate Web pages from archived collections that summarize the holdings of these collections, arrange them in chronological order, and then visualize these pages using tools that users are already familiar with, such as Storify. To inform our work of generating stories from archived collections, we start by building a baseline for the structural characteristics of popular (i.e., most-viewed) human-generated stories by investigating stories from Storify. Furthermore, we examined the entire population of Archive-It collections to better understand the characteristics of the collections we intend to summarize. We then filter off-topic pages from the collections using different methods to detect when an archived page in a collection has gone off-topic, and we created a gold standard data set from three Archive-It collections to evaluate the proposed methods at different thresholds. From the gold standard data set, we identified five behaviors for the TimeMaps (a list of archived copies of a page) based on the page’s aboutness. Based on a dynamic slicing algorithm, we divide the collection and cluster the pages in each slice. We then select the best representative page from each cluster based on different quality metrics (e.g., replay quality and the quality of the snippet generated from the page). Finally, we put the selected pages in chronological order and visualize them using Storify. To evaluate the DSA framework, we obtained a ground truth data set of hand-crafted stories from Archive-It collections generated by expert archivists, and we used Amazon’s Mechanical Turk to evaluate the automatically generated stories against the stories created by domain experts. The results show that the stories automatically generated by the DSA are indistinguishable from those created by human domain experts, while at the same time both kinds of stories (automatic and human) are easily distinguished from randomly generated stories.
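The pipeline the abstract outlines (slice the collection by time, cluster within each slice, pick a representative page per cluster by quality, order chronologically) can be sketched roughly as below. This is not the DSA implementation: it substitutes fixed-size slices for the dissertation's dynamic slicing algorithm, and the clustering and quality functions are placeholders to be swapped for real ones.

```python
from datetime import datetime

def generate_story(pages, n_slices=4, cluster_fn=None, quality_fn=None):
    """pages: list of dicts like {"url": ..., "time": datetime, "text": ...}.
    Slice chronologically, cluster within each slice, pick the best page
    per cluster, and return the picks in chronological order."""
    pages = sorted(pages, key=lambda p: p["time"])
    slice_size = max(1, len(pages) // n_slices)  # fixed-size stand-in for dynamic slicing
    quality = quality_fn or (lambda p: len(p["text"]))  # placeholder quality metric
    story = []
    for i in range(0, len(pages), slice_size):
        chunk = pages[i:i + slice_size]
        clusters = cluster_fn(chunk) if cluster_fn else [chunk]  # placeholder: one cluster per slice
        story.extend(max(cluster, key=quality) for cluster in clusters if cluster)
    return sorted(story, key=lambda p: p["time"])

# Tiny usage example with invented data
story = generate_story([
    {"url": "https://example.org/1", "time": datetime(2015, 1, 1), "text": "first report"},
    {"url": "https://example.org/2", "time": datetime(2015, 3, 1), "text": "a longer follow-up report"},
])
print([p["url"] for p in story])
```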