
    McLuhan.js: Live Net Art Performance with Remote Web Browsers

    McLuhan.js is a media art performance platform which engages with the web browser as a source of form and content. The platform enables a new performance scenario: a performer creates live net art actions in the browsers of remote viewers. McLuhan.js contains client-side and server-side tools for creating net art, as well as a live coding performance interface. These tools are designed to remotely control real-time collages of web media, browser windows, and computer art tropes. While the McLuhan.js toolkit is deliberately of its time, it is so because it participates in a tradition of 20th-century artists who reflected on their daily lives by incorporating contemporary communications media into their creative practice. These artists consistently looked outward to found media as sources of inspiration. Empirical and experimental investigations into new media led these artists to work directly with a medium's raw materials, often in subversive or unorthodox patterns, to create new forms of art from the technological fabric of their era. The Last Cloud, a net art performance using McLuhan.js, reveals a dialogue with this artistic history. Its composition is described herein, along with a description of the McLuhan.js toolkit and a survey of the history it inherits.
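    The abstract describes a performer pushing live actions into remote viewers' browsers through client- and server-side tools. A minimal sketch of that relay pattern is shown below, assuming a WebSocket server with separate performer and viewer endpoints; the endpoint paths, message shape, and action names are illustrative assumptions, not the McLuhan.js API.

```typescript
// Hypothetical performer-to-viewer relay in the spirit of the abstract above.
// The /perform and /view endpoints and the NetArtAction shape are assumptions.
import { WebSocketServer, WebSocket } from "ws";

interface NetArtAction {
  kind: "openWindow" | "loadMedia" | "injectStyle"; // illustrative action types
  payload: Record<string, unknown>;
}

const wss = new WebSocketServer({ port: 8080 });
const viewers = new Set<WebSocket>();

wss.on("connection", (socket, req) => {
  if (req.url === "/perform") {
    // The performer's live-coding client sends actions as JSON messages.
    socket.on("message", (data) => {
      const action: NetArtAction = JSON.parse(data.toString());
      // Broadcast each live-coded action to every connected viewer browser.
      for (const viewer of viewers) viewer.send(JSON.stringify(action));
    });
  } else {
    // Everyone else is treated as a viewer whose browser renders the actions.
    viewers.add(socket);
    socket.on("close", () => viewers.delete(socket));
  }
});
```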

    Using Web Archives to Enrich the Live Web Experience Through Storytelling

    Much of our cultural discourse occurs primarily on the Web. Thus, Web preservation is a fundamental precondition for multiple disciplines. Archiving Web pages into themed collections is a method for ensuring these resources are available for posterity. Services such as Archive-It exist to allow institutions to develop, curate, and preserve collections of Web resources. Understanding the contents and boundaries of these archived collections is a challenge for most people, resulting in a paradox: the larger the collection, the harder it is to understand. Meanwhile, as the sheer volume of data on the Web grows, storytelling is becoming a popular technique in social media for selecting Web resources to support a particular narrative or story. In this dissertation, we address the problem of understanding archived collections by proposing the Dark and Stormy Archive (DSA) framework, in which we integrate social media storytelling and Web archives. In the DSA framework, we identify, evaluate, and select candidate Web pages from archived collections that summarize the holdings of these collections, arrange them in chronological order, and then visualize these pages using tools that users are already familiar with, such as Storify. To inform our work on generating stories from archived collections, we start by building a baseline for the structural characteristics of popular (i.e., most-viewed) human-generated stories by investigating stories from Storify. Furthermore, we examined the entire population of Archive-It collections to better understand the characteristics of the collections we intend to summarize. We then filter off-topic pages from the collections using different methods to detect when an archived page in a collection has gone off-topic. We created a gold standard dataset from three Archive-It collections to evaluate the proposed methods at different thresholds. From the gold standard dataset, we identified five behaviors for the TimeMaps (a list of archived copies of a page) based on the page's aboutness. Based on a dynamic slicing algorithm, we divide the collection and cluster the pages in each slice. We then select the best representative page from each cluster based on different quality metrics (e.g., the replay quality and the quality of the generated snippet from the page). At the end, we put the selected pages in chronological order and visualize them using Storify. To evaluate the DSA framework, we obtained a ground truth dataset of hand-crafted stories from Archive-It collections generated by expert archivists. We used Amazon's Mechanical Turk to evaluate the automatically generated stories against the stories that were created by domain experts. The results show that the stories automatically generated by the DSA are indistinguishable from those created by human subject domain experts, while at the same time both kinds of stories (automatic and human) are easily distinguished from randomly generated stories.
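    The pipeline above slices a collection, clusters pages within each slice, and then picks the best-quality page per cluster in chronological order. A minimal sketch of that slice-then-select step is given below; the types, the fixed-width slicing (a stand-in for the dissertation's dynamic slicing algorithm), and the single quality score are placeholder assumptions, not the DSA code.

```typescript
// Placeholder model of an archived page with a pre-computed quality score
// (e.g. combined replay and snippet quality, normalized to [0, 1]).
interface ArchivedPage {
  uri: string;
  capturedAt: Date;
  quality: number;
}

// Divide a collection into time slices. A fixed slice width stands in for the
// dynamic slicing algorithm described in the abstract.
function slice(pages: ArchivedPage[], sliceDays: number): ArchivedPage[][] {
  const sorted = [...pages].sort((a, b) => a.capturedAt.getTime() - b.capturedAt.getTime());
  const slices: ArchivedPage[][] = [];
  let start = sorted[0]?.capturedAt.getTime() ?? 0;
  let current: ArchivedPage[] = [];
  for (const page of sorted) {
    if (page.capturedAt.getTime() - start > sliceDays * 86_400_000) {
      if (current.length) slices.push(current);
      current = [];
      start = page.capturedAt.getTime();
    }
    current.push(page);
  }
  if (current.length) slices.push(current);
  return slices;
}

// Given the clusters found in each slice, pick the highest-quality page from each
// cluster and return the picks in chronological order, ready for visualization.
function buildStory(clustersPerSlice: ArchivedPage[][][]): ArchivedPage[] {
  const picks = clustersPerSlice
    .flat()
    .filter((cluster) => cluster.length > 0)
    .map((cluster) => cluster.reduce((best, p) => (p.quality > best.quality ? p : best)));
  return picks.sort((a, b) => a.capturedAt.getTime() - b.capturedAt.getTime());
}
```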

    Emerging technologies for learning report (volume 3)


    New Frontiers in Universal Multimedia Access

    Universal Multimedia Access (UMA) refers to the ability of any user to access the desired multimedia content(s) over any type of network, with any device, from anywhere and at any time. UMA is a key framework for metadata-driven multimedia content delivery services. This report consists of three parts. The first part analyzes the state-of-the-art technologies in UMA, identifies the key issues, and outlines the new challenges that remain to be resolved. The key issues in UMA include the adaptation of multimedia contents to bridge the gap between content creation and consumption, standardized metadata descriptions that facilitate the adaptation (e.g., MPEG-7, MPEG-21 DIA, CC/PP), and UMA system design with respect to the target application. The second part introduces our approach to these challenges: how to jointly adapt multimedia contents spanning different modalities and balance their presentation in an optimal way. A scheme for adapting audiovisual contents and their metadata (text) to any screen is proposed to provide the best experience in browsing the desired content. The adaptation process is modeled as an optimization problem over the total value of the content provided to the user. The total content value is optimized by jointly controlling the balance between video and metadata presentation, the transformation of the video content, and the amount of metadata to be presented. Experimental results show that the proposed adaptation scheme enables users to browse audiovisual contents with their metadata optimized to the screen size of their devices. The last part reports some potential UMA applications, focusing in particular on a universal access application for TV news archives as an example.
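    The adaptation step above is framed as maximizing total content value by jointly choosing the video transformation and the amount of metadata shown on a given screen. A brute-force sketch of that framing is given below, assuming a toy value model with diminishing returns and a single screen-area budget; the value and cost functions and the option grid are assumptions for illustration, not the paper's scheme.

```typescript
// Toy adaptation option: how much to shrink the video and how many metadata items to show.
interface Adaptation {
  videoScale: number;     // fraction of the original video resolution, 0.1 .. 1.0
  metadataItems: number;  // number of text metadata items presented
}

// Assumed value model: diminishing returns for both video fidelity and metadata amount.
function contentValue(a: Adaptation): number {
  return Math.sqrt(a.videoScale) + 0.3 * Math.log1p(a.metadataItems);
}

// Assumed cost model: screen area consumed, in arbitrary units.
function screenCost(a: Adaptation): number {
  return 100 * a.videoScale + 8 * a.metadataItems;
}

// Exhaustively search the small option grid for the highest-value adaptation that fits.
function bestAdaptation(screenBudget: number): Adaptation {
  let best: Adaptation = { videoScale: 0.1, metadataItems: 0 };
  for (let scale = 0.1; scale <= 1.0; scale += 0.1) {
    for (let items = 0; items <= 20; items++) {
      const candidate = { videoScale: scale, metadataItems: items };
      if (screenCost(candidate) <= screenBudget && contentValue(candidate) > contentValue(best)) {
        best = candidate;
      }
    }
  }
  return best;
}

// e.g. bestAdaptation(60) trades video resolution against metadata for a small screen,
// while bestAdaptation(150) can afford full resolution plus several metadata items.
```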

    Investigation Report on Universal Multimedia Access

    Universal Multimedia Access (UMA) refers to the ability of any user to access the desired multimedia content(s) over any type of network, with any device, from anywhere and at any time. UMA is a key framework for metadata-driven multimedia content delivery services. This investigation report analyzes the state-of-the-art technologies in UMA and tries to identify the key issues of UMA. The state of the art in multimedia content adaptation, an overview of the standards that support the UMA framework, potential privacy problems in UMA systems, and some new UMA applications are presented in this report. The report also describes the challenges that remain to be resolved in UMA, in order to make clear the potential key problems and determine which ones to solve.

    Empowering cultural heritage professionals with tools for authoring and deploying personalised visitor experiences

    This paper presents an authoring environment which supports cultural heritage professionals in the process of creating and deploying a wide range of different personalised interactive experiences that combine the physical (objects, collection and spaces) and the digital (multimedia content). It is based on a novel, flexible formalism that represents the content and the context as independent from one another and allows recombining them in multiple ways, thus generating many different interactions from the same elements. The authoring environment was developed in a co-design process with heritage stakeholders and addresses the composition of the content, the definition of the personalisation, and the deployment on a physical configuration of bespoke devices. To simplify the editing while maintaining a powerful representation, the complex creation process is deconstructed into a limited number of elements and phases, including aspects to control personalisation both in content and in interaction. The user interface also includes examples of installations for inspiration and as a means for learning what is possible and how to do it. Throughout the paper, installations in public exhibitions are used to illustrate our points and what our authoring environment can produce. The expressiveness of the formalism and the variety of interactive experiences that could be created were assessed via a range of laboratory tests, while a user-centred evaluation with over 40 cultural heritage professionals assessed whether they felt confident in directly controlling personalisation.
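    The formalism described above keeps content and context independent and recombines them through rules. A minimal sketch of that separation is given below, assuming simple content items, a visitor context, and condition/action rules; the field names and selection logic are illustrative assumptions, not the paper's formalism.

```typescript
// Content is authored once, independently of where and for whom it is delivered.
interface ContentItem {
  id: string;
  media: { kind: "audio" | "video" | "image" | "text"; url: string }[];
}

// Context captures the physical and personal situation at delivery time.
interface Context {
  nearObject: string;        // physical object or space the visitor is near
  visitorProfile: string;    // e.g. "family", "expert", "school group"
  visited: Set<string>;      // content already seen, used for personalisation
}

// A rule recombines the two: a contextual condition paired with a content id.
interface Rule {
  when: (ctx: Context) => boolean;
  play: string;
}

// The same content items under different rule sets yield different experiences.
function select(rules: Rule[], items: Map<string, ContentItem>, ctx: Context): ContentItem[] {
  return rules
    .filter((r) => r.when(ctx) && !ctx.visited.has(r.play))
    .flatMap((r) => items.get(r.play) ?? []);
}
```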

    MovieRemix: Having Fun Playing with Videos

    The process of producing new creative videos by editing, combining, and organizing pre-existing material (e.g., video shots) is a popular phenomenon in the current web scenario. Known as remix or video remix, the produced video may have new and different meanings with respect to the source material. Unfortunately, when managing audiovisual objects, the technological aspect can be a burden for many creative users. Motivated by the large success of the gaming market, we propose a novel game and an architecture to make the remix process a pleasant and stimulating gaming experience. MovieRemix allows people to act like a movie director, but instead of dealing with cast and cameras, the player has to create a remixed video starting from a given screenplay and from video shots retrieved from the provided catalog. MovieRemix is not a simple video editing tool, nor is it a simple game: it is a challenging environment that stimulates creativity. To entice play, players can access different levels of screenplay (original, outline, derived) and can also challenge other players. Computational and storage issues are kept on the server side, whereas the client device just needs to be capable of playing streaming videos.
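    A remix in this setting pairs screenplay scenes with catalog shots, while the client only receives streamable media. A minimal sketch of such a data model is given below; the field names and playlist step are assumptions for illustration, not the MovieRemix implementation.

```typescript
// A screenplay scene the player must fill.
interface Scene {
  index: number;
  description: string;        // what the screenplay asks for in this scene
}

// A shot from the server-side catalog, delivered as a stream.
interface Shot {
  id: string;
  streamUrl: string;
  durationSeconds: number;
}

// A player's remix: a screenplay level plus the shot chosen for each scene.
interface Remix {
  screenplayLevel: "original" | "outline" | "derived";
  assignments: Map<number, Shot>;   // scene index -> chosen shot
}

// The client only needs an ordered playlist of stream URLs; composition and storage
// stay on the server, matching the thin-client architecture described above.
function toPlaylist(scenes: Scene[], remix: Remix): string[] {
  return scenes
    .map((scene) => remix.assignments.get(scene.index)?.streamUrl)
    .filter((url): url is string => url !== undefined);
}
```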

    Scripts in a Frame: A Framework for Archiving Deferred Representations

    Web archives provide a view of the Web as seen by Web crawlers. Because of rapid advancements and adoption of client-side technologies like JavaScript and Ajax, coupled with the inability of crawlers to execute these technologies effectively, Web resources become harder to archive as they become more interactive. At Web scale, we cannot capture client-side representations using the current state-of-the-art toolsets because of the migration from Web pages to Web applications. Web applications increasingly rely on JavaScript and other client-side programming languages to load embedded resources and change client-side state. We demonstrate that Web crawlers and other automatic archival tools are unable to archive the resulting JavaScript-dependent representations (what we term deferred representations), resulting in missing or incorrect content in the archives and the general inability to replay the archived resource as it existed at the time of capture. Building on prior studies on Web archiving, client-side monitoring of events and embedded resources, and studies of the Web, we establish an understanding of the trends contributing to the increasing unarchivability of deferred representations. We show that JavaScript leads to lower-quality mementos (archived Web resources) due to the archival difficulties it introduces. We measure the historical impact of JavaScript on mementos, demonstrating that the increased adoption of JavaScript and Ajax correlates with the increase in missing embedded resources. To measure memento and archive quality, we propose and evaluate a metric to assess memento quality closer to Web users' perception. We propose a two-tiered crawling approach that enables crawlers to capture embedded resources dependent upon JavaScript. Measuring the performance benefits between crawl approaches, we propose a classification method that mitigates the performance impacts of the two-tiered crawling approach, and we measure the frontier size improvements observed with the two-tiered approach. Using the two-tiered crawling approach, we measure the number of client-side states associated with each URI-R and propose a mechanism for storing the mementos of deferred representations. In short, this dissertation details a body of work that explores the following: why JavaScript and deferred representations are difficult to archive (establishing the term deferred representation to describe JavaScript-dependent representations); the extent to which JavaScript impacts archivability along with its impact on current archival tools; a metric for measuring the quality of mementos, which we use to describe the impact of JavaScript on archival quality; the performance trade-offs between traditional archival tools and technologies that better archive JavaScript; and a two-tiered crawling approach for discovering and archiving currently unarchivable descendants (representations generated by client-side user events) of deferred representations to mitigate the impact of JavaScript on our archives. In summary, what we archive is increasingly different from what we as interactive users experience. Using the approaches detailed in this dissertation, archives can create mementos closer to what users experience rather than archiving the crawlers' experiences on the Web.
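    The "second tier" of the crawling approach described above amounts to executing a page's JavaScript and recording the embedded resources it loads, which a classic HTML-only crawler misses. A minimal sketch of that idea is shown below using Puppeteer; Puppeteer is an assumed stand-in for illustration, not the tooling used in the dissertation, and the diffing step is only suggested in a comment.

```typescript
// Run the page in a headless browser so client-side code executes, then record every
// resource it requests (Ajax calls, lazily loaded images, script-injected embeds, ...).
import puppeteer from "puppeteer";

async function deferredResources(uri: string): Promise<string[]> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const requested: string[] = [];

  // Capture each outgoing request the rendered page makes.
  page.on("request", (request) => requested.push(request.url()));

  // Wait for the network to go idle so JavaScript-deferred resources are included.
  await page.goto(uri, { waitUntil: "networkidle0" });

  await browser.close();
  return requested;
}

// A crawler could diff this list against the URIs discovered from static HTML alone
// and add the missing embedded resources to its frontier before archiving.
```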