
    ‘Searching for District 9 in the Archives: archaeology of a transmedia Campaign’

    Film marketing materials have conventionally been regarded as both ephemera and ephemeral, but in a digital environment they have become increasingly significant, colonising the spaces before, between and beyond the film itself. Indeed, the distinctions between promotion and content have become so blurred that, arguably, marketing campaigns have become as entertaining as the films they promote, raising questions about the cultural value of such ephemera. This project set out to examine what transmedia contributes to the narrative ecology of a film, taking as its starting point the award-winning campaign designed by the marketing agency Trigger for Neil Blomkamp's District 9 (2009). But the research did not get off to an auspicious start because, shortly after the project began, the site disappeared. This paper will give an account of a media archaeological excavation undertaken to find District 9's web campaign. The archival sites encountered during the search included institutions set up with the aim of preservation, such as the Internet Archive; commercial archives, such as the Webby Awards; as well as the 'new' generation of web 2.0 archives: a personal blog, YouTube and social media sites. In the light of this, the paper will then reflect on what the German media theorist Wolfgang Ernst referred to as the 'machine perspective' and on how the mechanisms of digital archives condition the way we know things about the recent digital past. It will conclude by suggesting that the archival encounters in this research project revealed as much about the nature of digital archives as about the film's transmediation.
    Non peer reviewed.

    Spoken content retrieval: A survey of techniques and technologies

    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted its focus to more informal spoken content produced spontaneously, outside the studio and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
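
    As a rough illustration of the ASR-plus-IR combination the survey describes (not a method taken from it), the sketch below indexes a few invented transcripts, standing in for speech recognizer output, with TF-IDF weights and ranks them against a text query by cosine similarity. The transcript contents, document names, and query are placeholders; real SCR systems must additionally cope with recognition errors and richer recognizer output.

```python
import math
from collections import Counter

# Hypothetical ASR output, one transcript per recording (invented stand-ins).
transcripts = {
    "meeting_01": "the budget review is postponed until next quarter",
    "lecture_02": "information retrieval ranks documents against a user query",
    "podcast_03": "speech recognition errors make spoken document retrieval harder",
}

tokenised = {doc: text.split() for doc, text in transcripts.items()}
n_docs = len(tokenised)
doc_freq = Counter(term for toks in tokenised.values() for term in set(toks))

def weight(term, count):
    """Smoothed TF-IDF weight for a term occurring `count` times in a document."""
    return (1 + math.log(count)) * math.log((n_docs + 1) / (doc_freq.get(term, 0) + 1))

def vectorise(tokens):
    """Sparse TF-IDF vector (dict of term -> weight) for a token list."""
    return {t: weight(t, c) for t, c in Counter(tokens).items()}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = math.sqrt(sum(w * w for w in a.values())) * math.sqrt(sum(w * w for w in b.values()))
    return dot / norm if norm else 0.0

doc_vectors = {doc: vectorise(toks) for doc, toks in tokenised.items()}
query_vector = vectorise("spoken retrieval".split())

# Rank the "spoken documents" against the query, best match first.
for doc, score in sorted(((d, cosine(query_vector, v)) for d, v in doc_vectors.items()),
                         key=lambda x: -x[1]):
    print(f"{doc}\t{score:.3f}")
```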

    Content-based video retrieval: three example systems from TRECVid

    The growth in video available online is generally accompanied by user-assigned tags or content descriptions, which are the mechanism by which we then access such video. However, user-assigned tags have limitations for retrieval, and often we want access in which the content of the video itself is matched directly against a user's query rather than against some manually assigned surrogate tag. Content-based video retrieval techniques are not yet scalable enough to allow interactive searching at internet scale, but the techniques are proving robust and effective for smaller collections. In this paper we present three exemplar systems which demonstrate the state of the art in interactive, content-based retrieval of video shots; they are three of the more than 20 systems developed for the 2007 iteration of the annual TRECVid benchmarking activity. The contribution of our paper is to show that retrieving from video using content-based methods is now viable, that it works, and that many systems now do this, such as the three outlined here. These systems, and others, can provide effective search over hundreds of hours of video content and are samples of the kind of content-based search functionality we can expect to see on larger video archives once issues of scale are addressed.
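
    As a toy illustration of matching video content rather than tags, the sketch below ranks shot keyframes against a query image by colour-histogram intersection. It is not drawn from any of the three TRECVid systems described in the paper, and the keyframes here are random arrays standing in for frames already extracted from real shots.

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Normalised joint RGB histogram of an HxWx3 uint8 image."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return float(np.minimum(h1, h2).sum())

# Placeholder keyframes and query image; a real system would decode these from video.
rng = np.random.default_rng(0)
shot_keyframes = {f"shot_{i:03d}": rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
                  for i in range(5)}
query_image = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)

# Rank shots by how well their keyframe colours match the query image.
query_hist = colour_histogram(query_image)
ranked = sorted(((shot, intersection(query_hist, colour_histogram(frame)))
                 for shot, frame in shot_keyframes.items()), key=lambda x: -x[1])
for shot, score in ranked:
    print(f"{shot}\t{score:.3f}")
```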

    VideoAnalysis4ALL: An On-line Tool for the Automatic Fragmentation and Concept-based Annotation, and the Interactive Exploration of Videos.

    This paper presents the VideoAnalysis4ALL tool, which supports the automatic fragmentation and concept-based annotation of videos, and the exploration of the annotated video fragments through an interactive user interface. The developed web application decomposes the video into two granularities, namely shots and scenes, and annotates each fragment by evaluating the presence of several hundred high-level visual concepts in the keyframes extracted from these fragments. Through this analysis the tool enables the identification and labeling of semantically coherent video fragments, while its user interfaces allow the discovery of these fragments with the help of human-interpretable concepts. The integrated state-of-the-art video analysis technologies perform very well and, by exploiting the processing capabilities of multi-threaded, multi-core architectures, reduce the time required for analysis to approximately one third of the video's duration, thus making the analysis three times faster than real-time processing.
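
    The sketch below illustrates the two stages in miniature, under the simplifying assumptions of histogram-difference shot detection and a placeholder concept detector that returns random scores; it is not the tool's actual analysis pipeline, and the frames are random arrays standing in for decoded video.

```python
import numpy as np

def frame_histogram(frame, bins=16):
    """Normalised intensity histogram of a frame."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def detect_shot_boundaries(frames, threshold=0.4):
    """Indices where consecutive frame histograms differ by more than `threshold`."""
    hists = [frame_histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if 0.5 * np.abs(hists[i] - hists[i - 1]).sum() > threshold]

def concept_scores(frame, concepts=("indoor", "person", "vehicle")):
    """Placeholder concept detector: deterministic random scores, not a trained model."""
    rng = np.random.default_rng(int(frame.sum()) % 2**32)
    return {c: float(s) for c, s in zip(concepts, rng.random(len(concepts)).round(2))}

# Placeholder "video": a short list of random frames.
rng = np.random.default_rng(1)
frames = [rng.integers(0, 256, (90, 120, 3), dtype=np.uint8) for _ in range(20)]

# Fragment the video, then annotate each fragment via its middle frame.
boundaries = [0] + detect_shot_boundaries(frames) + [len(frames)]
for start, end in zip(boundaries[:-1], boundaries[1:]):
    keyframe = frames[(start + end) // 2]
    print(f"shot {start}-{end - 1}: {concept_scores(keyframe)}")
```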

    The Politicization of Art on the Internet: From net.art to post-internet art

    This study presents a brief overview of the sociocultural manifestations that emerged after the birth of the Web, focusing on the development of Internet Art and the countercultural movements that arose in Europe and North America over the last 30 years. Structured as a case study, it first explains three fundamental subjects: the history of the Internet, the development of the hacker concept, and the transformations of web-based art from net.art to Post-Internet Art. Described chronologically, these fields then serve as guides for a final comparative account of chained events intended to sustain the hypothesis of a gradual dissolution of the early utopian cyberspace into the dystopian corporate scene of today's Internet.

    Scanned Document Compression Technique

    Nowadays, many different kinds of media files are used to communicate information: text documents, images, audio, video and so on. All of these files require a large amount of space when they are to be transferred. A typical five-page text document occupies about 75 KB, whereas a single image can take up around 1.4 MB. In this paper the main focus is on two compression techniques, the DjVu compression method and a block-based hybrid video codec, with the primary emphasis on DjVu. DjVu is an image compression technique specifically geared towards the compression of scanned documents in color at high resolution. Typical magazine pages in color scanned at 300 dpi are compressed to between 40 and 80 KB, or 5 to 10 times smaller than with JPEG for a similar level of subjective quality. The foreground layer, which contains the text and drawings and requires high spatial resolution, is separated from the background layer, which contains pictures and backgrounds and requires less resolution. The foreground is compressed with a bi-tonal image compression technique that exploits character shape similarities. The background is compressed with a progressive, wavelet-based compression method. A real-time, memory-efficient version of the decoder is available as a plug-in for popular web browsers. We also show that the proposed segmentation algorithm can improve the quality of decoded documents while simultaneously lowering the bit rate.
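
    The layered idea can be sketched as below: a page is split into a full-resolution bi-tonal foreground mask and a downsampled background, and each layer is encoded with a method suited to it. This is only an illustration of the mixed-raster principle, not the DjVu codec; PNG and JPEG stand in for the bi-tonal and wavelet coders described above, and the blank test page, threshold and scale factor are arbitrary.

```python
import io
from PIL import Image

def encode_layers(page: Image.Image, threshold=128, background_scale=4):
    """Split a page into a bi-tonal foreground mask and a downsampled background,
    and return the compressed bytes of each layer."""
    grey = page.convert("L")
    # Foreground: 1-bit mask at full resolution, losslessly compressed.
    mask = grey.point(lambda p: 0 if p < threshold else 255).convert("1")
    fg_buf = io.BytesIO()
    mask.save(fg_buf, format="PNG")
    # Background: colour image downsampled and compressed lossily.
    w, h = page.size
    background = page.resize((w // background_scale, h // background_scale))
    bg_buf = io.BytesIO()
    background.convert("RGB").save(bg_buf, format="JPEG", quality=50)
    return fg_buf.getvalue(), bg_buf.getvalue()

if __name__ == "__main__":
    page = Image.new("RGB", (2480, 3508), "white")   # blank stand-in for a 300 dpi scan
    foreground, background = encode_layers(page)
    print(f"foreground layer: {len(foreground)} bytes")
    print(f"background layer: {len(background)} bytes")
```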

    Opal: In Vivo Based Preservation Framework for Locating Lost Web Pages

    We present Opal, a framework for interactively locating missing web pages (HTTP status code 404). Opal is an example of in vivo preservation: harnessing the collective behavior of web archives, commercial search engines, and research projects for the purpose of preservation. Opal servers learn from their experiences and are able to share their knowledge with other Opal servers using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). Using cached copies that can be found on the web, Opal creates lexical signatures which are then used to search for similar versions of the web page. Using the OAI-PMH to facilitate inter-Opal learning extends the utilization of OAI-PMH in a novel manner. We present the architecture of the Opal framework, discuss a reference implementation of the framework, and present a quantitative analysis indicating that Opal could be effectively deployed.
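
    A much simplified sketch of the lexical-signature step is given below: it extracts a handful of distinctive terms from the text of a cached copy and joins them into a query string that could be submitted to a search engine. Opal itself builds signatures from cached copies found on the web; the stop-word and frequency heuristic and the sample text here are invented stand-ins, not the framework's actual implementation.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "on",
              "is", "are", "was", "were", "with", "by", "that", "this"}

def lexical_signature(cached_text: str, size: int = 5) -> list[str]:
    """Most frequent non-stop-word terms of a cached copy, used as a query seed."""
    terms = re.findall(r"[a-z]+", cached_text.lower())
    counts = Counter(t for t in terms if t not in STOP_WORDS and len(t) > 2)
    return [term for term, _ in counts.most_common(size)]

# Invented text standing in for the cached copy of a page that now returns 404.
cached_text = """Project page describing the restoration of a community radio
archive, with schedules, interview transcripts and digitised recordings."""

signature = lexical_signature(cached_text)
print("query:", " ".join(signature))
```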

    COSPO/CENDI Industry Day Conference

    The conference's objective was to provide a forum where government information managers and industry information technology experts could have an open exchange, discuss their respective needs, and compare them to available, or soon-to-be-available, solutions. Technical summaries and points of contact are provided for the following sessions: secure products, protocols, and encryption; information providers; electronic document management and publishing; information indexing, discovery, and retrieval (IIDR); automated language translators; IIDR - natural language capabilities; IIDR - advanced technologies; IIDR - distributed, heterogeneous, and large database support; and communications - speed, bandwidth, and wireless.
