    Balancing simplicity and functionality in designing user-interface for an interactive TV

    Recent computer vision and content-based multimedia techniques such as scene segmentation, face detection, searching through video clips, and video summarisation are potentially useful tools for enhancing the usefulness of an interactive TV (iTV). However, the technical nature and the relative immaturity of these tools mean it is difficult to present the new functionalities afforded by these techniques in an easy-to-use manner on a TV interface, where simplicity is critical and viewers are not necessarily proficient in advanced or highly sophisticated interaction using a remote control. By introducing multiple layers of interaction sophistication and unobtrusive semi-transparent panels that can be invoked immediately, without a menu hierarchy or complex sequence of actions, we developed an iTV application featuring powerful content retrieval techniques while providing a streamlined and simple interface that gracefully leverages them. An initial version of the interface is ready for demonstration.

    Symbiosis between the TRECVid benchmark and video libraries at the Netherlands Institute for Sound and Vision

    Audiovisual archives are investing in large-scale digitisation efforts for their analogue holdings and, in parallel, ingesting an ever-increasing amount of born-digital files into their digital storage facilities. Digitisation has opened up new access paradigms and boosted re-use of audiovisual content. Query-log analyses show the shortcomings of manual annotation, so archives are complementing these annotations by developing novel search engines that automatically extract information from both the audio and the visual tracks. Over the past few years, the TRECVid benchmark has developed a novel relationship with the Netherlands Institute for Sound and Vision (NISV) which goes beyond the NISV simply providing data and use cases to TRECVid. Prototype and demonstrator systems developed as part of TRECVid are set to become a key driver in improving the quality of search engines at the NISV and will ultimately help other audiovisual archives to offer more efficient and more fine-grained access to their collections. This paper reports the experiences of the NISV in leveraging the activities of the TRECVid benchmark.

    Balancing the power of multimedia information retrieval and usability in designing interactive TV

    Steady progress in the field of multimedia information retrieval (MMIR) promises a useful set of tools that could provide new usage scenarios and features to enhance the user experience in today's digital media applications. In the interactive TV domain, simplicity of interaction is more crucial than in any other digital media domain and ultimately determines the success or otherwise of any new application. Thus, when integrating emerging tools like MMIR into interactive TV, the increase in interface complexity and sophistication resulting from these features can easily reduce actual usability. In this paper we describe a design strategy we developed as a result of our effort to balance the power of emerging multimedia information retrieval techniques with the simplicity of the interface in interactive TV. By providing multiple levels of interface sophistication, in increasing order, as a viewer repeatedly presses the same button on their remote control, we provide a layered interface that can accommodate viewers requiring varying degrees of power and simplicity. A series of screenshots from the system we have developed and built illustrates how this is achieved.
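The layered-interface idea in the abstract above — repeated presses of one remote-control button cycling through progressively more sophisticated views — can be sketched as a minimal state machine. This is an illustrative reconstruction only: the class and layer names below are hypothetical and do not come from the paper.

```python
# Hypothetical sketch of a layered iTV interface: each press of the same
# remote-control button advances to the next, more sophisticated layer,
# wrapping back to simple playback after the most complex layer.

LAYERS = ["playback", "keyframe strip", "scene list", "content search"]

class LayeredInterface:
    """Cycles through interface layers on repeated presses of one button."""

    def __init__(self, layers):
        self.layers = layers
        self.current = 0  # start at the simplest layer (plain playback)

    def press(self):
        """Advance to the next layer and return its name (wraps around)."""
        self.current = (self.current + 1) % len(self.layers)
        return self.layers[self.current]

ui = LayeredInterface(LAYERS)
```

A casual viewer never presses the button and stays at plain playback; a power user presses repeatedly to reach the retrieval-backed layers, which is how the design accommodates both ends of the simplicity/power spectrum.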

    TRECVID 2007 - Overview


    TRECVID 2008 - goals, tasks, data, evaluation mechanisms and metrics

    The TREC Video Retrieval Evaluation (TRECVID) 2008 is a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in content-based exploitation of digital video via open, metrics-based evaluation. Over the last 7 years this effort has yielded a better understanding of how systems can effectively accomplish such processing and how one can reliably benchmark their performance. In 2008, 77 teams (see Table 1) from various research organizations --- 24 from Asia, 39 from Europe, 13 from North America, and 1 from Australia --- participated in one or more of five tasks: high-level feature extraction, search (fully automatic, manually assisted, or interactive), pre-production video (rushes) summarization, copy detection, or surveillance event detection. The copy detection and surveillance event detection tasks were run for the first time in TRECVID. This paper presents an overview of TRECVid in 2008.

    CHORUS Deliverable 3.3: Vision Document - Intermediate version

    The goal of the CHORUS vision document is to create a high-level vision of audio-visual search engines in order to give guidance to future R&D work in this area (in line with the mandate of CHORUS as a Coordination Action). This current intermediate draft of the CHORUS vision document (D3.3) is based on the previous CHORUS vision documents D3.1 and D3.2, on the results of the six CHORUS Think-Tank meetings held in March, September and November 2007 as well as in April, July and October 2008, and on the feedback from other CHORUS events. The outcome of the six Think-Tank meetings will not just benefit the participants, who are stakeholders and experts from academia and industry: CHORUS, as a Coordination Action of the EC, will feed back the findings (see Summary) to the projects under its purview and, via its website, to the whole community working in the domain of AV content search. A few subsections of this deliverable are to be completed after the eighth (and presumably last) Think-Tank meeting in spring 2009.

    Coherence compilation: applying AIED techniques to the reuse of educational resources

    The HomeWork project is building an exemplar system to provide individualised experiences for individuals and groups of children aged 6-7 years, their parents, teachers and classmates at school. It employs an existing set of broadcast video media and associated resources that tackle both numeracy and literacy at Key Stage 1. The system employs a learner model and a pedagogical model to identify which resource is best used with an individual child, or with a group of children collaboratively, at a particular learning point and in a particular location. The Coherence Compiler is the component of the system designed to impose an overall narrative coherence on the materials that any particular child is exposed to. This paper presents a high-level vision of the design of the Coherence Compiler and sets its design within the overall framework of the HomeWork project and its learner and pedagogical models.

    Spin/3 Magazine: Action Time Vision

    Collaboration with London design group Spin, with contributing essays by Russ Bestley and Malcolm Garrett, on the subject of punk graphic design. Published as a large-format newspaper in a plastic slipcase.

    Implementation and analysis of several keyframe-based browsing interfaces to digital video

    In this paper we present a variety of browsing interfaces for digital video information. The six interfaces are implemented on top of Físchlár, an operational recording, indexing, browsing and playback system for broadcast TV programmes. In developing the six browsing interfaces, we have been informed by the various dimensions which can be used to distinguish one interface from another. These include layeredness (the number of “layers” of abstraction which can be used in browsing a programme), the provision or omission of temporal information (varying from full timestamp information to no time information at all), and the visualisation of spatial vs. temporal aspects of the video. After introducing and defining these dimensions, we locate some common browsing interfaces from the literature in this 3-dimensional “space”, and then locate our own six interfaces in the same space. We then present an outline of the interfaces and include some user feedback.
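The three classification dimensions named in the abstract above (layeredness, temporal information, and spatial vs. temporal visualisation) can be captured as a small record type, with each browsing interface becoming one point in the 3-dimensional space. This is a hedged illustration: the field names and the two example placements are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BrowserClassification:
    """One point in the 3-dimensional browsing-interface design space."""
    layers: int          # layeredness: number of abstraction layers when browsing
    temporal_info: str   # "none", "relative", or "full timestamps"
    visualisation: str   # "spatial", "temporal", or "mixed"

# Illustrative placements only (invented values, not from the paper):
keyframe_slideshow = BrowserClassification(
    layers=1, temporal_info="none", visualisation="temporal")
timeline_browser = BrowserClassification(
    layers=2, temporal_info="full timestamps", visualisation="spatial")
```

Representing each interface this way makes the paper's comparison concrete: two interfaces differ exactly where their three field values differ.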