29 research outputs found

    Video shot boundary detection: seven years of TRECVid activity

    Shot boundary detection (SBD) is the process of automatically detecting the boundaries between shots in video. It is a problem which has attracted much attention since video became available in digital form, as it is an essential pre-processing step to almost all video analysis, indexing, summarisation, search, and other content-based operations. Automatic SBD was one of the tracks of activity within the annual TRECVid benchmarking exercise, each year from 2001 to 2007 inclusive. Over those seven years we have seen 57 different research groups from across the world work to determine the best approaches to SBD while using a common dataset and common scoring metrics. In this paper we present an overview of the TRECVid shot boundary detection task, a high-level overview of the most significant of the approaches taken, and a comparison of performances, focussing on one year (2005) as an example.
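
    The abstract above defines the SBD task but does not describe any particular detector, so the following is only a minimal, illustrative sketch of the simplest family of approaches (colour-histogram differencing with a fixed threshold). The function name, bin count and threshold value are assumptions for the example, not any of the benchmarked TRECVid submissions.

```python
# Illustrative sketch only: a naive colour-histogram-difference cut detector.
import numpy as np

def detect_cuts(frames, bins=16, threshold=0.4):
    """Return frame indices where a hard cut is likely.

    frames: iterable of H x W x 3 uint8 arrays (decoded video frames).
    """
    cuts = []
    prev_hist = None
    for i, frame in enumerate(frames):
        # Per-channel colour histogram, concatenated and normalised to sum to 1.
        hist = np.concatenate([
            np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)
        ]).astype(float)
        hist /= hist.sum()
        if prev_hist is not None:
            # L1 distance between consecutive frame histograms.
            dist = np.abs(hist - prev_hist).sum()
            if dist > threshold:
                cuts.append(i)  # boundary between frame i-1 and frame i
        prev_hist = hist
    return cuts
```

    Real TRECVid systems go well beyond a single global threshold, in particular because gradual transitions (dissolves, fades, wipes) were evaluated alongside hard cuts.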

    So what can we actually do with content-based video retrieval?

    In this talk I will give a roller-coaster survey of the state of the art in automatic video analysis, indexing, summarisation, search and browsing as demonstrated in the annual TRECVid benchmarking evaluation campaign. I will concentrate on content-based techniques for video management, which form a complement to the dominant paradigm of metadata or tag-based video management, and I will use example techniques to illustrate these.

    TRECVID 2008 - goals, tasks, data, evaluation mechanisms and metrics

    The TREC Video Retrieval Evaluation (TRECVID) 2008 is a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in content-based exploitation of digital video via open, metrics-based evaluation. Over the last 7 years this effort has yielded a better understanding of how systems can effectively accomplish such processing and how one can reliably benchmark their performance. In 2008, 77 teams (see Table 1) from various research organizations --- 24 from Asia, 39 from Europe, 13 from North America, and 1 from Australia --- participated in one or more of five tasks: high-level feature extraction, search (fully automatic, manually assisted, or interactive), pre-production video (rushes) summarization, copy detection, or surveillance event detection. The copy detection and surveillance event detection tasks were run for the first time in TRECVID in 2008. This paper presents an overview of TRECVid in 2008.

    TRECVID 2007 - Overview


    Video Stream Retrieval of Unseen Queries using Semantic Memory

    Retrieval of live, user-broadcast video streams is an under-addressed and increasingly relevant challenge. The on-line nature of the problem requires temporal evaluation, and the unforeseeable scope of potential queries motivates an approach which can accommodate arbitrary search queries. To account for the breadth of possible queries, we adopt a no-example approach to query retrieval, which uses a query's semantic relatedness to pre-trained concept classifiers. To adapt to shifting video content, we propose memory pooling and memory welling methods that favor recent information over long past content. We identify two stream retrieval tasks, instantaneous retrieval at any particular time and continuous retrieval over a prolonged duration, and propose means for evaluating them. Three large-scale video datasets are adapted to the challenge of stream retrieval. We report results for our search methods on the new stream retrieval tasks, as well as demonstrate their efficacy in a traditional, non-streaming video task. (Presented at BMVC 2016, the British Machine Vision Conference.)
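
    As a rough illustration of the idea described above (not the authors' implementation), the sketch below pools pre-trained concept-classifier scores with an exponential decay so that recent fragments dominate, and scores a zero-example query through its semantic relatedness to those concepts. The decay value, and the assumption that relatedness is supplied as a precomputed weight vector, are assumptions made for the example.

```python
# Interpretive sketch of decayed ("memory") pooling for zero-example
# stream retrieval; inputs and decay constant are assumed for illustration.
import numpy as np

def stream_query_scores(concept_score_stream, query_weights, decay=0.9):
    """Yield a query relevance score after each incoming video fragment.

    concept_score_stream: iterable of vectors, one per fragment, holding the
        outputs of pre-trained concept classifiers.
    query_weights: semantic relatedness between the textual query and each
        concept (e.g. word-embedding similarity), same length as each vector.
    decay: how quickly old content is forgotten (favours recent fragments).
    """
    memory = None
    for scores in concept_score_stream:
        scores = np.asarray(scores, dtype=float)
        # Decayed running aggregate that favours recently observed content.
        memory = scores if memory is None else decay * memory + (1 - decay) * scores
        # Instantaneous retrieval score: relatedness-weighted concept evidence.
        yield float(np.dot(query_weights, memory))
```

    Scoring after every fragment corresponds to the instantaneous retrieval setting; accumulating or ranking these scores over a longer window corresponds to the continuous retrieval setting.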

    TRECVID 2014 -- An Overview of the Goals, Tasks, Data, Evaluation Mechanisms and Metrics

    The TREC Video Retrieval Evaluation (TRECVID) 2014 was a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in content-based exploitation of digital video via open, metrics-based evaluation. Over the last dozen years this effort has yielded a better understanding of how systems can effectively accomplish such processing and how one can reliably benchmark their performance. TRECVID is funded by NIST with support from other US government agencies. Many organizations and individuals worldwide contribute significant time and effort.

    Video-4-Video: using video for searching, classifying and summarising video

    YouTube has meant that we are now becoming accustomed to searching for video clips, and finding them, for both work and leisure pursuits. But YouTube, like the Internet Archive, OpenVideo and almost every other video library, doesn't use video to find video; it uses metadata, usually based on user-generated content (UGC). But what if we don't know what we're looking for and the metadata doesn't help, or we have poor metadata or no UGC: can we use the video to find video? Can we automatically derive semantic concepts directly from video which we can use for retrieval or summarisation? Many dozens of research groups throughout the world work on the problems associated with content-based video search, content-based detection of semantic concepts, shot boundary detection, content-based summarisation and content-based event detection. In this presentation we give a summary of the achievements of almost a decade of research by the TRECVid community, including a report on the performance of groups in different TRECVid tasks. We present the modus operandi of the annual TRECVid benchmarking, the problems associated with running an annual evaluation for nearly 100 research groups every year, and an overview of the most successful approaches to each task.