
    TRECVID 2008 - goals, tasks, data, evaluation mechanisms and metrics

    The TREC Video Retrieval Evaluation (TRECVID) 2008 is a TREC-style video analysis and retrieval evaluation whose goal remains to promote progress in content-based exploitation of digital video via open, metrics-based evaluation. Over the last 7 years this effort has yielded a better understanding of how systems can effectively accomplish such processing and how one can reliably benchmark their performance. In 2008, 77 teams (see Table 1) from various research organizations (24 from Asia, 39 from Europe, 13 from North America, and 1 from Australia) participated in one or more of five tasks: high-level feature extraction, search (fully automatic, manually assisted, or interactive), pre-production video (rushes) summarization, copy detection, or surveillance event detection. The copy detection and surveillance event detection tasks were run for the first time in TRECVID. This paper presents an overview of TRECVID in 2008.

    The COST292 experimental framework for TRECVID 2007

    In this paper, we give an overview of the four tasks submitted to TRECVID 2007 by COST292. In the shot boundary (SB) detection task, four SB detectors were developed and their results merged using two merging algorithms. The framework developed for the high-level feature extraction task comprises four systems. The first system transforms a set of low-level descriptors into a semantic space using Latent Semantic Analysis and utilises neural networks for feature detection. The second system uses a Bayesian classifier trained with a “bag of subregions”. The third system uses a multi-modal classifier based on SVMs and several descriptors. The fourth system uses two image classifiers based on ant colony optimisation and particle swarm optimisation, respectively. The system submitted to the search task is an interactive retrieval application that combines retrieval functionalities in various modalities with a user interface supporting both automatic and interactive search over all submitted queries. Finally, the rushes task submission is based on a video summarisation and browsing system comprising two different interest curve algorithms and three features.
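    The Latent Semantic Analysis step mentioned in the abstract — projecting low-level descriptors into a lower-dimensional semantic space — can be sketched as follows. This is a minimal, generic LSA-via-truncated-SVD illustration, not COST292's actual pipeline; the descriptor dimensionality, number of shots, and data below are hypothetical.

    ```python
    import numpy as np

    def lsa_project(descriptors, k=10):
        """Project low-level descriptor vectors (rows = shots,
        cols = descriptor dimensions) into a k-dimensional latent
        semantic space via truncated SVD, the core operation of LSA."""
        # Centre the descriptor matrix before decomposition
        mean = descriptors.mean(axis=0)
        X = descriptors - mean
        # Keep the top-k right singular vectors as the projection basis
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        basis = Vt[:k].T          # shape (dims, k)
        return X @ basis, basis, mean

    # Hypothetical data: 200 shots, each with a 64-dimensional descriptor
    rng = np.random.default_rng(0)
    D = rng.normal(size=(200, 64))
    Z, basis, mean = lsa_project(D, k=10)
    print(Z.shape)  # (200, 10): the reduced representation a classifier would consume
    ```

    In a setup like the one the abstract describes, the resulting low-dimensional vectors would then be fed to a neural-network classifier for per-feature detection.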

    So what can we actually do with content-based video retrieval?

    In this talk I will give a roller-coaster survey of the state of the art in automatic video analysis, indexing, summarisation, search and browsing, as demonstrated in the annual TRECVid benchmarking evaluation campaign. I will concentrate on content-based techniques for video management, which complement the dominant paradigm of metadata- or tag-based video management, and I will use example techniques to illustrate these.

    Video-4-Video: using video for searching, classifying and summarising video

    YouTube has meant that we are now becoming accustomed to searching for video clips, and finding them, for both work and leisure pursuits. But YouTube, like the Internet Archive, OpenVideo and almost every other video library, doesn't use video to find video; it uses metadata, usually based on user-generated content (UGC). But what if we don't know what we're looking for and the metadata doesn't help, or we have poor metadata or no UGC — can we use the video itself to find video? Can we automatically derive semantic concepts directly from video which we can then use for retrieval or summarisation? Many dozens of research groups throughout the world work on the problems associated with content-based video search, content-based detection of semantic concepts, shot boundary detection, content-based summarisation and content-based event detection. In this presentation we give a summary of the achievements of almost a decade of research by the TRECVid community, including a report on the performance of groups in different TRECVid tasks. We present the modus operandi of the annual TRECVid benchmarking, the problems associated with running an annual evaluation for nearly 100 research groups every year, and an overview of the most successful approaches to each task.
