    Content-based video retrieval: three example systems from TRECVid

    The growth in video material available over the internet is generally accompanied by user-assigned tags or content descriptions, which are the mechanism by which we then access such video. However, user-assigned tags have limitations for retrieval, and often we want access where the content of the video itself is directly matched against a user’s query rather than against some manually assigned surrogate tag. Content-based video retrieval techniques are not yet scalable enough to allow interactive searching at internet scale, but they are proving robust and effective for smaller collections. In this paper we present three exemplar systems which demonstrate the state of the art in interactive, content-based retrieval of video shots; these are three of the more than 20 systems developed for the 2007 iteration of the annual TRECVid benchmarking activity. The contribution of our paper is to show that retrieving from video using content-based methods is now viable, that it works, and that many systems now do this, such as the three outlined herein. These systems, and others, can provide effective search over hundreds of hours of video content, and they are samples of the kind of content-based search functionality we can expect to see on larger video archives once issues of scale are addressed.
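
    The three systems themselves are not detailed in this abstract, but the common core they share is ranking shots by similarity between a query representation and precomputed shot features. The sketch below is a minimal illustration of that idea, not any of the three systems: it assumes each shot is already described by a fixed-length feature vector (e.g. a colour histogram), and the names topk_shots, shot_features and query_vector are invented for the example.

```python
import numpy as np

def topk_shots(query_vector: np.ndarray, shot_features: np.ndarray, k: int = 10):
    """Rank shots by cosine similarity between the query feature vector
    and each shot's feature vector (both L2-normalised)."""
    q = query_vector / np.linalg.norm(query_vector)
    s = shot_features / np.linalg.norm(shot_features, axis=1, keepdims=True)
    scores = s @ q                       # one dot product per shot
    top = np.argsort(-scores)[:k]        # indices of the k most similar shots
    return [(int(i), float(scores[i])) for i in top]

# Illustrative use: 1,000 shots with 64-dimensional features.
rng = np.random.default_rng(0)
shots, query = rng.random((1000, 64)), rng.random(64)
print(topk_shots(query, shots, k=5))
```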

    Query on Demand Video Browsing

    This paper describes a novel method for browsing a large collection of news video by linking various forms of related video fragments together as threads. Each thread contains a sequence of shots with high feature-based similarity. Two interfaces are designed which use threads as the basis for browsing: one interface shows a minimal set of threads, and the other as many as possible. Both interfaces are evaluated in the TRECVID interactive retrieval task, where they ranked among the best interactive retrieval systems currently available. The results indicate that the use of threads in interactive video search is very beneficial. We have found that, in general, the query result and the timeline are the most important threads. However, having several additional threads allows a user to find unique results which cannot easily be found by using query results and time alone.
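
    The abstract does not say how threads are built; purely as an illustration, the sketch below shows one plausible greedy construction of a feature-similarity thread: starting from a seed shot, it repeatedly appends the most similar unused shot until similarity falls below a threshold. The function name, the threshold and the cosine measure are assumptions, not the paper's method.

```python
import numpy as np

def similarity_thread(seed: int, features: np.ndarray,
                      min_sim: float = 0.8, max_len: int = 20) -> list[int]:
    """Greedily chain shots into a thread: from the current shot, append
    the most similar unused shot while similarity stays above min_sim."""
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    thread, used, current = [seed], {seed}, seed
    while len(thread) < max_len:
        sims = norm @ norm[current]      # cosine similarity to every shot
        sims[list(used)] = -np.inf       # never revisit a shot
        nxt = int(np.argmax(sims))
        if sims[nxt] < min_sim:
            break                        # chain ends: similarity too low
        thread.append(nxt)
        used.add(nxt)
        current = nxt
    return thread
```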

    Browsing Video Along Multiple Threads

    Abstract—This paper describes a novel method for browsing a large video collection. It links various forms of related video fragments together as threads. These threads are based on query results, the timeline, and visual and semantic similarity. We design two interfaces which use threads as the basis for browsing: one interface shows a minimal set of threads, and the other as many as fit on the screen. To evaluate both interfaces we perform a regular user study and a study based on user simulation, and we participated in the interactive video retrieval task of the TRECVID benchmark. The results indicate that the use of threads in interactive video retrieval is beneficial. Furthermore, we found that in general the query result and the timeline are the most important threads, but having several additional threads improves performance as it encourages people to explore new dimensions. Index Terms—Conceptual similarity, information visualization, interactive search, multidimensional browsing, semantic threads.
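
    As a hypothetical data-shaping sketch only: the abstract names four sources of threads (query results, the timeline, visual similarity, semantic similarity), so an interface needs, for a focal shot, one ordered shot list per source. All structures below are invented stand-ins for whatever the real system precomputes.

```python
def threads_for_shot(shot_id: int,
                     query_ranking: list[int],
                     timeline: dict[int, list[int]],
                     visual_nn: dict[int, list[int]],
                     semantic_nn: dict[int, list[int]]) -> dict[str, list[int]]:
    """Gather the threads through one focal shot, mirroring the thread
    types named in the abstract."""
    return {
        "query":    query_ranking,        # ranked shots for the current query
        "timeline": timeline[shot_id],    # neighbouring shots in broadcast order
        "visual":   visual_nn[shot_id],   # nearest neighbours in visual features
        "semantic": semantic_nn[shot_id], # nearest neighbours in concept scores
    }

# Illustrative use with toy precomputed structures for shot 7:
threads = threads_for_shot(
    7,
    query_ranking=[7, 12, 3],
    timeline={7: [5, 6, 7, 8, 9]},
    visual_nn={7: [12, 40]},
    semantic_nn={7: [3, 21]},
)
print(threads["timeline"])
```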

    Active Bucket Categorization for High Recall Video Retrieval

    Abstract—Large amounts of digital video are available. High recall retrieval of this material requires going beyond ranked results, the common target in high precision retrieval. To aid high recall retrieval, we propose Active Bucket Categorization, a multi-category interactive learning strategy which extends MediaTable [1], our multimedia categorization tool. MediaTable allows users to place video shots into buckets: user-assigned subsets of the collection. Our Active Bucket Categorization approach augments this by unobtrusively expanding these buckets with related footage from the whole collection. In this paper, we propose an architecture for active bucket based video retrieval, evaluate two different learning strategies, and show its use in video retrieval with an evaluation using three groups of non-expert users. One baseline group uses only the categorization features of MediaTable, such as sorting and filtering on concepts and a fast grid preview, but no online learning mechanisms. One group uses on-demand passive buckets. The last group uses fully automatic active buckets which autonomously add content to buckets. Results indicate a significant increase in the number of relevant items found for the two groups of users using bucket expansions, with the best results for fully automatic bucket expansions, thereby significantly aiding high recall video retrieval. Index Terms—video retrieval, multi-class categorization, interactive video retrieval, active learning, relevance feedback, user evaluation.
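
    The two learning strategies are not spelled out in this abstract. One plausible reading of an active bucket, sketched below under my own assumptions, is a per-bucket classifier trained with the bucket's members as positives and a random pseudo-negative sample, whose top-scoring unbucketed shots become suggested expansions; the use of scikit-learn's LogisticRegression is an illustrative choice, not necessarily the paper's learner.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def expand_bucket(features: np.ndarray, bucket: set[int], n_add: int = 25,
                  n_neg: int = 500, seed: int = 0) -> list[int]:
    """Suggest shots to add to a bucket: train a classifier with the
    bucket as positives and a random pseudo-negative sample, then
    return the highest-scoring shots not yet in the bucket."""
    rng = np.random.default_rng(seed)
    pos = np.fromiter(bucket, dtype=int)
    rest = np.setdiff1d(np.arange(len(features)), pos)
    neg = rng.choice(rest, size=min(n_neg, len(rest)), replace=False)

    X = np.vstack([features[pos], features[neg]])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    scores = clf.predict_proba(features[rest])[:, 1]  # P(belongs to bucket)
    return rest[np.argsort(-scores)[:n_add]].tolist()
```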

    xTAS and ThemeStreams: Extendable Text Analysis Service and its Usage in a Topic Monitoring Tool

    xTAS is an extendable multi-user text analysis service for large-scale multilingual document analysis developed at the University of Amsterdam. It can process large numbers of documents in a timely manner through a web interface that can be used by multiple users at once. In this demonstration paper we present recent additions, which include semanticization, on-the-fly TF-IDF model generation, and on-the-fly co-occurrence metrics. Furthermore, we demonstrate ThemeStreams, a novel topic monitoring tool built on top of xTAS.
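
    xTAS's API is not shown here, but "on-the-fly TF-IDF model generation" refers to the standard weighting computed over whatever corpus is at hand: term frequency scaled by the log inverse document frequency. A self-contained version of that computation:

```python
import math
from collections import Counter

def tfidf(docs: list[list[str]]) -> list[dict[str, float]]:
    """Compute TF-IDF weights per tokenised document: term frequency
    times log of inverse document frequency over the given corpus."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

corpus = [["budget", "debate", "tax"],
          ["election", "debate"],
          ["tax", "reform", "budget", "budget"]]
print(tfidf(corpus)[2]["budget"])  # frequent in the doc, rarer in the corpus
```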

    ThemeStreams: Visualizing the stream of themes discussed in politics

    The political landscape is fluid. Discussions are always ongoing and new “hot topics” continue to appear in the headlines. But what made people start talking about a topic? And who started it? Because of the speed at which discussions sometimes take place, this can be difficult to track down. We describe ThemeStreams: a demonstrator that maps political discussions to themes and influencers, and we illustrate how this mapping is used in an interactive visualization that shows which themes are being discussed and helps us answer the question “Who put this issue on the map?” in streams of political data.
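
    The demonstrator's pipeline is not described in this abstract. Assuming each document has already been mapped to a theme and carries a timestamp (both assumptions), the data shaping behind a theme stream reduces to counting documents per theme per time window, which is the series a stream visualization plots:

```python
from collections import defaultdict
from datetime import datetime

def theme_stream(docs, window: str = "%Y-%m-%d"):
    """Aggregate (timestamp, theme) pairs into per-window counts per
    theme, the series a stacked stream graph consumes."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, theme in docs:
        counts[theme][ts.strftime(window)] += 1
    return {theme: dict(per_day) for theme, per_day in counts.items()}

docs = [(datetime(2013, 5, 1), "healthcare"),
        (datetime(2013, 5, 1), "budget"),
        (datetime(2013, 5, 2), "healthcare")]
print(theme_stream(docs))
```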