
    Large scale evaluations of multimedia information retrieval: the TRECVid experience

    Information Retrieval is a supporting technique which underpins a broad range of content-based applications including retrieval, filtering, summarisation, browsing, classification, clustering, automatic linking, and others. Multimedia information retrieval (MMIR) represents those applications when applied to multimedia information such as image, video and music. In this presentation and extended abstract we are primarily concerned with MMIR as applied to information in digital video format. We begin with a brief overview of large-scale evaluations of IR tasks in areas such as text, image and music, to illustrate that this phenomenon is not restricted to MMIR on video. The main contribution, however, is a set of pointers to, and a summarisation of, the work done as part of TRECVid, the annual benchmarking exercise for video retrieval tasks.

    Video retrieval using dialogue, keyframe similarity and video objects

    There are several different approaches to video retrieval, varying in sophistication and in the level of their deployment. Some are well known; others are not yet within our reach for large volumes of video. In particular, object-based video retrieval, where an object from within a video is used for retrieval, is often particularly desirable from a searcher's perspective. In this paper we introduce Físchlár-Simpsons, a system providing retrieval from an archive of video using any combination of text searching, keyframe image matching, shot-level browsing and object-based retrieval. The system is driven by user feedback and interaction rather than the conventional search/browse/search metaphor, and its purpose is to explore how users can use detected objects in a shot as part of a retrieval task.
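
    One way such a system could combine evidence from its different search modes is a simple weighted fusion of per-shot scores. The sketch below is purely illustrative: the modalities, weights and score ranges are assumptions, not Físchlár-Simpsons' actual fusion scheme.

```python
# Illustrative fusion of text, keyframe and object evidence per video shot.
# The weights and the score dictionaries are assumptions for this sketch,
# not the scheme used by the system described above.

def fuse_shot_scores(text_scores, keyframe_scores, object_scores,
                     weights=(0.5, 0.3, 0.2)):
    """Combine per-shot scores from three retrieval modalities.

    Each *_scores argument maps a shot id to a score in [0, 1];
    shots missing from a modality contribute 0 for that modality.
    """
    w_text, w_kf, w_obj = weights
    shot_ids = set(text_scores) | set(keyframe_scores) | set(object_scores)
    fused = {}
    for shot in shot_ids:
        fused[shot] = (w_text * text_scores.get(shot, 0.0)
                       + w_kf * keyframe_scores.get(shot, 0.0)
                       + w_obj * object_scores.get(shot, 0.0))
    # Highest combined score first.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Example: three shots, each scored by a different subset of modalities.
print(fuse_shot_scores({"s1": 0.9, "s2": 0.4},
                       {"s2": 0.8},
                       {"s1": 0.2, "s3": 0.7}))
```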

    Video Logo Retrieval based on local Features

    Estimation of the frequency and duration of logos in videos is important and challenging in the advertisement industry as a way of estimating the impact of ad purchases. Since logos occupy only a small area in the videos, popular image retrieval methods can fail. This paper develops an algorithm called Video Logo Retrieval (VLR), an image-to-video retrieval algorithm based on the spatial distribution of local image descriptors that measure the distance between the query image (the logo) and a collection of video images. VLR uses local features to overcome the weakness of global feature-based models such as convolutional neural networks (CNNs). Meanwhile, VLR is flexible and does not require training after setting some hyper-parameters. The performance of VLR is evaluated on two challenging open benchmark tasks (SoccerNet and Stanford I2V) and compared with other state-of-the-art logo retrieval or detection algorithms. Overall, VLR shows significantly higher accuracy than the existing methods. (Comment: accepted by ICIP 20; contact author: Bochen Guan.)
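
    The paper's spatial-distribution measure over local descriptors is not reproduced here, but the general idea of image-to-video retrieval with local features can be sketched with off-the-shelf components. The sketch below uses OpenCV's ORB features and a ratio test as stand-ins; the match threshold, frame-sampling step and ratio value are illustrative assumptions.

```python
# Sketch only: image-to-video matching with local features (ORB + ratio test).
# VLR's actual descriptor and spatial-distribution distance are not reproduced;
# the threshold, sampling step and ratio below are illustrative assumptions.
import cv2

def good_match_count(des_query, des_frame, matcher, ratio=0.75):
    """Count distinctive local-feature matches between query and frame descriptors."""
    if des_query is None or des_frame is None or len(des_frame) < 2:
        return 0
    pairs = matcher.knnMatch(des_query, des_frame, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def scan_video_for_logo(logo_path, video_path, threshold=15, step=30):
    """Return sampled frame indices where the logo plausibly appears."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    logo = cv2.imread(logo_path, cv2.IMREAD_GRAYSCALE)
    _, des_logo = orb.detectAndCompute(logo, None)

    cap = cv2.VideoCapture(video_path)
    hits, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, des_frame = orb.detectAndCompute(gray, None)
            if good_match_count(des_logo, des_frame, matcher) >= threshold:
                hits.append(idx)
        idx += 1
    cap.release()
    return hits
```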

    Image/video indexing, retrieval and summarization based on eye movement

    Information retrieval is one of the most fundamental functions in this information era. There is ambiguity in the scope of users' interest in image/video retrieval, since an image usually contains one or more main objects in focus as well as other objects which are considered as "background". This ambiguity often reduces the accuracy of image-based retrieval such as query by image example. Gaze detection is a promising approach to implicitly detecting the focus of interest in an image or in video data, so as to improve the performance of image retrieval, filtering and video summarization. In this paper, image/video indexing, retrieval and summarization based on gaze detection are described.
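
    As a rough illustration of how gaze data might inform retrieval, the sketch below weights image regions by fixation density and compares two images on their most-attended regions. The grid layout, cosine similarity and weighting scheme are assumptions for the sketch, not the method described in the paper.

```python
# Illustrative only: weight grid regions of an image by gaze-fixation density,
# then compare two images mainly on their most-attended regions.
import numpy as np

def fixation_weights(fixations, grid=(4, 4)):
    """fixations: list of (x, y) in [0, 1] normalised image coordinates.
    Returns a grid of weights summing to 1, one cell per image region."""
    w = np.zeros(grid)
    for x, y in fixations:
        row = min(int(y * grid[0]), grid[0] - 1)
        col = min(int(x * grid[1]), grid[1] - 1)
        w[row, col] += 1
    if w.sum() == 0:                       # no fixations recorded: fall back to uniform
        return np.full(grid, 1.0 / (grid[0] * grid[1]))
    return w / w.sum()

def gaze_weighted_similarity(feat_a, feat_b, weights):
    """feat_a, feat_b: per-region feature arrays of shape (rows, cols, dim).
    Cosine similarity per region, weighted by the viewer's fixation density."""
    num = (feat_a * feat_b).sum(axis=-1)
    den = np.linalg.norm(feat_a, axis=-1) * np.linalg.norm(feat_b, axis=-1) + 1e-9
    return float((weights * num / den).sum())
```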

    Using video objects and relevance feedback in video retrieval

    Video retrieval is mostly based on using text from dialogue, and this remains the most significant component despite progress in other aspects. One problem with this is when a searcher wants to locate video based on what is appearing in the video rather than what is being spoken about. Alternatives such as automatically detected features and image-based keyframe matching can be used, though these still need further improvement in quality. One other modality for video retrieval is based on segmenting objects from video and allowing end users to use these as part of querying. This uses similarity between query objects and objects from video, and in theory allows retrieval based on what is actually appearing on-screen. The main hurdles to greater use of this are the overhead of object segmentation on large amounts of video and the issue of whether we can actually achieve effective object-based retrieval. We describe a system to support object-based video retrieval where a user selects example video objects as part of the query. During a search a user builds up a set of these which are matched against objects previously segmented from a video library. This match is based on the MPEG-7 Dominant Colour, Shape Compaction and Texture Browsing descriptors. We use a user-driven, semi-automated segmentation process to segment the video archive which is very accurate and is faster than conventional video annotation.
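
    A minimal sketch of how per-descriptor distances might be combined when matching a query object against segmented library objects is given below. The weighted-sum combination and the descriptor names are assumptions, and the MPEG-7 distance functions themselves are treated as black boxes.

```python
# Illustrative combination of per-descriptor distances for object matching.
# The MPEG-7 Dominant Colour / Shape Compaction / Texture Browsing distance
# functions are not implemented here; each is assumed to return a value in [0, 1].

def combined_object_distance(query_obj, db_obj, distance_fns, weights):
    """query_obj, db_obj: dicts mapping descriptor name -> descriptor value.
    distance_fns: descriptor name -> distance function.
    weights: descriptor name -> weight (assumed to sum to 1)."""
    return sum(weights[name] * distance_fns[name](query_obj[name], db_obj[name])
               for name in distance_fns)

def rank_objects(query_obj, library, distance_fns, weights):
    """Return library objects sorted by ascending combined distance to the query."""
    return sorted(library,
                  key=lambda obj: combined_object_distance(query_obj, obj,
                                                           distance_fns, weights))
```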

    Físchlár-TRECVid2004: Combined text- and image-based searching of video archives

    The Físchlár-TRECVid-2004 system was developed for Dublin City University's participation in the 2004 TRECVid video information retrieval benchmarking activity. The system allows search and retrieval of video shots from over 60 hours of content. The shot retrieval engine employed is based on a combination of query text matched against spoken dialogue, combined with image-image matching where a still image (sourced externally) or a keyframe (from within the video archive itself) is matched against all keyframes in the video archive. Three separate text retrieval engines are employed, for closed caption text, automatic speech recognition and video OCR. Visual shot matching is primarily based on MPEG-7 low-level descriptors. The system supports relevance feedback at the shot level, enabling augmentation and refinement using relevant shots located by the user. Two variants of the system were developed: one that supports both text- and image-based searching, and one that supports image-only search. A user evaluation experiment compared the use of the two systems. Results show that while the system combining text- and image-based searching achieves greater retrieval effectiveness, users make more varied and extensive queries with the image-only version.
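
    On the visual side, shot-level relevance feedback can be sketched as query expansion: keyframes of shots the user marks relevant are added to the query set and the archive is re-scored. The sketch below uses cosine similarity over generic feature vectors and takes the best match per shot; both choices are assumptions, not the system's actual MPEG-7 descriptor matching.

```python
# Sketch of shot-level relevance feedback on the visual side. The similarity
# measure and the max-over-queries scoring are assumptions for this sketch.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rescore_with_feedback(query_vecs, relevant_vecs, archive):
    """archive: dict of shot id -> keyframe feature vector.
    query_vecs: original query keyframe vectors; relevant_vecs: keyframes of
    shots the user marked relevant. Shots are ranked by their best similarity
    to any vector in the expanded query set."""
    expanded = list(query_vecs) + list(relevant_vecs)
    scores = {shot: max(cosine(vec, q) for q in expanded)
              for shot, vec in archive.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```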

    Image Retrieval Using Circular Hidden Markov Models with a Garbage State

    Shape-based image and video retrieval is an active research topic in multimedia information retrieval. It is well known that there are significant variations in shapes of the same category extracted from images and videos. In this paper, we propose to use circular hidden Markov models for shape recognition and image retrieval. In our approach, we use a garbage state to explicitly deal with shape mismatch caused by shape deformation and occlusion. We propose a modified circular hidden Markov model (HMM) for shape-based image retrieval and then use circular HMMs with a garbage state to further improve the performance. To evaluate the proposed algorithms, we have conducted experiments using the database of the MPEG-7 Core Experiments Shape-1, Part B. The experiments show that our approaches are robust to shape deformations such as shape variations and occlusion. The performance of our approaches is comparable to that of state-of-the-art shape-based image retrieval systems in terms of accuracy and speed.
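
    To make the topology concrete, the sketch below builds a toy circular transition matrix with an added garbage state that any shape state can detour through, and scores an observation sequence with the scaled forward algorithm. The transition probabilities and discrete emission model are illustrative assumptions, not the paper's model.

```python
# Toy circular HMM with a garbage state, scored with the scaled forward algorithm.
# Topology, probabilities and emissions are assumptions for this sketch only.
import numpy as np

def circular_transitions_with_garbage(n_states, p_stay=0.6, p_next=0.3, p_garbage=0.1):
    """Ring of n_states shape states plus one garbage state (index n_states).
    Each shape state may stay, move to the next state on the ring, or detour
    through the garbage state; the garbage state returns uniformly to the ring."""
    S = n_states + 1
    T = np.zeros((S, S))
    for i in range(n_states):
        T[i, i] = p_stay
        T[i, (i + 1) % n_states] = p_next      # circular: last state wraps to first
        T[i, n_states] = p_garbage             # absorb deformed/occluded segments
    T[n_states, :n_states] = 1.0 / n_states
    return T

def forward_log_likelihood(obs, start, trans, emit):
    """obs: sequence of discrete symbols; start: (S,) initial probabilities;
    trans: (S, S) transition matrix; emit: (S, V) emission matrix.
    Returns log P(obs | model) via the scaled forward algorithm."""
    alpha = start * emit[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        c = alpha.sum()                         # rescale to avoid underflow
        log_lik += np.log(c)
        alpha = alpha / c
    return log_lik

# Example: 8 shape states + garbage, 16 discrete contour symbols, random emissions.
rng = np.random.default_rng(0)
T = circular_transitions_with_garbage(8)
E = rng.dirichlet(np.ones(16), size=9)          # (states, symbols)
pi = np.full(9, 1.0 / 9)                        # uniform start over all states
print(forward_log_likelihood([3, 3, 7, 12, 7], pi, T, E))
```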

    Simulated testing of an adaptive multimedia information retrieval system

    The Semantic Gap is considered to be a bottleneck in image and video retrieval. One way to increase the communication between user and system is to take advantage of the user's actions with the system, e.g. to infer the relevance, or otherwise, of a video shot viewed by the user. In this paper we introduce a novel video retrieval system and propose a model of implicit information for interpreting the user's actions with the interface. The assumptions on which this model was created are then analysed in an experiment using simulated users, based on relevance judgements, to compare the results of explicit and implicit retrieval cycles. Our model seems to enhance retrieval results; these are presented and discussed in the final section.
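
    A minimal sketch of an implicit-feedback model of this kind is shown below: observed user actions on shots are mapped to weights and accumulated into pseudo relevance scores for the next retrieval cycle. The action types and weights are assumptions for the sketch, not the model evaluated in the paper.

```python
# Illustrative implicit-relevance scoring: the actions and weights below are
# assumptions for this sketch, not those used in the paper's model.
ACTION_WEIGHTS = {
    "play": 0.3,               # user played the shot
    "play_full": 0.6,          # user watched the shot to the end
    "browse_neighbours": 0.2,  # user browsed adjacent shots
    "save": 1.0,               # user saved the shot to the result list
}

def implicit_relevance(action_log):
    """action_log: iterable of (shot_id, action) pairs observed in one session.
    Returns shot id -> implicit relevance score, clipped to [0, 1], which can
    be fed into the next retrieval cycle as pseudo relevance feedback."""
    scores = {}
    for shot, action in action_log:
        scores[shot] = scores.get(shot, 0.0) + ACTION_WEIGHTS.get(action, 0.0)
    return {shot: min(score, 1.0) for shot, score in scores.items()}

# Example session: one shot played and saved, another only played.
print(implicit_relevance([("s12", "play"), ("s12", "save"), ("s40", "play")]))
```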