
    Measuring the influence of concept detection on video retrieval

    There is an increasing emphasis on including semantic concept detection as part of video retrieval. This represents a modality for retrieval quite different from metadata-based and keyframe-similarity-based approaches. One of the premises on which its success rests is that good-quality detection is available to guarantee retrieval quality. But how good does the concept detection actually need to be? Is it possible to achieve good retrieval quality even with poor-quality concept detection, and if so, what is the 'tipping point' below which detection accuracy ceases to be beneficial? In this paper we explore this question using a collection of rushes video in which we artificially vary the quality of detection of semantic features and study the impact on the resulting retrieval. Our results show that improving or degrading the performance of concept detectors is not directly reflected in retrieval performance, which raises interesting questions about how accurate concept detection really needs to be.
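    The abstract does not spell out the paper's degradation protocol, but the core idea of sweeping detector quality and measuring the retrieval effect can be sketched in a few lines. In the hypothetical sketch below, the noise model (a convex blend of ground truth and uniform noise), the 5% concept prevalence, and average precision as the retrieval measure are illustrative assumptions, not details taken from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def average_precision(truth, scores):
        # Rank shots by descending score and average the precision
        # obtained at the rank of each relevant shot.
        order = np.argsort(-scores)
        hits = truth[order]
        precision_at_k = np.cumsum(hits) / (np.arange(len(hits)) + 1)
        return (precision_at_k * hits).sum() / max(hits.sum(), 1)

    n_shots = 10_000
    # Assume 5% of shots genuinely contain the concept (arbitrary prevalence).
    truth = (rng.random(n_shots) < 0.05).astype(float)

    for quality in [0.0, 0.25, 0.5, 0.75, 1.0]:
        # Simulated detector: a blend of ground truth and uniform noise,
        # so quality = 1 is a perfect detector and quality = 0 a random one.
        scores = quality * truth + (1 - quality) * rng.random(n_shots)
        print(f"detector quality {quality:.2f} -> retrieval AP "
              f"{average_precision(truth, scores):.3f}")
    ```

    Plotting AP against detector quality over such a sweep is one way to make the paper's 'tipping point' question concrete: the curve need not rise linearly with detector accuracy.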

    TRECVID 2007 - Overview


    The microbial weathering of uranyl phosphate minerals.


    Symbiosis between the TRECVid benchmark and video libraries at the Netherlands Institute for Sound and Vision

    Audiovisual archives are investing in large-scale digitisation of their analogue holdings and, in parallel, ingesting an ever-increasing volume of born-digital files into their digital storage facilities. Digitisation has opened up new access paradigms and boosted re-use of audiovisual content. Query-log analyses show the shortcomings of manual annotation, so archives are complementing these annotations by developing novel search engines that automatically extract information from both the audio and the visual tracks. Over the past few years, the TRECVid benchmark has developed a novel relationship with the Netherlands Institute for Sound and Vision (NISV) which goes beyond NISV simply providing data and use cases to TRECVid. Prototype and demonstrator systems developed as part of TRECVid are set to become a key driver in improving the quality of search engines at NISV and will ultimately help other audiovisual archives to offer more efficient and more fine-grained access to their collections. This paper reports the experiences of NISV in leveraging the activities of the TRECVid benchmark.

    Creating a web-scale video collection for research

    This paper begins by considering a number of important design questions for a web-scale, widely available multimedia test collection intended to support long-term scientific evaluation and comparison of content-based video analysis and exploitation systems. Such systems would include the kinds of functionality already explored within the annual TRECVid benchmarking activity, such as search, semantic concept detection, and automatic summarisation. We then report on our progress in creating such a multimedia collection, which we believe to be web-scale and which will support a next generation of benchmarking activities for content-based video operations, and we report on our plans for putting this collection, the IACC.1 collection, to use.

    Neurological modeling of what experts vs. non-experts find interesting

    The P3 and related ERPs have a long history of use in identifying responses to stimulus events in oddball-style experiments. In this work we describe the ongoing development of oddball-style experiments which attempt to capture what a subject finds of interest, or curious, when presented with a set of visual stimuli, i.e. images. This joint work between Dublin City University (DCU) and the European Space Agency's Advanced Concepts Team (ESA ACT) is motivated by the challenges of autonomous space exploration, where the time lag involved in sending data back to Earth for analysis and then communicating an action or decision back to the spacecraft means that decision-making is slow. Also, when extraterrestrial sensors capture data, the determination of what data to send back to Earth is driven by an expertly devised rule set; that is, scientists must determine a priori what will be of interest. This cannot adapt to novel or unexpected data that a scientist might find curious. Our work attempts to determine whether it is possible to capture what a scientist (subject) finds of interest (curious) in a stream of image data through EEG measurement. One of our challenges is to determine the difference between an expert's and a lay subject's response to a stimulus. To investigate the theorised difference, we use a set of lifelog images as our dataset. Lifelog images are first-person images taken by a small wearable camera which continuously records images while it is worn. We have devised two key experiments for use with this data and two classes of subject. Our subjects are the person who wore the camera from which our collection of lifelog images was taken, who becomes our expert, and the remaining subjects, who have no association with the captured images. Our first experiment is a traditional oddball experiment where the oddballs are images of people having coffee; it can be thought of as a directed information-seeking task. The second experiment presents a stream of lifelog images to the subjects and records which images cause a stimulus response. Once the data from these experiments has been captured, our task is to compare the responses of the expert and lay subject groups, to determine whether there are any commonalities or any distinct differences between them. If the latter is the case, the objective is then to investigate methods for capturing the properties of images which cause an expert to be interested in a presented image. Further novelty is added to our work by the fact that we are using entry-level off-the-shelf EEG devices, consisting of 4 nodes with a sampling rate of 255 Hz.
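    As a rough illustration of the measurement such experiments involve (not the authors' actual pipeline), the sketch below epochs a single EEG channel around image onsets and compares the mean amplitude in the canonical P3 window (300-500 ms post-stimulus) between oddball and standard images. The window limits, the baseline interval, and the input arrays eeg, oddball_onsets, and standard_onsets are all hypothetical assumptions:

    ```python
    import numpy as np

    FS = 255               # sampling rate of the entry-level EEG device (Hz)
    PRE, POST = 0.2, 0.8   # epoch window around each stimulus onset (s)

    def epochs(eeg, onsets):
        # Cut one fixed-length, baseline-corrected epoch per stimulus onset
        # from a single EEG channel; onsets are sample indices and epochs
        # falling outside the recording are skipped.
        pre, post = int(PRE * FS), int(POST * FS)
        segs = [eeg[t - pre : t + post] for t in onsets
                if t - pre >= 0 and t + post <= len(eeg)]
        return np.array([s - s[:pre].mean() for s in segs])

    def p3_amplitude(erp):
        # Mean amplitude over the canonical P3 window, 300-500 ms post-stimulus.
        lo, hi = int((PRE + 0.30) * FS), int((PRE + 0.50) * FS)
        return erp[lo:hi].mean()

    # eeg: one recorded channel; oddball_onsets / standard_onsets: sample
    # indices of the two stimulus classes (all hypothetical inputs).
    # erp_odd = epochs(eeg, oddball_onsets).mean(axis=0)
    # erp_std = epochs(eeg, standard_onsets).mean(axis=0)
    # print("P3 effect:", p3_amplitude(erp_odd) - p3_amplitude(erp_std))
    ```

    Comparing such P3 effect sizes between the expert and the lay group is one plausible way to quantify the expert/non-expert difference the abstract describes.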