
    The uncertain representation ranking framework for concept-based video retrieval

    Concept-based video retrieval often relies on imperfect and uncertain concept detectors. We propose a general ranking framework to define effective and robust ranking functions by explicitly addressing detector uncertainty. It can cope with multiple concept-based representations per video segment, and it allows the re-use of effective text retrieval functions that are defined on similar representations. The final ranking status value is a weighted combination of two components: the expected score over the possible scores, which represents the risk-neutral choice, and the scores' standard deviation, which represents the risk or opportunity that the score for the actual representation is higher. The framework consistently improves search performance in the shot retrieval task and the segment retrieval task over several baselines in five TRECVid collections and two collections that use simulated detectors of varying performance.
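
    As a rough illustration of the scoring idea in this abstract, the sketch below computes a ranking status value as the expected score plus a weighted standard deviation over a segment's possible representations. The (probability, score) pair format and the risk parameter b are illustrative assumptions, not the paper's actual interface.

        # Hypothetical sketch: expected score plus a weighted standard
        # deviation over the possible representations of one segment.
        import math

        def ranking_status_value(scored_reps, b):
            """scored_reps: (probability, retrieval_score) pairs for one
            segment; b: risk parameter weighting the std-dev component."""
            expected = sum(p * s for p, s in scored_reps)
            variance = sum(p * (s - expected) ** 2 for p, s in scored_reps)
            return expected + b * math.sqrt(variance)

        # Example: two candidate concept-based representations of a segment.
        print(ranking_status_value([(0.7, 2.1), (0.3, 3.4)], b=0.5))

    Setting b above zero rewards score variance (opportunity-seeking); a negative b would penalize it (risk-averse).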

    UTwente does Brave New Tasks for MediaEval 2012: Searching and Hyperlinking

    In this paper we report our experiments and results for the brave new searching and hyperlinking tasks of the MediaEval Benchmark Initiative 2012. The searching task involves finding target video segments based on a short natural language sentence query, and the hyperlinking task involves finding links from the target video segments to other related video segments in the collection, using a set of anchor segments in the videos that correspond to the textual search queries. To find the starting points in the video we used only speech transcripts and metadata as evidence sources, although other visual features (e.g., faces, shots and keyframes) might also affect results for a query. We indexed speech transcripts and metadata; the speech transcripts were indexed both at speech segment level and at sentence level to improve the likelihood of finding jump-in points. For linking video segments, we computed the k-nearest neighbours of video segments using Euclidean distance.
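
    A minimal sketch of the linking step as stated (k-nearest neighbours under Euclidean distance); the segment feature vectors, random placeholders here, would in practice come from the transcript and metadata index.

        # k-NN linking sketch, assuming each video segment is already
        # represented as a fixed-length feature vector (placeholders here).
        import numpy as np

        def k_nearest_segments(anchor_vec, segment_vecs, k=10):
            """Indices of the k segments closest to the anchor (Euclidean)."""
            dists = np.linalg.norm(segment_vecs - anchor_vec, axis=1)
            return np.argsort(dists)[:k]

        rng = np.random.default_rng(0)
        segments = rng.random((100, 5))        # stand-in segment vectors
        print(k_nearest_segments(segments[0], segments, k=5))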

    DCU at MediaEval 2011: Rich Speech Retrieval (RSR)

    We describe our runs and results for the Rich Speech Retrieval (RSR) Task at MediaEval 2011. Our runs examine the use of alternative segmentation methods on the provided ASR transcripts to locate the beginning of the topic, assuming that this will capture, or get close to, the starting point of the relevant segment; the combination of various types of queries and weighting of metadata to move the relevant segment higher in the ranked list; and different ASR transcripts to compare the influence of transcript quality. Our results show that newer versions of the transcripts and the use of metadata produce better results on average. So far we have not used information about the illocutionary act type corresponding to each query, but analysis of the retrieval results shows differences in behaviour for queries associated with certain classes of act.
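
    One way to read the metadata-weighting idea is as a per-field weighted sum of retrieval scores; the field names and weights below are purely illustrative, not the values used in the runs.

        # Illustrative field-weighted scoring; 'asr', 'metadata' and the
        # weights are assumptions for this sketch, not the authors' settings.
        def weighted_segment_score(field_scores, weights):
            return sum(w * field_scores.get(f, 0.0) for f, w in weights.items())

        print(weighted_segment_score({'asr': 1.8, 'metadata': 0.9},
                                     {'asr': 1.0, 'metadata': 0.5}))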

    Searching Spontaneous Conversational Speech: Proceedings of the ACM SIGIR Workshop (SSCS2008)


    K-Space at TRECVid 2007

    In this paper we describe K-Space participation in TRECVid 2007. K-Space participated in two tasks, high-level feature extraction and interactive search. We present our approaches for each of these activities and provide a brief analysis of our results. Our high-level feature submission utilized multi-modal low-level features which included visual, audio and temporal elements. Specific concept detectors (such as face detectors) developed by K-Space partners were also used. We experimented with different machine learning approaches including logistic regression and support vector machines (SVMs). Finally, we also experimented with both early and late fusion for feature combination. This year we also participated in interactive search, submitting 6 runs. We developed two interfaces which both utilized the same retrieval functionality. Our objective was to measure the effect of context, which was supported to different degrees in each interface, on user performance. The first of the two systems was a 'shot'-based interface, where the results from a query were presented as a ranked list of shots. The second interface was 'broadcast'-based, where results were presented as a ranked list of broadcasts. Both systems made use of the outputs of our high-level feature submission as well as low-level visual features.
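
    To make the early/late fusion distinction mentioned above concrete, here is a hedged scikit-learn sketch with placeholder features; the real system's features, detectors and fusion weights are assumptions not taken from the paper.

        # Early vs. late fusion sketch with placeholder data (not K-Space's
        # actual features or models).
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        visual = rng.random((200, 32))        # stand-in visual features
        audio = rng.random((200, 16))         # stand-in audio features
        labels = rng.integers(0, 2, 200)      # stand-in concept labels

        # Early fusion: concatenate modalities, train a single classifier.
        early = SVC(probability=True).fit(np.hstack([visual, audio]), labels)

        # Late fusion: one classifier per modality, then average their scores.
        m_vis = SVC(probability=True).fit(visual, labels)
        m_aud = SVC(probability=True).fit(audio, labels)
        fused = (m_vis.predict_proba(visual)[:, 1] +
                 m_aud.predict_proba(audio)[:, 1]) / 2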

    AXES at TRECVID 2012: KIS, INS, and MED

    The AXES project participated in the interactive instance search task (INS), the known-item search task (KIS), and the multimedia event detection task (MED) for TRECVid 2012. As in our TRECVid 2011 system, we used nearly identical search systems and user interfaces for both INS and KIS. Our interactive INS and KIS systems focused this year on using classifiers trained at query time with positive examples collected from external search engines. Participants in our KIS experiments were media professionals from the BBC; our INS experiments were carried out by students and researchers at Dublin City University. We performed comparatively well in both experiments. Our best KIS run found 13 of the 25 topics, and our best INS runs outperformed all other submitted runs in terms of P@100. For MED, the system presented was based on a minimal number of low-level descriptors, which we chose to be as large as computationally feasible. These descriptors are aggregated to produce high-dimensional video-level signatures, which are used to train a set of linear classifiers. Our MED system achieved the second-best score of all submitted runs in the main track, and the best score in the ad-hoc track, suggesting that a simple system based on state-of-the-art low-level descriptors can give relatively high performance. This paper describes in detail our KIS, INS, and MED systems and the results and findings of our experiments.
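
    The query-time classifier idea might look roughly like the sketch below: positives fetched from an external search engine, negatives from a fixed pool, and a linear classifier ranking the collection. The function names and the choice of LinearSVC are assumptions for illustration, not the AXES implementation.

        # Rough query-time training sketch; feature extraction and the
        # external-search download are assumed to happen elsewhere.
        import numpy as np
        from sklearn.svm import LinearSVC

        def query_time_ranker(positive_feats, negative_feats, collection_feats):
            """Rank collection items with a classifier trained on the fly."""
            X = np.vstack([positive_feats, negative_feats])
            y = np.r_[np.ones(len(positive_feats)),
                      np.zeros(len(negative_feats))]
            clf = LinearSVC(C=1.0).fit(X, y)
            # Higher decision values rank earlier.
            return np.argsort(-clf.decision_function(collection_feats))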