
    Interactive Video Retrieval in the Age of Deep Learning - Detailed Evaluation of VBS 2019

    Although automatic content analysis has made remarkable progress over the last decade, mainly due to significant advances in machine learning, interactive video retrieval is still a very challenging problem with increasing relevance in practical applications. The Video Browser Showdown (VBS) is an annual evaluation competition that pushes the limits of interactive video retrieval with state-of-the-art tools, tasks, data, and evaluation metrics. In this paper, we analyse the results and outcome of the 8th iteration of the VBS in detail. We first give an overview of the novel and considerably larger V3C1 dataset and the tasks that were performed during VBS 2019. We then describe the search systems of the six international teams in terms of features and performance. Finally, we perform an in-depth analysis of the per-team success ratio and relate it to the search strategies that were applied, the most popular features, and the problems that were experienced. A large part of this analysis was conducted on logs collected during the competition itself, which gives further insights into typical search behavior and the differences between expert and novice users. Our evaluation shows that textual search and content browsing are the most important aspects in terms of logged user interactions. Furthermore, we observe a trend towards deep learning-based features, especially in the form of labels generated by artificial neural networks. Nevertheless, for some tasks, very specific content-based search features are still being used. We expect these findings to contribute to future improvements of interactive video search systems.
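    As an illustration of the log-based analysis described in the abstract, the following minimal sketch aggregates interaction counts per category and per user type. The log format, field names, and example records are assumptions made for illustration only and do not reflect the actual VBS log schema.

    ```python
    from collections import Counter

    # Hypothetical interaction log records: each entry notes the team, the user
    # type (expert or novice), and the interaction category (e.g. textual query,
    # browsing, content-based search). The schema is assumed, not taken from VBS.
    logs = [
        {"team": "A", "user": "expert", "category": "text_query"},
        {"team": "A", "user": "novice", "category": "browsing"},
        {"team": "B", "user": "expert", "category": "content_search"},
        {"team": "B", "user": "novice", "category": "text_query"},
        {"team": "B", "user": "novice", "category": "browsing"},
    ]

    def interaction_profile(records, user_type=None):
        """Count logged interactions per category, optionally filtered by user type."""
        filtered = (r for r in records if user_type is None or r["user"] == user_type)
        return Counter(r["category"] for r in filtered)

    print("experts:", interaction_profile(logs, "expert"))
    print("novices:", interaction_profile(logs, "novice"))
    ```

    Comparing the two resulting category profiles is one simple way to surface the expert/novice differences and the dominance of textual search and browsing that the abstract reports.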

    On the User-centric Comparative Remote Evaluation of Interactive Video Search Systems

    In research on video retrieval systems, comparative assessments during dedicated retrieval competitions provide priceless insights into the performance of individual systems. The scope and depth of such evaluations are unfortunately hard to improve, due to limitations imposed by the set-up costs, logistics, and organizational complexity of large events. We show that this easily impairs the statistical significance of the collected results and the reproducibility of the competition outcomes. In this paper, we present a methodology for the remote comparative evaluation of content-based video retrieval systems, demonstrate that such evaluations scale up to sizes that reliably produce statistically robust results, and propose additional measures that increase the replicability of the experiment. The proposed remote evaluation methodology forms a major contribution towards open science in interactive retrieval benchmarks. At the same time, the detailed evaluation reports form an interesting source of new observations about many subtle, previously inaccessible aspects of video retrieval.
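    To make the statistical-significance argument concrete, the sketch below runs a two-sided permutation test on hypothetical per-task scores of two retrieval systems. The scores and the choice of test are assumptions for illustration, not the evaluation procedure used in the paper; with only a handful of tasks, as in a small on-site competition, such a test rarely reaches significance, which is exactly the limitation a scaled-up remote evaluation addresses.

    ```python
    import random

    # Hypothetical per-task scores for two systems in a comparative evaluation
    # (assumed values, not real competition data).
    scores_a = [0.82, 0.40, 0.91, 0.55, 0.73, 0.60, 0.88, 0.47]
    scores_b = [0.78, 0.35, 0.70, 0.52, 0.69, 0.58, 0.80, 0.44]

    def permutation_test(a, b, iterations=10_000, seed=0):
        """Two-sided permutation test on the difference of mean per-task scores."""
        rng = random.Random(seed)
        observed = abs(sum(a) / len(a) - sum(b) / len(b))
        pooled = list(a) + list(b)
        hits = 0
        for _ in range(iterations):
            rng.shuffle(pooled)
            perm_a, perm_b = pooled[: len(a)], pooled[len(a):]
            diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
            if diff >= observed:
                hits += 1
        return hits / iterations

    print("p-value:", permutation_test(scores_a, scores_b))
    ```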