Formulating queries for collecting training examples in visual concept classification

Abstract

Video content can be automatically analysed and indexed using trained classifiers which map low-level features to semantic concepts. Such classifiers require training data consisting of sets of images that contain the concepts in question, and it has recently been shown that this training data can be gathered by issuing text-based searches to image databases on the internet. Formulating the text queries that locate these training images is the challenge we address here. In this paper we present preliminary results, on TRECVid data, of concept classification using automatically crawled images as training data, and we compare these results with those obtained from manually annotated training sets.
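To make the pipeline described in the abstract concrete, the sketch below shows one plausible shape it could take: a text query formulated from a concept name retrieves candidate positive images, low-level features are extracted, and a classifier is trained. This is an illustrative sketch only, not code from the paper; `search_images` and `extract_features` are hypothetical placeholders standing in for an image-search API and a feature extractor.

```python
# Illustrative sketch (not the paper's implementation): crawl positive
# training images for a concept via a text query, then train a binary
# classifier on low-level features.

from typing import List
import numpy as np
from sklearn.svm import SVC

def search_images(query: str, limit: int = 200) -> List[np.ndarray]:
    """Hypothetical: return images crawled from a text-based image search."""
    raise NotImplementedError

def extract_features(image: np.ndarray) -> np.ndarray:
    """Hypothetical: map an image to a low-level feature vector."""
    raise NotImplementedError

def train_concept_classifier(concept: str,
                             negatives: List[np.ndarray]) -> SVC:
    # The text query is formulated from the concept name; how best to
    # phrase such queries is the question the paper studies.
    positives = search_images(concept)
    X = np.array([extract_features(im) for im in positives + negatives])
    y = np.array([1] * len(positives) + [0] * len(negatives))
    clf = SVC(probability=True)  # probabilistic output, e.g. for ranking shots
    clf.fit(X, y)
    return clf
```

Under this framing, the manually annotated baseline differs only in where `positives` comes from, which is what makes the crawled-versus-annotated comparison in the paper possible.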
