Image tagging, also known as image annotation or image concept detection, has been extensively studied in the literature. However, most existing approaches struggle to achieve satisfactory performance owing to the scarcity and unreliability of manually labeled training data. In this paper, we propose a new image tagging scheme, termed social assisted media tagging (SAMT), which leverages abundant user-generated images and their associated tags as "social assistance" for learning classifiers. We focus on addressing two major challenges: (a) the noisy tags associated with web images; and (b) the desired robustness of the tagging model. We present a joint image tagging framework that simultaneously refines the erroneous tags of web images and learns reliable image classifiers. In particular, we devise a novel tag refinement module that identifies and eliminates noisy tags by exploiting the low-rank nature of the tag matrix and the structured sparsity of the tag errors. We further develop a robust image tagging module based on the ℓ2,1-norm for learning reliable image classifiers. The two modules are coupled within the joint framework so that they reinforce each other. Extensive experiments on two real-world social image databases demonstrate the superiority of the proposed approach over existing methods. Copyright 2014 ACM
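The tag refinement idea described above can be illustrated with a generic robust-PCA-style decomposition: the observed image-by-tag matrix T is split into a low-rank matrix Z of refined tags (nuclear-norm penalty) and a structured-sparse error matrix E (ℓ2,1-norm penalty, which zeroes out whole columns). The solver below is a minimal sketch of this generic technique, not the paper's exact algorithm; the function names, the parameter λ, and the μ schedule are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def l21_shrink(X, tau):
    # Column-wise shrinkage: proximal operator of the l2,1 norm,
    # which drives entire columns (tags) of the error matrix to zero.
    norms = np.linalg.norm(X, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

def refine_tags(T, lam=0.3, mu=1.0, n_iter=100):
    # Augmented-Lagrangian (ADMM-style) solver for
    #   min_{Z,E} ||Z||_* + lam * ||E||_{2,1}  s.t.  T = Z + E
    # Z: refined low-rank tag matrix, E: structured-sparse tag errors.
    Z = np.zeros_like(T)
    E = np.zeros_like(T)
    Y = np.zeros_like(T)  # Lagrange multipliers
    for _ in range(n_iter):
        Z = svt(T - E + Y / mu, 1.0 / mu)
        E = l21_shrink(T - Z + Y / mu, lam / mu)
        Y = Y + mu * (T - Z - E)
        mu = min(mu * 1.05, 1e6)  # gradually enforce the constraint
    return Z, E
```

A quick usage check: corrupting one tag column of a synthetic low-rank matrix and running `refine_tags` should place that corruption almost entirely into the corresponding column of E, while Z + E reconstructs T.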