An adaptive technique for content-based image retrieval
We discuss an adaptive approach to content-based image retrieval. It is based on the Ostensive Model of developing information needs, a special kind of relevance-feedback model that learns from implicit user feedback and adds a temporal notion to relevance. The ostensive approach supports content-assisted browsing by visualising the interaction: user-selected images are added to a browsing path, which ends with a set of system recommendations. The suggestions are based on an adaptive query-learning scheme in which the query is learnt from the previously selected images. Our approach adapts the original Ostensive Model, which relied on textual features only, to include content-based features for characterising images. In the proposed scheme, textual and colour features are combined using the Dempster-Shafer theory of evidence combination. Results from a user-centred, work-task-oriented evaluation show that the ostensive interface is preferred over a traditional interface with manual query facilities, owing to its ability to adapt to the user's need, its intuitiveness, and the fluid way in which it operates. A comparison of the nature of the underlying information needs shows that our approach elicits changes in the user's need through the interaction and successfully adapts the retrieval to match those changes. In addition, a preliminary study of the retrieval performance of the ostensive relevance-feedback scheme shows that it can outperform a standard relevance-feedback strategy in terms of image recall in category search.
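The combination step can be made concrete with Dempster's rule of combination. The sketch below combines a textual and a colour relevance score for a single image; the frame of discernment, the score-to-mass mapping and the discount parameter are illustrative assumptions of this sketch, not the paper's exact formulation.

```python
from itertools import product

# Frame of discernment: an image is relevant (R) or not relevant (N).
# Mass functions assign belief to subsets of the frame; the full set
# {'R', 'N'} carries the unassigned (uncertain) mass.

def combine(m1, m2):
    """Dempster's rule: combine two mass functions over the same frame."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def score_to_mass(score, discount=0.2):
    """Turn a [0, 1] relevance score into a mass function, keeping
    `discount` of the mass on the whole frame as ignorance (an assumed mapping)."""
    return {
        frozenset({'R'}): score * (1 - discount),
        frozenset({'N'}): (1 - score) * (1 - discount),
        frozenset({'R', 'N'}): discount,
    }

# Example: combine textual and colour evidence for a single image.
m_text = score_to_mass(0.8)    # textual similarity along the browsing path
m_colour = score_to_mass(0.6)  # colour similarity along the browsing path
m = combine(m_text, m_colour)
print(f"combined belief in relevance: {m[frozenset({'R'})]:.3f}")
```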
RBIR Based on Signature Graph
This paper approaches image retrieval on the basis of local visual features, i.e. region-based image retrieval (RBIR). First, the paper presents a method for extracting interest points, based on the Harris-Laplace detector, to create the feature regions of an image. Next, in order to reduce storage space and speed up querying, the paper builds a binary-signature structure to describe the visual content of an image. Based on the images' binary signatures, the paper builds a signature graph (SG) to classify and store them. The paper then builds an image retrieval algorithm on the SG, using the earth mover's distance (EMD) between binary signatures as the similarity measure. Finally, the paper presents an RBIR retrieval model and experimentally assesses the method on a Corel image database of over 10,000 images.
Comment: 4 pages, 4 figures
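As an illustration of the similarity measure, the sketch below computes the EMD between two signatures modelled as weighted sets of binary codes, using the Hamming distance as the ground distance and a transportation linear program. The signature layout and the ground distance are assumptions of this sketch, not the paper's exact binary-signature construction.

```python
import numpy as np
from scipy.optimize import linprog

def emd(weights1, bins1, weights2, bins2):
    """EMD between two normalised signatures via the transportation LP.

    weights*: 1-D arrays summing to 1; bins*: arrays of binary codes
    (one row per cluster) whose pairwise Hamming distance serves as the
    ground distance."""
    n, m = len(weights1), len(weights2)
    # Ground distance: Hamming distance between binary codes.
    cost = np.array([[np.count_nonzero(b1 != b2) for b2 in bins2]
                     for b1 in bins1], dtype=float)

    # Flow variables f_ij, flattened row-major.
    c = cost.ravel()
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # sum_j f_ij = weights1[i]
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # sum_i f_ij = weights2[j]
    b_eq = np.concatenate([weights1, weights2])

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun  # total transport cost = EMD for normalised weights

# Example: two 3-cluster signatures with 8-bit binary codes.
rng = np.random.default_rng(0)
b1 = rng.integers(0, 2, size=(3, 8))
b2 = rng.integers(0, 2, size=(3, 8))
w1 = np.array([0.5, 0.3, 0.2])
w2 = np.array([0.4, 0.4, 0.2])
print(f"EMD(sig1, sig2) = {emd(w1, b1, w2, b2):.3f}")
```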
Content Based Image Retrieval System Using NOHIS-tree
Content-based image retrieval (CBIR) has been one of the most important research areas in computer vision and is widely used for searching images in huge databases. In this paper we present a CBIR system called NOHIS-Search, based on the NOHIS-tree indexing technique. The two phases of the system are described, and its performance is illustrated on the ImagEval image database. NOHIS-Search was compared with two other CBIR systems: one using the PDDP indexing algorithm and one using sequential search. Results show that NOHIS-Search outperforms both.
Comment: 6 pages, 10th International Conference on Advances in Mobile Computing & Multimedia (MoMM2012)
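For reference, the sequential-search baseline mentioned above amounts to a linear scan over all database feature vectors, as in the sketch below; the feature dimensionality and the Euclidean metric are illustrative assumptions, and NOHIS-tree avoids this full scan by partitioning the database during indexing.

```python
import numpy as np

def sequential_search(query, database, k=10):
    """Return indices of the k nearest database vectors to `query`."""
    dists = np.linalg.norm(database - query, axis=1)  # one distance per image
    return np.argsort(dists)[:k]

# Example: 10,000 images described by 64-dimensional feature vectors.
rng = np.random.default_rng(1)
db = rng.random((10_000, 64))
q = rng.random(64)
print(sequential_search(q, db, k=5))
```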
Automatic Query Image Disambiguation for Content-Based Image Retrieval
Query images presented to content-based image retrieval systems often have various interpretations, making it difficult to identify the search objective pursued by the user. We propose a technique for overcoming this ambiguity while keeping the amount of required user interaction at a minimum. To achieve this, the neighborhood of the query image is divided into coherent clusters from which the user may choose the relevant ones. A novel feedback integration technique is then employed to re-rank the entire database with regard to both the user feedback and the original query. We evaluate our approach on the publicly available MIRFLICKR-25K dataset, where it leads to a relative improvement of average precision by 23% over the baseline retrieval, which does not distinguish between different image senses.
Comment: VISAPP 2018 paper, 8 pages, 5 figures. Source code: https://github.com/cvjena/ai
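A minimal sketch of the disambiguation idea: cluster the query's neighborhood, let the user pick the relevant clusters, then re-rank the database with respect to both the original query and that feedback. The k-means clustering, the scoring formula and the `alpha` weight are illustrative assumptions of this sketch, not the paper's exact method.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_neighbourhood(features, query_idx, n_neighbours=200, n_clusters=5):
    """Split the query's nearest neighbours into coherent clusters."""
    dists = np.linalg.norm(features - features[query_idx], axis=1)
    neighbours = np.argsort(dists)[1:n_neighbours + 1]   # skip the query itself
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(features[neighbours])
    return neighbours, labels

def rerank(features, query_idx, neighbours, labels, chosen, alpha=0.5):
    """Re-rank all images by distance to the query and to the centroids of
    the user-chosen clusters (`chosen` is a set of cluster labels)."""
    query_dist = np.linalg.norm(features - features[query_idx], axis=1)
    centroids = [features[neighbours[labels == c]].mean(axis=0) for c in chosen]
    feedback_dist = np.min(
        [np.linalg.norm(features - c, axis=1) for c in centroids], axis=0)
    score = alpha * query_dist + (1 - alpha) * feedback_dist
    return np.argsort(score)  # ascending: most relevant first

# Example on random features; in practice these would be image descriptors.
rng = np.random.default_rng(2)
feats = rng.random((5_000, 128))
nbrs, lbls = cluster_neighbourhood(feats, query_idx=0)
ranking = rerank(feats, 0, nbrs, lbls, chosen={0, 2})
print(ranking[:10])
```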
Information-Theoretic Active Learning for Content-Based Image Retrieval
We propose Information-Theoretic Active Learning (ITAL), a novel batch-mode active learning method for binary classification, and apply it to acquiring meaningful user feedback in the context of content-based image retrieval. Instead of combining different heuristics such as uncertainty, diversity, or density, our method is based on maximizing the mutual information between the predicted relevance of the images and the expected user feedback regarding the selected batch. We propose suitable approximations to this computationally demanding problem and also integrate an explicit model of user behavior that accounts for possible incorrect labels and unnameable instances. Furthermore, our approach takes into account not only the structure of the data but also the expected change of the model output caused by the user feedback. In contrast to other methods, ITAL turns out to be highly flexible and provides state-of-the-art performance across various datasets, such as MIRFLICKR and ImageNet.
Comment: GCPR 2018 paper (14 pages text + 2 pages references + 6 pages appendix)
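A highly simplified sketch of the selection criterion: choose the batch whose noisy user feedback carries the most mutual information about the predicted relevance. Treating candidates as independent and modelling the user as a symmetric label-noise channel with flip probability `eps` are simplifications of this sketch; the paper's batch-mode formulation additionally models correlations between candidates and unnameable instances.

```python
import numpy as np

def binary_entropy(p):
    """Entropy of a Bernoulli variable, in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def mutual_information(p_relevant, eps=0.1):
    """I(relevance; feedback) per candidate under a noisy-user model:
    the label is flipped with probability eps."""
    p_feedback_pos = p_relevant * (1 - eps) + (1 - p_relevant) * eps
    return binary_entropy(p_feedback_pos) - binary_entropy(eps)

def select_batch(p_relevant, batch_size=8, eps=0.1):
    """Choose the candidates whose feedback is most informative."""
    mi = mutual_information(p_relevant, eps)
    return np.argsort(mi)[::-1][:batch_size]

# Example: relevance probabilities predicted by the current retrieval model.
rng = np.random.default_rng(3)
p = rng.random(1_000)
batch = select_batch(p, batch_size=5)
print(batch, mutual_information(p[batch]))
```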
