621 research outputs found

    A unified framework based on p-norm for feature aggregation in content-based image retrieval

    Full text link
    Feature aggregation is a critical technique in content-based image retrieval systems that employ multiple visual features to characterize image content. In this paper, the p-norm is introduced into feature aggregation, providing a framework that unifies various previous feature aggregation schemes, such as linear combination, Euclidean distance, Boolean logic and decision fusion, as special instances. Insights into how the various aggregation schemes work are discussed through the effects of the model parameters in the unified framework. Experiments show that performance varies across aggregation schemes, which necessitates a unified framework for optimizing retrieval performance according to individual queries and the user's query concept. Experimental results obtained with the IAPR TC-12 ImageCLEF2006 benchmark collection, which contains over 20,000 photographic images, are presented and discussed.
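
    The abstract does not give the aggregation formula; the sketch below only illustrates the general p-norm idea with hypothetical feature weights and distances. With p = 1 the rule behaves like a linear combination, with p = 2 like a Euclidean combination, and as p grows it approaches a max rule reminiscent of Boolean AND.

        import numpy as np

        def p_norm_aggregate(distances, weights=None, p=2.0):
            # Combine per-feature distances into a single dissimilarity score.
            # Illustrative only; the paper's exact parameterisation may differ.
            d = np.asarray(distances, dtype=float)
            w = np.ones_like(d) if weights is None else np.asarray(weights, dtype=float)
            return float(np.sum(w * d ** p) ** (1.0 / p))

        # Hypothetical colour, texture and shape distances for one candidate image
        for p in (1, 2, 10):
            # Larger p is increasingly dominated by the worst-matching feature
            print(p, p_norm_aggregate([0.2, 0.5, 0.1], p=p))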

    Digital Image Access & Retrieval

    Get PDF
    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March of 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Properties of series feature aggregation schemes

    Full text link
    Feature aggregation is a critical technique in content-based image retrieval (CBIR) that combines multiple feature distances to obtain image dissimilarity. Conventional parallel feature aggregation (PFA) schemes fail to effectively filter out irrelevant images using individual visual features before ranking the images in the collection. Series feature aggregation (SFA) is a new scheme that aims to address this problem. This paper investigates three important properties of SFA that are significant for system design. They reveal the irrelevance of feature order, the convertibility of SFA and PFA, and the superior performance of SFA. Furthermore, based on a Gaussian kernel density estimator, the authors propose a new method to estimate the visual threshold, which is the key parameter of SFA. Experiments conducted with the IAPR TC-12 benchmark image collection (ImageCLEF2006), which contains over 20,000 photographic images and defined queries, show that SFA can outperform conventional PFA schemes.
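
    As a rough, assumption-laden sketch of how a series filter and a density-based threshold could fit together (not the authors' estimator): each feature discards candidates whose distance exceeds its visual threshold, and the threshold is read off a kernel density estimate of the observed distances.

        import numpy as np
        from scipy.stats import gaussian_kde

        def estimate_threshold(distances, quantile=0.2):
            # Rough stand-in for a Gaussian-kernel-density-based threshold:
            # take the point below which `quantile` of the estimated density mass lies.
            kde = gaussian_kde(distances)
            grid = np.linspace(min(distances), max(distances), 512)
            cdf = np.cumsum(kde(grid))
            cdf /= cdf[-1]
            return grid[np.searchsorted(cdf, quantile)]

        def series_aggregate(feature_distances, thresholds):
            # feature_distances: (n_images, n_features) distances. Each feature acts
            # as a filter in turn; survivors are ranked by their summed distance.
            d = np.asarray(feature_distances, dtype=float)
            keep = np.arange(d.shape[0])
            for f, t in enumerate(thresholds):
                keep = keep[d[keep, f] <= t]
            return keep[np.argsort(d[keep].sum(axis=1))]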

    Constrained Querying of Multimedia Databases

    Get PDF
    DOI: http://dx.doi.org/10.1117/12.410976. This paper investigates the problem of high-level querying of multimedia data by imposing arbitrary domain-specific constraints among multimedia objects. We argue that the current structured query model and the query-by-content model are insufficient for many important applications, and we propose an alternative query framework that unifies and extends the previous two models. The proposed framework is based on the querying-by-concept paradigm, where the query is expressed simply in terms of concepts, regardless of the complexity of the underlying multimedia search engines. The query-by-concept paradigm was previously illustrated by the CAMEL system. The present paper builds upon and extends that work by adding arbitrary constraints and multiple levels of hierarchy in the concept representation model. We consider queries simply as descriptions of a virtual data set, which allows us to use the same unifying concept representation for query specification as well as for data annotation purposes. We also identify some key issues and challenges presented by the new framework, and we outline possible approaches for overcoming them. In particular, we study the problems of concept representation, extraction, refinement, storage, and matching.
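
    A toy data-structure sketch of the querying-by-concept idea follows; the class and field names are assumptions for illustration and do not reproduce the CAMEL system's actual concept representation. A concept carries sub-concepts (the hierarchy) and arbitrary domain-specific constraints, and a query is simply a concept description over a virtual data set.

        from dataclasses import dataclass, field
        from typing import Callable, List

        @dataclass
        class Concept:
            # A toy hierarchical concept node; constraints are arbitrary predicates
            # over a candidate multimedia object (here, any dict of extracted attributes).
            name: str
            children: List["Concept"] = field(default_factory=list)
            constraints: List[Callable[[dict], bool]] = field(default_factory=list)

            def matches(self, obj: dict) -> bool:
                # An object satisfies a concept if it meets every constraint and,
                # recursively, every sub-concept.
                return (all(c(obj) for c in self.constraints)
                        and all(ch.matches(obj) for ch in self.children))

        # A query is just a concept description (hypothetical example)
        sunset_beach = Concept(
            name="sunset on a beach",
            children=[Concept("sunset", constraints=[lambda o: o.get("sky_hue", 0) > 20]),
                      Concept("beach", constraints=[lambda o: "sand" in o.get("regions", [])])],
        )
        print(sunset_beach.matches({"sky_hue": 35, "regions": ["sand", "sea"]}))  # True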

    A new query dependent feature fusion approach for medical image retrieval based on one-class SVM

    Full text link
    With the development of the internet, medical images are now available in large numbers in online repositories, and there is a need to retrieve them in a content-based way by automatically extracting visual information from the images. Since a single feature extracted from an image characterizes only a certain aspect of its content, multiple features must be employed to improve retrieval performance. Furthermore, a given feature is not equally important for different image queries, since its importance in reflecting content varies from image to image. However, most existing feature fusion methods for image retrieval use only query-independent fusion or rely on explicit user weighting. In this paper, based on multiple query samples provided by the user, we present a novel query-dependent feature fusion method for medical image retrieval based on a one-class support vector machine. The proposed method learns a different feature fusion model for each image query, and the learned models reflect how the importance of a given feature varies across queries. The experimental results on the IRMA medical image collection demonstrate that the proposed method improves retrieval performance effectively and can outperform existing feature fusion methods for image retrieval.
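
    A minimal sketch of the general idea, assuming scikit-learn's OneClassSVM and a toy colour-histogram descriptor standing in for the real feature extractors: a one-class model is fitted to the query samples alone, so the learned boundary acts as a query-dependent fusion of the features.

        import numpy as np
        from sklearn.svm import OneClassSVM

        def extract_features(image):
            # Toy descriptor (coarse colour histogram over an HxWx3 array) standing
            # in for the multiple visual features used by the retrieval system.
            hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(4, 4, 4),
                                     range=((0, 256),) * 3)
            return hist.ravel() / hist.sum()

        def query_dependent_model(query_images, nu=0.1):
            # Fit a one-class SVM on the query examples only, so each query
            # gets its own fusion model.
            X = np.vstack([extract_features(img) for img in query_images])
            return OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(X)

        def rank_database(model, database_features):
            # Higher decision values mean more relevant to this particular query.
            scores = model.decision_function(np.asarray(database_features))
            return np.argsort(-scores)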

    Medical image retrieval with query-dependent feature fusion based on one-class SVM

    Get PDF
    Due to the huge growth of the World Wide Web, medical images are now available in large numbers in online repositories, and there is a need to retrieve them by automatically extracting their visual information, an approach commonly known as content-based image retrieval (CBIR). Since each feature extracted from an image characterizes only a certain aspect of its content, multiple features must be employed to improve retrieval performance. Meanwhile, experiments demonstrate that a given feature is not equally important for different image queries. Most existing feature fusion methods for image retrieval use only query-independent fusion or rely on explicit user weighting. In this paper, we present a novel query-dependent feature fusion method for medical image retrieval based on a one-class support vector machine. Considering that a given feature is not equally important for different queries, the proposed method learns a different fusion model for each image query based only on multiple image samples provided by the user, and the learned models reflect how the importance of a feature varies across queries. The experimental results on the IRMA medical image collection demonstrate that the proposed method improves retrieval performance effectively and can outperform existing feature fusion methods for image retrieval.

    Semantic multimedia modelling & interpretation for search & retrieval

    Get PDF
    The revolution in multimedia-capable devices has culminated in a proliferation of image and video data. Owing to this omnipresence, these data have become part of our daily life, and the rate at which they are produced now outstrips our capacity to absorb them; information overload is one of the most pressing problems of the digital era. Until now, progress in image and video retrieval research has achieved only limited success because it interprets images and videos in terms of primitive features, whereas humans generally access multimedia assets in terms of semantic concepts. The retrieval of digital images and videos is impeded by the semantic gap: the discrepancy between a user's high-level interpretation of an image and the information that can be extracted from the image's physical properties. Content-based image and video retrieval systems are particularly vulnerable to the semantic gap because they depend on low-level visual features to describe image and video content. The gap can be narrowed by including high-level features, since high-level descriptions are better at capturing the semantic meaning of image and video content. It is generally understood that the problem of image and video retrieval is still far from solved. This thesis proposes an approach to intelligent multimedia semantic extraction for search and retrieval, with the aim of bridging the gap between visual features and semantics. It proposes a Semantic Query Interpreter (SQI) for images and videos, which selects the pertinent terms from the user query and analyses them lexically and semantically, reducing both the semantic and the vocabulary gap between users and the machine. The thesis also explores a novel ranking strategy for image search and retrieval: SemRank incorporates Semantic Intensity (SI) to assess the semantic relevance between the user query and the available data. Semantic Intensity captures the concept dominance of an image; since an image is a combination of various concepts, some of which are more dominant than others, SemRank ranks the retrieved images on the basis of this measure. The investigations are carried out on the LabelMe image and LabelMe video datasets. Experiments show that the proposed approach is successful in bridging the semantic gap and that the proposed system outperforms traditional image retrieval systems.
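
    The abstract does not define Semantic Intensity formally; the toy sketch below only assumes that a concept's dominance can be approximated by its share of an image's annotations, and ranks images by that score in the spirit of SemRank.

        from collections import Counter

        def semantic_intensity(annotations, concept):
            # Assumed toy measure: the fraction of an image's annotated concepts
            # accounted for by `concept`; the thesis defines its own SI measure.
            counts = Counter(annotations)
            total = sum(counts.values())
            return counts[concept] / total if total else 0.0

        def sem_rank(images, concept):
            # `images` maps an image id to its list of annotated concept labels;
            # rank by descending Semantic Intensity for the query concept.
            return sorted(images, key=lambda im: semantic_intensity(images[im], concept),
                          reverse=True)

        # Hypothetical LabelMe-style annotations
        images = {"a.jpg": ["car", "car", "road", "tree"],
                  "b.jpg": ["car", "building", "person", "sky"]}
        print(sem_rank(images, "car"))  # ['a.jpg', 'b.jpg']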

    Content-based image retrieval using unclean positive examples

    Get PDF
    Conventional content-based image retrieval (CBIR) schemes employing relevance feedback may suffer from several problems in practical applications. First, most ordinary users would like to complete their search in a single interaction, especially on the web. Second, it is time-consuming and difficult to label a large number of negative examples with sufficient variety. Third, ordinary users may introduce noisy examples into the query. This correspondence explores solutions to a new issue: image retrieval using unclean positive examples. In the proposed scheme, multiple feature distances are combined to obtain image similarity using classification technology. To handle the noisy positive examples, a new two-step strategy is proposed that incorporates data cleaning and a noise-tolerant classifier. Extensive experiments carried out on two different real image collections validate the effectiveness of the proposed scheme.
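
    The two-step strategy is described only at a high level; the sketch below substitutes a simple z-score outlier filter for the data-cleaning step and a median-based score for the noise-tolerant classifier, purely to illustrate the shape of the pipeline.

        import numpy as np

        def clean_positives(examples, z_max=2.0):
            # Step 1 (data cleaning), illustrative only: drop query examples whose
            # mean distance to the other examples is an outlier (z-score above z_max).
            X = np.asarray(examples, dtype=float)
            pairwise = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            mean_d = pairwise.sum(axis=1) / (len(X) - 1)
            z = (mean_d - mean_d.mean()) / (mean_d.std() + 1e-12)
            return X[z <= z_max]

        def similarity_to_query(cleaned, candidate):
            # Step 2, a noise-tolerant stand-in for the classifier: score by the
            # median (not mean) distance so a leftover noisy example cannot dominate.
            dists = np.linalg.norm(np.asarray(cleaned) - np.asarray(candidate), axis=1)
            return -float(np.median(dists))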