
    Towards Visual Vocabulary and Ontology-based Image Retrieval System

    Several approaches have been introduced in the image retrieval field. However, many limitations, such as the semantic gap, still exist. Motivated by the goal of improving image retrieval accuracy, this paper presents an image retrieval system based on a visual vocabulary and an ontology. For every query image, we propose to build a visual vocabulary and an ontology based on image annotations. The retrieval process is performed by integrating both visual and semantic features and similarities.
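    A minimal sketch of what "integrating both visual and semantic features and similarities" could look like as a late-fusion ranking score. The weight `alpha` and both similarity functions are illustrative assumptions; the abstract does not specify the fusion rule.

```python
# Hedged sketch: late fusion of a visual similarity and a semantic
# (ontology-based) similarity into one ranking score. Both similarity
# functions and the weight alpha are assumptions for illustration.
from typing import Callable

def combined_score(
    query_img,
    candidate_img,
    visual_sim: Callable,    # e.g. cosine similarity of feature vectors
    semantic_sim: Callable,  # e.g. concept similarity from the ontology
    alpha: float = 0.5,      # fusion weight (assumed, would be tuned)
) -> float:
    """Fuse visual and semantic similarities into one ranking score."""
    return (alpha * visual_sim(query_img, candidate_img)
            + (1 - alpha) * semantic_sim(query_img, candidate_img))
```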

    Multi modal multi-semantic image retrieval

    The rapid growth in the volume of visual information, e.g. images and video, can overwhelm users' ability to find and access the specific visual information of interest to them. In recent years, ontology knowledge-based (KB) image information retrieval techniques have been adopted in order to extract knowledge from these images and enhance retrieval performance. A KB framework is presented to promote semi-automatic annotation and semantic image retrieval using multimodal cues (visual features and text captions). In addition, a hierarchical structure for the KB allows metadata to be shared and supports multiple semantics (polysemy) for concepts. The framework builds up an effective knowledge base pertaining to a domain-specific image collection, e.g. sports, and is able to disambiguate and assign high-level semantics to 'unannotated' images.

    Local feature analysis of visual content, namely Scale Invariant Feature Transform (SIFT) descriptors, has been deployed in the 'Bag of Visual Words' (BVW) model as an effective method to represent visual content information and to enhance its classification and retrieval. Local features are more useful than global features, e.g. colour, shape or texture, as they are invariant to image scale, orientation and camera angle. An innovative approach is proposed for the representation, annotation and retrieval of visual content using a hybrid technique based upon an unstructured visual-word model and a (structured) hierarchical ontology KB model. The structural model facilitates the disambiguation of unstructured visual words and a more effective classification of visual content, compared to a vector space model, by exploiting local conceptual structures and their relationships.

    The key contributions of this framework in using local features for image representation are: first, a method to generate visual words using the semantic local adaptive clustering (SLAC) algorithm, which takes term weight and the spatial locations of keypoints into account, so that semantic information is preserved; second, a technique to detect the domain-specific 'non-informative visual words' which are ineffective at representing the content of visual data and degrade its categorisation ability; third, a method to combine an ontology model with a visual-word model to resolve synonym (visual heterogeneity) and polysemy problems. The experimental results show that this approach can discover semantically meaningful visual content descriptions and efficiently recognise specific events, e.g. sports events, depicted in images.

    Since discovering the semantics of an image is an extremely challenging problem, one promising approach to enhance visual content interpretation is to use any associated textual information that accompanies an image as a cue to predict its meaning, by transforming this textual information into a structured annotation for the image, e.g. using XML, RDF, OWL or MPEG-7. Although text and image are distinct types of information representation and modality, there are strong, invariant, implicit connections between images and any accompanying text. Semantic analysis of image captions can be used by image retrieval systems to retrieve selected images more precisely. To do this, Natural Language Processing (NLP) is exploited first in order to extract concepts from image captions. Next, an ontology-based knowledge model is deployed in order to resolve natural language ambiguities. To deal with the accompanying text, two methods to extract knowledge from textual information have been proposed. First, metadata can be extracted automatically from text captions and restructured with respect to a semantic model. Second, the use of LSI in relation to a domain-specific ontology-based knowledge model enables the combined framework to tolerate ambiguities and variations (incompleteness) of metadata. The ontology-based knowledge model allows the system to find indirectly relevant concepts in image captions and thus leverage these to represent the semantics of images at a higher level. Experimental results show that the proposed framework significantly enhances image retrieval and narrows the semantic gap between lower-level machine-derived and higher-level human-understandable conceptualisation.
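    The 'Bag of Visual Words' pipeline the thesis builds on follows a standard recipe: extract SIFT descriptors, cluster them into a visual vocabulary, and represent each image as a histogram over the resulting visual words. A minimal sketch using OpenCV and scikit-learn is shown below; it implements the generic BVW recipe, not the thesis's SLAC variant, which additionally weights terms and keypoint locations.

```python
# Generic bag-of-visual-words pipeline (not the thesis's SLAC variant):
# SIFT keypoints are clustered into a visual vocabulary, and each image
# becomes a normalised histogram over those visual words.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(image_paths, n_words=500):
    """Cluster SIFT descriptors from a corpus into n_words visual words."""
    sift = cv2.SIFT_create()
    descriptors = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:
            descriptors.append(desc)
    all_desc = np.vstack(descriptors)
    return KMeans(n_clusters=n_words, n_init=10).fit(all_desc)

def bvw_histogram(image_path, vocab):
    """Represent one image as a normalised visual-word histogram."""
    sift = cv2.SIFT_create()
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    words = vocab.predict(desc)  # assign each descriptor to a visual word
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    return hist / hist.sum()
```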

    Enhancing the performance of multi-modality ontology semantic image retrieval using object properties filter

    Semantic technology such as ontology provides a possible approach to narrowing the semantic gap in image retrieval between low-level visual features and high-level human semantics. The semantic gap occurs when there is a disagreement between the information extracted from visual data and the text description. In this paper, we applied ontology to bridge the semantic gap by developing a prototype multi-modality ontology image retrieval system, enhancing the retrieval mechanism with an object properties filter. The results demonstrated that, based on precision measurement, our proposed approach delivered better results than the approach without the object properties filter.
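    As an illustration of what an object-properties filter can look like at the query level, the sketch below restricts a SPARQL query over a hypothetical image ontology so that only images whose depicted object stands in a given object-property relation are returned. The namespace, class and property names (`ex:depicts`, `ex:hasColour`, etc.) are placeholders, not the paper's actual ontology.

```python
# Hedged sketch of filtering ontology-based retrieval on object
# properties, using rdflib. All names below are hypothetical.
from rdflib import Graph

g = Graph()
g.parse("images.owl", format="xml")  # hypothetical annotated-image ontology

# Retrieve only images whose depicted object satisfies a specific
# object-property constraint (here: a Flower that hasColour Yellow).
query = """
PREFIX ex: <http://example.org/imageont#>
SELECT ?image WHERE {
    ?image  ex:depicts   ?object .
    ?object a            ex:Flower ;
            ex:hasColour ex:Yellow .
}
"""
for row in g.query(query):
    print(row.image)
```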

    A practical image retrieval framework for tourism industry

    Image Retrieval (IR) is one of the most exciting and fastest-growing research domains in the field of multimedia technology. In industrial ecosystems, images of products, activities, marketing materials, etc. need to be managed and fetched efficiently to support and facilitate business processes. Current techniques for IR, including keyword-based, content-based and ontology-based image retrieval, have several unsolved issues. We promote the ontology-based IR approach and focus on two of them: first, the difficulty of constructing image ontologies for industries without ontology professionals, and second, that none of the existing approaches consider image content ranking in search results. In this paper, we propose a practical framework to tackle these issues by introducing an Abstract Image Ontology that serves as a blueprint for image ontologies and by incorporating a Concept Instance Ranking Scheme that allows ranking of each of the contents expressed in images, thus providing extra information for the IR process. An application scenario in the tourism industry is also presented.
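    As a rough illustration of the idea behind ranking concept instances, the sketch below scores each image by the salience of the query concept's instances within it. The salience measure (fraction of image area) and the scoring rule are assumptions made for illustration; the paper's Concept Instance Ranking Scheme may weight instances differently.

```python
# Hedged sketch: rank images by how prominently they express a query
# concept, given per-image concept instances with salience scores.
# The salience definition and max-scoring rule are assumptions.
from dataclasses import dataclass

@dataclass
class ConceptInstance:
    concept: str      # e.g. "beach", "hotel"
    salience: float   # e.g. fraction of the image the instance occupies

def rank_images(query_concept: str, annotations: dict) -> list:
    """annotations: image_id -> list[ConceptInstance]; returns ranked ids."""
    scores = {
        img: max((ci.salience for ci in instances
                  if ci.concept == query_concept), default=0.0)
        for img, instances in annotations.items()
    }
    return sorted((img for img in scores if scores[img] > 0),
                  key=scores.get, reverse=True)
```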

    Utilising semantic technologies for intelligent indexing and retrieval of digital images

    The proliferation of digital media has led to huge interest in classifying and indexing media objects for generic search and usage. In particular, we are witnessing colossal growth in digital image repositories that are difficult to navigate using free-text search mechanisms, which often return inaccurate matches as they rely, in principle, on statistical analysis of query keyword recurrence in the image annotation or surrounding text. In this paper we present a semantically-enabled image annotation and retrieval engine that is designed to satisfy the requirements of the commercial image collections market in terms of both accuracy and efficiency of the retrieval process. Our search engine relies on methodically structured ontologies for image annotation, thus allowing for more intelligent reasoning about the image content and subsequently obtaining a more accurate set of results and a richer set of alternatives matching the original query. We also show how our well-analysed and designed domain ontology contributes to the implicit expansion of user queries, as well as the exploitation of lexical databases for explicit semantic-based query expansion.
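    Explicit semantic-based query expansion via a lexical database can be sketched with WordNet through NLTK, as below. This shows only the lexical side; the engine's ontology-driven implicit expansion is not reproduced here.

```python
# Minimal sketch of query expansion via a lexical database (WordNet).
# Requires a one-time: import nltk; nltk.download('wordnet')
from nltk.corpus import wordnet

def expand_query(term: str) -> set:
    """Return the term plus its WordNet synonyms."""
    expanded = {term}
    for synset in wordnet.synsets(term):
        for lemma in synset.lemmas():
            expanded.add(lemma.name().replace("_", " "))
    return expanded

# e.g. expand_query("car") -> {'car', 'auto', 'automobile', 'machine', ...}
```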

    Multi-modality ontology semantic image retrieval with user interaction model / Mohd Suffian Sulaiman

    Interest in the production and potential of digital images has increased greatly in the past decade. The extensive use of digital technologies produces millions of digital images daily. However, retrieving or searching for visual information remains difficult and challenging for users, especially in large and varied collections. The time consumed in tagging images, the subjectivity of individual interpretation, and the inability of computers to grasp the high-level human understanding of an image mean that earlier approaches cannot provide an effective solution to this problem. To address it, this research explores techniques for combining textual descriptions with visual features to form a multi-modality ontology. This semantic technology is chosen for its ability to mine, interpret and organise knowledge: an ontology can be seen as a knowledge base that improves the image retrieval process, with the aim of reducing the semantic gap between visual features and high-level semantics. To achieve this aim, a multi-modality ontology semantic image retrieval model is proposed, with four main components: resource identification, information extraction, knowledge-base construction and the image retrieval mechanism. To enhance retrieval performance, the ontology is combined with user interaction by exploiting ontology relationships, an approach adapted from part of the relevance-feedback concept. To realise it, a semantic image retrieval prototype is developed from an existing foundation algorithm and customised to allow user engagement. Before retrieval performance is measured, the ontology itself is evaluated: the correctness of the ontology content against the referred corpus is important for establishing the reliability of the proposed approach. Twenty sample natural language queries are then used to test retrieval performance by automatically generating SPARQL queries that access the metadata in the ontology, and a graphical user interface displays the results. Retrieval performance is measured quantitatively using precision, recall, accuracy and F-measure, yielding the following scores:

    Approach                     Accuracy  Precision  Recall  F-measure
    Proposed model               0.977     0.797      1.000   0.887
    Text-based image retrieval   0.666     0.160      0.950   0.275
    Textual ontology             0.937     0.395      0.900   0.549
    Visual ontology              0.984     0.229      0.300   0.260
    Multi-modality ontology      0.920     0.398      1.000   0.569

    In conclusion, the proposed model demonstrated better performance, reducing the semantic gap, enhancing semantic image retrieval performance and providing an easy way for users to retrieve herbal medicinal plant images.
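    As a sanity check on the reported numbers, the standard F-measure F = 2PR / (P + R) reproduces the reported F-scores from the corresponding precision and recall values:

```python
# Verify the reported F-measures from precision and recall using the
# standard harmonic-mean definition F = 2PR / (P + R).
def f_measure(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(round(f_measure(0.797, 1.000), 3))  # 0.887  (proposed model, as reported)
print(round(f_measure(0.160, 0.950), 3))  # 0.274 ~ 0.275 (text-based, rounding)
print(round(f_measure(0.395, 0.900), 3))  # 0.549  (textual ontology)
print(round(f_measure(0.229, 0.300), 3))  # 0.260  (visual ontology)
print(round(f_measure(0.398, 1.000), 3))  # 0.569  (multi-modality ontology)
```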

    How a General-Purpose Commonsense Ontology can Improve Performance of Learning-Based Image Retrieval

    The knowledge representation community has built general-purpose ontologies which contain large amounts of commonsense knowledge about relevant aspects of the world, including useful visual information, e.g.: "a ball is used by a football player", "a tennis player is located at a tennis court". Current state-of-the-art approaches for visual recognition do not exploit these rule-based knowledge sources. Instead, they learn recognition models directly from training examples. In this paper, we study how general-purpose ontologies, specifically MIT's ConceptNet ontology, can improve the performance of state-of-the-art vision systems. As a testbed, we tackle the problem of sentence-based image retrieval. Our retrieval approach incorporates knowledge from ConceptNet on top of a large pool of object detectors derived from a deep learning technique. In our experiments, we show that ConceptNet can improve performance on a common benchmark dataset. Key to our performance is the use of the ESPGAME dataset to select visually relevant relations from ConceptNet. Consequently, a main conclusion of this work is that general-purpose commonsense ontologies improve performance on visual reasoning tasks when properly filtered to select meaningful visual relations.

    Comment: Accepted in IJCAI-1
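    A heavily simplified sketch of the underlying idea: boost a candidate image's retrieval score when a pre-filtered, visually relevant ConceptNet relation links a query concept to an object detected in the image. The relation set and the multiplicative boost rule below are illustrative assumptions, not the paper's exact scoring.

```python
# Hedged sketch: re-score retrieval candidates using visually relevant
# ConceptNet-style relations. The relation table and the 0.1-per-hit
# boost are assumptions for illustration only.
VISUAL_RELATIONS = {  # e.g. relations kept after visual-relevance filtering
    ("football player", "UsedFor", "ball"),
    ("tennis player", "AtLocation", "tennis court"),
}

def conceptnet_boost(query_concepts: set, detected_objects: set) -> float:
    """Count relation hits between query concepts and detected objects."""
    hits = sum(
        1
        for (head, _rel, tail) in VISUAL_RELATIONS
        if head in query_concepts and tail in detected_objects
    )
    return 1.0 + 0.1 * hits  # multiplicative boost on the detector score
```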