
    Enhancing the performance of multi-modality ontology semantic image retrieval using object properties filter

    Semantic technology such as ontology offers a possible approach to narrowing the semantic gap in image retrieval between low-level visual features and high-level human semantics. The semantic gap occurs when there is a disagreement between the information extracted from visual data and its text description. In this paper, we applied an ontology to bridge the semantic gap by developing a prototype multi-modality ontology image retrieval system whose retrieval mechanism is enhanced with an object properties filter. The results demonstrated that, based on precision measurement, the proposed approach delivered better results than the approach without the object properties filter.

    An object properties filter for multi-modality ontology semantic image retrieval

    Ontology is a semantic technology that offers a possible approach to bridging the semantic gap in image retrieval between low-level visual features and high-level human semantics. The semantic gap occurs when there is a discrepancy between the information extracted from visual data and its text description; in other words, there is a difference between the computational representation in the machine and human natural language. In this paper, an ontology has been utilized to reduce the semantic gap by developing a multi-modality ontology image retrieval system whose retrieval mechanism is enhanced with an object properties filter. To achieve this, a multi-modality ontology semantic image framework was proposed, comprising four main components: resource identification, information extraction, knowledge-based construction and a retrieval mechanism. A new approach, namely the object properties filter, is proposed by customizing the semantic image retrieval algorithm and the graphical user interface so that the user can engage with the machine in order to enhance retrieval performance. The experimental results showed that, based on precision measurement, the proposed approach delivered better results than the approach that did not use the object properties filter.
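    A minimal sketch of how an object-properties filter of this kind could be expressed as an extra SPARQL restriction over the ontology, using Python and rdflib. The class and property names (ex:Image, ex:depicts, ex:hasHabitat, ex:commonName) and the ontology file name are illustrative assumptions, not the vocabulary used in the paper.

```python
# Minimal sketch: filtering semantic image retrieval results by an object
# property, using rdflib. All class/property names (ex:Image, ex:depicts,
# ex:hasHabitat) are illustrative assumptions, not taken from the paper.
from rdflib import Graph

g = Graph()
g.parse("herbal_plants.owl")  # hypothetical multi-modality ontology file

# Without the filter: every image that depicts the queried plant.
BASE_QUERY = """
PREFIX ex: <http://example.org/ontology#>
SELECT ?image WHERE {
    ?image a ex:Image ;
           ex:depicts ?plant .
    ?plant ex:commonName ?name .
    FILTER regex(?name, "ginger", "i")
}
"""

# With the object-properties filter: the user additionally constrains an
# object property of the depicted concept (here, its habitat).
FILTERED_QUERY = """
PREFIX ex: <http://example.org/ontology#>
SELECT ?image WHERE {
    ?image a ex:Image ;
           ex:depicts ?plant .
    ?plant ex:commonName ?name ;
           ex:hasHabitat ex:Tropical .
    FILTER regex(?name, "ginger", "i")
}
"""

for row in g.query(FILTERED_QUERY):
    print(row.image)
```

    The filtered query adds only one extra triple pattern on an object property of the depicted concept, which is how a user-selected filter can tighten the result set without changing the rest of the retrieval mechanism.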

    Multi-modality ontology semantic image retrieval with user interaction model / Mohd Suffian Sulaiman

    Interest in the production and potential of digital images has increased greatly in the past decade. The extensive use of digital technologies produces millions of digital images daily. However, these technological capabilities also make it difficult and challenging for users to retrieve or search visual information, especially in large and varied collections. The time-consuming nature of image tagging, its susceptibility to individual interpretation, and the inability of a computer to grasp the high-level human understanding of an image mean that earlier approaches have been unable to provide an effective solution to this problem. In addressing this problem, this research explores techniques that combine textual descriptions with visual features to form a multi-modality ontology. This semantic technology is chosen for its ability to mine, interpret and organise knowledge. An ontology can be seen as a knowledge base that can be used to improve the image retrieval process, with the aim of reducing the semantic gap between visual features and high-level semantics. To achieve this aim, a multi-modality ontology semantic image retrieval model is proposed. Four main components, namely resource identification, information extraction, knowledge-based construction and an image retrieval mechanism, need to be implemented in this model. To enhance retrieval performance, the ontology is combined with user interaction by exploiting the ontology relationships, an approach adapted in part from the relevance feedback concept. To realise this approach, a semantic image retrieval prototype is developed from an existing foundation algorithm and customised to allow user engagement in order to enhance retrieval performance. Before the retrieval performance can be measured, the ontology must be evaluated: the correctness of the ontology content with respect to the referenced corpus is important to ensure the reliability of the proposed approach. Twenty sample natural language queries are used to test retrieval performance through automatic generation of SPARQL queries that access the metadata in the ontology. A graphical user interface is designed to display the image retrieval results. Based on the results, retrieval performance is measured quantitatively using precision, recall, accuracy and F-measure. Experiments show that the proposed model achieves an average accuracy of 0.977, precision of 0.797, recall of 1.000 and F-measure of 0.887, compared with text-based image retrieval (accuracy 0.666, precision 0.160, recall 0.950, F-measure 0.275), textual ontology (0.937, 0.395, 0.900, 0.549), visual ontology (0.984, 0.229, 0.300, 0.260) and multi-modality ontology (0.920, 0.398, 1.000, 0.569). In conclusion, the proposed model demonstrated better performance, reducing the semantic gap, enhancing semantic image retrieval performance and providing an easy way for users to retrieve herbal medicinal plant images.
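    The evaluation measures quoted above are the standard set-based ones; the sketch below shows how precision, recall, accuracy and F-measure can be computed from a retrieved set and a relevant set. The example identifiers and collection size are made up for illustration, not taken from the thesis data.

```python
# Minimal sketch of the evaluation measures reported above (precision,
# recall, accuracy, F-measure) computed from sets of image identifiers.
# The example sets and collection size are illustrative only.

def retrieval_metrics(retrieved, relevant, collection_size):
    """Standard set-based retrieval metrics."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)          # relevant images retrieved
    fp = len(retrieved - relevant)          # irrelevant images retrieved
    fn = len(relevant - retrieved)          # relevant images missed
    tn = collection_size - tp - fp - fn     # irrelevant images not retrieved

    precision = tp / (tp + fp) if retrieved else 0.0
    recall = tp / (tp + fn) if relevant else 0.0
    accuracy = (tp + tn) / collection_size
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, accuracy, f_measure


# Hypothetical query result over a 100-image collection.
print(retrieval_metrics(retrieved=["img1", "img2", "img3"],
                        relevant=["img1", "img2"],
                        collection_size=100))
```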

    Multi modal multi-semantic image retrieval

    The rapid growth in the volume of visual information, e.g. images and video, can overwhelm users' ability to find and access the specific visual information of interest to them. In recent years, ontology knowledge-based (KB) image information retrieval techniques have been adopted in order to extract knowledge from these images and enhance retrieval performance. A KB framework is presented to promote semi-automatic annotation and semantic image retrieval using multimodal cues (visual features and text captions). In addition, a hierarchical structure for the KB allows metadata to be shared, which supports multiple semantics (polysemy) for concepts. The framework builds up an effective knowledge base pertaining to a domain-specific image collection, e.g. sports, and is able to disambiguate and assign high-level semantics to 'unannotated' images. Local feature analysis of visual content, namely Scale Invariant Feature Transform (SIFT) descriptors, has been deployed in the 'Bag of Visual Words' (BVW) model as an effective method to represent visual content information and to enhance its classification and retrieval. Local features are more useful than global features, e.g. colour, shape or texture, as they are invariant to image scale, orientation and camera angle. An innovative approach is proposed for the representation, annotation and retrieval of visual content using a hybrid technique based upon an unstructured visual word model and a (structured) hierarchical ontology KB model. The structural model facilitates the disambiguation of unstructured visual words and a more effective classification of visual content than a vector space model, by exploiting local conceptual structures and their relationships. The key contributions of this framework in using local features for image representation are as follows. First, a method to generate visual words using the semantic local adaptive clustering (SLAC) algorithm, which takes term weights and the spatial locations of keypoints into account; consequently, the semantic information is preserved. Second, a technique to detect the domain-specific 'non-informative visual words' which are ineffective at representing the content of visual data and degrade its categorisation ability. Third, a method to combine an ontology model with a visual word model to resolve synonym (visual heterogeneity) and polysemy problems. The experimental results show that this approach can discover semantically meaningful visual content descriptions and recognise specific events, e.g. sports events, depicted in images efficiently. Since discovering the semantics of an image is an extremely challenging problem, one promising approach to enhance visual content interpretation is to use any associated textual information that accompanies an image as a cue to predict its meaning, by transforming this textual information into a structured annotation, e.g. using XML, RDF, OWL or MPEG-7. Although text and images are distinct types of information representation and modality, there are some strong, invariant, implicit connections between images and any accompanying text. Semantic analysis of image captions can be used by image retrieval systems to retrieve selected images more precisely. To do this, Natural Language Processing (NLP) is first exploited to extract concepts from image captions.
Next, an ontology-based knowledge model is deployed in order to resolve natural language ambiguities. To deal with the accompanying text information, two methods to extract knowledge from textual information are proposed. First, metadata can be extracted automatically from text captions and restructured with respect to a semantic model. Second, the use of LSI in relation to a domain-specific ontology-based knowledge model enables the combined framework to tolerate ambiguities and variations (incompleteness) in the metadata. The use of the ontology-based knowledge model allows the system to find indirectly relevant concepts in image captions and thus leverage these to represent the semantics of images at a higher level. Experimental results show that the proposed framework significantly enhances image retrieval and narrows the semantic gap between lower-level machine-derived and higher-level human-understandable conceptualisations.
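    As a point of reference for the representation discussed above, the sketch below builds a plain bag-of-visual-words descriptor from SIFT keypoints with ordinary k-means, using OpenCV and scikit-learn. It is only the baseline representation; it does not implement the SLAC clustering, term weighting or non-informative-word detection proposed in the thesis, and the vocabulary size is an arbitrary choice.

```python
# Minimal sketch of a plain bag-of-visual-words representation built from
# SIFT descriptors. This illustrates the baseline representation discussed
# above; it does NOT implement the thesis's SLAC clustering or the
# non-informative visual-word detection.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(image_paths):
    sift = cv2.SIFT_create()
    per_image = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        per_image.append(desc if desc is not None else np.empty((0, 128)))
    return per_image

def bag_of_visual_words(image_paths, vocab_size=200):
    per_image = sift_descriptors(image_paths)
    all_desc = np.vstack([d for d in per_image if len(d)])
    # The codebook of visual words is learnt by clustering all descriptors.
    codebook = KMeans(n_clusters=vocab_size, n_init=10, random_state=0).fit(all_desc)
    # Each image becomes a normalised histogram over the visual vocabulary.
    histograms = []
    for desc in per_image:
        words = codebook.predict(desc) if len(desc) else np.array([], dtype=int)
        hist = np.bincount(words, minlength=vocab_size).astype(float)
        histograms.append(hist / hist.sum() if hist.sum() else hist)
    return np.array(histograms), codebook
```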

    Learning Multimodal Latent Attributes

    The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity by transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multi-modal content and their complex and unstructured nature relative to the density of annotations. To solve this problem, we (1) introduce the concept of a semi-latent attribute space, expressing user-defined and latent attributes in a unified framework, and (2) propose a novel scalable probabilistic topic model for learning multi-modal semi-latent attributes, which dramatically reduces the requirement for an exhaustive, accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches on a variety of realistic multimedia sparse-data learning tasks, including multi-task learning, learning with label noise, N-shot transfer learning and, importantly, zero-shot learning.
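    The zero-shot setting mentioned above can be illustrated with the standard attribute-signature idea: an unseen class is recognised by matching predicted attribute scores against a per-class attribute signature. The sketch below is a generic illustration of that idea with made-up attributes and classes; it is not the semi-latent topic model proposed in the paper.

```python
# Generic illustration of attribute-based zero-shot classification: an unseen
# class is recognised by matching predicted attribute scores against the
# class's attribute signature. Classes and attributes are made up.
import numpy as np

# Hypothetical user-defined attribute signatures for unseen classes
# (columns: "outdoor", "music", "crowd").
class_signatures = {
    "wedding":    np.array([0.0, 1.0, 1.0]),
    "graduation": np.array([1.0, 0.0, 1.0]),
    "meeting":    np.array([0.0, 0.0, 1.0]),
}

def zero_shot_predict(attribute_scores):
    """Pick the unseen class whose signature has the highest cosine
    similarity to the predicted per-attribute scores."""
    best, best_sim = None, -np.inf
    for label, sig in class_signatures.items():
        sim = attribute_scores @ sig / (
            np.linalg.norm(attribute_scores) * np.linalg.norm(sig) + 1e-9)
        if sim > best_sim:
            best, best_sim = label, sim
    return best

print(zero_shot_predict(np.array([0.1, 0.2, 0.9])))  # -> "meeting"
```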

    CROEQS: Contemporaneous Role Ontology-based Expanded Query Search: implementation and evaluation

    Searching annotated items in multimedia databases is becoming increasingly important. The traditional approach is to build a search engine based on textual metadata. However, in manually annotated multimedia databases, the conceptual level of what is searched for might differ from the high conceptual level of the annotations on the items. To address this problem, we present CROEQS, a semantically enhanced search engine. It allows the user to query annotated persons not only by their name, but also by the roles they held at the time the multimedia item was broadcast. We also present the ontology used to expand such queries: it allows us to semantically represent the domain knowledge on people fulfilling a role during a temporal interval in general, and on politicians holding a political office in particular. The evaluation results show that query expansion using data retrieved from an ontology considerably filters the result set, although at a performance penalty.
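    The role-over-a-temporal-interval expansion described above can be sketched as a SPARQL query that, given a broadcast date, resolves an office to the person who held it at that time. The vocabulary (ex:holdsOffice, ex:office, ex:start, ex:end, ex:name) and the data file are illustrative assumptions, not the actual CROEQS ontology.

```python
# Sketch of the kind of query expansion described above: given a broadcast
# date, find people who held a given political office at that time, so a
# query such as "minister of finance" can be expanded with the person's name.
# The vocabulary is an illustrative assumption, not the CROEQS ontology.
from rdflib import Graph

g = Graph()
g.parse("roles.ttl", format="turtle")  # hypothetical role ontology

EXPANSION_QUERY = """
PREFIX ex: <http://example.org/roles#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?personName WHERE {
    ?person ex:holdsOffice ?tenure ;
            ex:name ?personName .
    ?tenure ex:office ex:MinisterOfFinance ;
            ex:start ?start ;
            ex:end ?end .
    FILTER (?start <= "2005-06-01"^^xsd:date && ?end >= "2005-06-01"^^xsd:date)
}
"""

for row in g.query(EXPANSION_QUERY):
    print(row.personName)  # names to add to the expanded query
```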

    Objects that Sound

    In this paper our objectives are, first, networks that can embed audio and visual inputs into a common space that is suitable for cross-modal retrieval; and second, a network that can localize the object that sounds in an image, given the audio signal. We achieve both these objectives by training from unlabelled video using only audio-visual correspondence (AVC) as the objective function. This is a form of cross-modal self-supervision from video. To this end, we design new network architectures that can be trained for cross-modal retrieval and for localizing the sound source in an image, by using the AVC task. We make the following contributions: (i) we show that audio and visual embeddings can be learnt that enable both within-mode (e.g. audio-to-audio) and between-mode retrieval; (ii) we explore various architectures for the AVC task, including those for the visual stream that ingest a single image, multiple images, or a single image and multi-frame optical flow; (iii) we show that the semantic object that sounds within an image can be localized (using only the sound, no motion or flow information); and (iv) we give a cautionary tale on how to avoid undesirable shortcuts in the data preparation. Comment: Appears in: European Conference on Computer Vision (ECCV) 201
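    A toy sketch of the audio-visual correspondence (AVC) objective described above: two small sub-networks embed an image and an audio spectrogram into a shared space, and a classifier on their distance predicts whether the pair comes from the same video. The tiny networks, input sizes and training loop below are stand-ins for illustration, not the architectures used in the paper.

```python
# Toy sketch of the audio-visual correspondence (AVC) objective: embed image
# and audio into a shared space and classify whether the pair corresponds
# (comes from the same video). The small networks are illustrative stand-ins.
import torch
import torch.nn as nn

class AVCNet(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.vision = nn.Sequential(  # expects 3x64x64 image crops
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.audio = nn.Sequential(   # expects 1xFxT log-spectrograms
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.classifier = nn.Linear(1, 2)  # correspond / don't correspond

    def forward(self, image, spectrogram):
        v = nn.functional.normalize(self.vision(image), dim=1)
        a = nn.functional.normalize(self.audio(spectrogram), dim=1)
        dist = (v - a).norm(dim=1, keepdim=True)  # distance in shared space
        return self.classifier(dist)              # logits over {match, mismatch}

# One training step on a toy batch of (image, audio, corresponds?) triples.
model = AVCNet()
loss_fn = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 64, 64)
spectrograms = torch.randn(8, 1, 64, 100)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images, spectrograms), labels)
loss.backward()
```

    Because the classifier sees only the distance between the two embeddings, training with corresponding and non-corresponding pairs forces the audio and visual embeddings into a shared space usable for cross-modal retrieval.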

    An information assistant system for the prevention of tunnel vision in crisis management

    In the crisis management environment, tunnel vision is a set of biases in decision makers' cognitive processes which often leads to an incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence of both the challenges of the task and the natural limitations of human cognition. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the ongoing crisis event: all information goes through the system before it arrives at the user. The system enhances data quality, reduces data quantity and presents the crisis information in a manner that prevents or relieves the user's cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, stay open-minded to possibilities, and make proper decisions.

    Overview of the 2005 cross-language image retrieval track (ImageCLEF)

    The purpose of this paper is to outline efforts from the 2005 CLEF cross-language image retrieval campaign (ImageCLEF). The aim of this CLEF track is to explore the use of both text-based and content-based retrieval methods for cross-language image retrieval. Four tasks were offered in the ImageCLEF track: ad-hoc retrieval from a historic photographic collection, ad-hoc retrieval from a medical collection, an automatic image annotation task, and a user-centered (interactive) evaluation task that is explained in the iCLEF summary. Twenty-four research groups from a variety of backgrounds and nationalities (14 countries) participated in ImageCLEF. In this paper we describe the ImageCLEF tasks, the submissions from participating groups, and summarise the main findings.