10 research outputs found

    Enhanced image annotations based on spatial information extraction and ontologies

    Current research on image annotation often represents images in terms of labelled regions or objects, but pays little attention to the spatial positions of, or relationships between, those regions or objects. To be effective, general-purpose image retrieval systems require images with comprehensive annotations that fully describe the content of the image. Much research is being done on automatic image annotation schemes, but few authors address the issue of spatial annotations directly. This paper begins with a brief analysis of real picture queries submitted to librarians, showing how spatial terms are used to formulate queries. The paper then describes the development of an enhanced automatic image annotation system that extracts spatial information about objects in the image. The approach uses region boundaries and region labels to generate annotations describing absolute object positions and relative positions between pairs of objects. A domain ontology and a spatial information ontology are also used to extract more complex information about the relative closeness of objects to the viewer.
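    To make the idea concrete: relative relations between a pair of labelled regions can be read off their bounding boxes. The sketch below is illustrative only, not the authors' system; the region data is invented, and the lower-in-frame depth heuristic merely stands in for the paper's ontology-based reasoning about closeness to the viewer.

```python
from dataclasses import dataclass

@dataclass
class Region:
    label: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float  # image coordinates, y grows downward

    @property
    def centre(self):
        return ((self.x_min + self.x_max) / 2, (self.y_min + self.y_max) / 2)

def relative_relation(a: Region, b: Region) -> str:
    """Describe a's position relative to b along the dominant axis."""
    (ax, ay), (bx, by) = a.centre, b.centre
    if abs(ax - bx) >= abs(ay - by):
        return "left of" if ax < bx else "right of"
    return "above" if ay < by else "below"

def closer_to_viewer(a: Region, b: Region) -> str:
    """Crude depth cue: the region extending lower in the frame is closer."""
    return a.label if a.y_max > b.y_max else b.label

sky = Region("sky", 0, 0, 640, 160)
boat = Region("boat", 260, 300, 380, 380)
print(f"the sky is {relative_relation(sky, boat)} the boat")          # above
print(f"the {closer_to_viewer(sky, boat)} is closer to the viewer")   # boat
```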

    Mind the Gap: Another look at the problem of the semantic gap in image retrieval

    This paper attempts to review and characterise the problem of the semantic gap in image retrieval and the attempts being made to bridge it. In particular, we draw on our own experience in user queries, automatic annotation, and ontological techniques. The first section of the paper characterises the semantic gap as a hierarchy between the raw media and full semantic understanding of the media's content. The second section discusses real users' queries with respect to the semantic gap. The final sections describe our own experience in attempting to bridge the gap: we discuss our work on auto-annotation and semantic-space models of image retrieval, which bridge the gap from the bottom up, and the use of ontologies, which capture more semantics than keyword object labels alone, as a technique for bridging the gap from the top down.

    Toward a Multilevel Cognitive Probabilistic Representation of Space

    This paper addresses the problem of perception and representation of space for a mobile agent. A probabilistic hierarchical framework is suggested as a solution to this problem. The method proposed combines probabilistic belief with Object Graph Models (OGM). The world is viewed from a topological perspective, in terms of objects and the relationships between them. The hierarchical representation that we propose permits efficient and reliable modeling of the information that the mobile agent perceives from its environment. The integration of both navigational and interactional capabilities through this representation is also addressed. Experiments on a set of images taken from the real world validate the approach. This framework draws on the general understanding of human cognition and perception and contributes towards the overall effort to build cognitive robot companions.
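    A toy illustration of the object-graph idea, not the paper's OGM formulation: nodes carry a belief that an object is present, edges carry a belief in a relation between objects, and a Bayesian update revises a node's belief after a perception. All numbers and the sensor model are invented for the example.

```python
def update_belief(prior: float, p_hit: float, p_false: float, detected: bool) -> float:
    """One Bayesian update of the belief that an object node is present."""
    if detected:
        num, alt = p_hit * prior, p_false * (1 - prior)
    else:
        num, alt = (1 - p_hit) * prior, (1 - p_false) * (1 - prior)
    return num / (num + alt)

# Nodes hold object beliefs; edges hold beliefs in relations between objects.
graph = {
    "door":  {"belief": 0.5, "relations": {("left_of", "table"): 0.7}},
    "table": {"belief": 0.5, "relations": {}},
}

# Perceiving a door-like blob strengthens the "door" node.
graph["door"]["belief"] = update_belief(
    graph["door"]["belief"], p_hit=0.8, p_false=0.1, detected=True
)
print(round(graph["door"]["belief"], 3))  # 0.889
```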

    Semantic Annotation for Retrieval of Visual Resources

    Visual material plays an ever greater role in our culture, as well as in science and education. Searching large collections of visual material, however, remains a laborious process: it costs an end user a great deal of time and effort to find precisely that one image. Efficient search methods are therefore needed to make the growing collections searchable and to keep them so. Laura Hollink investigates the problems involved in searching for visual material, and possible solutions to them, across three diverse collections: paintings, photographs of organic cells, and news broadcasts.

    Toward Large Scale Semantic Image Understanding and Retrieval

    Semantic image retrieval is a multifaceted, highly complex problem. Not only does the solution to this problem require advanced image processing and computer vision techniques, but it also requires knowledge beyond what can be inferred from the image content alone. In contrast, traditional image retrieval systems are based upon keyword searches on filenames or metadata tags, e.g. Google image search, Flickr search, etc. These conventional systems do not analyze the image content, and their keywords are not guaranteed to represent the image. Thus, there is a significant need for a semantic image retrieval system that can analyze and retrieve images based upon the content and relationships that exist in the real world.

    In this thesis, I present a framework that moves towards advancing semantic image retrieval in large scale datasets. At a conceptual level, semantic image retrieval requires the following steps: viewing an image, understanding the content of the image, indexing the important aspects of the image, connecting the image concepts to the real world, and finally retrieving the images based upon the indexed concepts or related concepts. My proposed framework addresses each of these components in pursuit of my ultimate goal of improving image retrieval.

    The first task is the essential one of understanding the content of an image. Unfortunately, the only data typically used by a computer algorithm when analyzing images is the low-level pixel data. But to achieve human-level comprehension, a machine must overcome the semantic gap: the disparity that exists between the image data and human understanding. This translation of low-level information into a high-level representation is an extremely difficult problem that requires more than the image pixel information. I describe my solution to this problem through the use of an online knowledge acquisition and storage system, which utilizes the extensible, visual, and interactable properties of Scalable Vector Graphics (SVG) combined with online crowdsourcing tools to collect high-level knowledge about visual content.

    I further describe the utilization of knowledge and semantic data for image understanding. Specifically, I seek to incorporate in various algorithms knowledge that cannot be inferred from the image pixels alone. This information comes from related images or structured data (in the form of hierarchies and ontologies) and improves the performance of object detection and image segmentation tasks, which are crucial intermediate steps towards retrieval and semantic understanding. Typical object detection and segmentation tasks, however, require an abundance of training data for machine learning algorithms; this prior training data indicates what patterns and visual features the algorithm should look for when processing an image. In contrast, my algorithm utilizes related semantic images to extract the visual properties of an object and to decrease the search space of my detection algorithm. Furthermore, I demonstrate the use of related images in the image segmentation process: again without prior training data, I present a method for foreground object segmentation that finds the shared area existing in a set of images. I demonstrate the effectiveness of my method on structured image datasets that have defined relationships between classes, i.e., parent-child or sibling classes.

    Finally, I introduce my framework for semantic image retrieval. I enhance the proposed knowledge acquisition and image understanding techniques with semantic knowledge through linked data and web semantic languages, an essential step in semantic image retrieval. For example, an image processing algorithm that classifies a car but is not enhanced by external knowledge has no way of knowing that a car is a type of vehicle, closely related to a truck and less related to other modes of transportation such as a train; yet a query for modes of human transportation should return all of these classes. Thus, I demonstrate how to integrate information from both image processing algorithms and semantic knowledge bases to perform interesting queries that would otherwise be impossible. The key component of this system is a novel property reasoner that is able to translate low-level image features into semantically relevant object properties. I use a combination of XML-based languages such as SVG, RDF, and OWL in order to link to existing ontologies available on the web. My experiments demonstrate an efficient data collection framework and a novel utilization of semantic data for image analysis and retrieval on datasets of people and landmarks collected from sources such as IMDB and Flickr. Ultimately, my thesis presents improvements to the state of the art in visual knowledge representation and acquisition and in computer vision algorithms such as detection and segmentation, toward the goal of enhanced semantic image retrieval.
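    The car/vehicle example can be made concrete with a small RDF graph. The sketch below assumes the rdflib library; the namespace, class names, and the depicts predicate are illustrative inventions, not the thesis's actual ontology. The transitive subClassOf+ path is what lets a query for transportation modes reach an image labelled only "car".

```python
from rdflib import Graph, Namespace, RDFS

EX = Namespace("http://example.org/vision#")
g = Graph()

# A tiny transportation hierarchy (the real system links to web ontologies).
g.add((EX.Car, RDFS.subClassOf, EX.Vehicle))
g.add((EX.Truck, RDFS.subClassOf, EX.Vehicle))
g.add((EX.Train, RDFS.subClassOf, EX.Vehicle))
g.add((EX.Vehicle, RDFS.subClassOf, EX.TransportationMode))

# Output of a (hypothetical) detector: image img_001 depicts a car.
g.add((EX.img_001, EX.depicts, EX.Car))

# Retrieve every image depicting any transitive kind of transportation mode.
query = """
SELECT ?img ?cls WHERE {
    ?img ex:depicts ?cls .
    ?cls rdfs:subClassOf+ ex:TransportationMode .
}
"""
for img, cls in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(img, cls)  # img_001 matches via Car -> Vehicle -> TransportationMode
```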

    Semantic multimedia modelling & interpretation for search & retrieval

    The revolution in multimedia-equipped devices has culminated in a proliferation of image and video data. Owing to this omnipresence, these data have become part of our daily life, and the rate at which they are produced has outpaced our capacity to manage them; perhaps one of the most pressing problems of this digital era is information overload. Until now, progress in image and video retrieval research has achieved only limited success, owing to its interpretation of images and videos in terms of primitive features. Humans generally access multimedia assets in terms of semantic concepts, so the retrieval of digital images and videos is impeded by the semantic gap: the discrepancy between a user's high-level interpretation of an image and the information that can be extracted from the image's physical properties. Content-based image and video retrieval systems are particularly vulnerable to the semantic gap because of their dependence on low-level visual features for describing image and video content. The semantic gap can be narrowed by including high-level features, since high-level descriptions of images and videos are better able to capture the semantic meaning of their content. It is generally understood that the problem of image and video retrieval is still far from being solved. This thesis proposes an approach for intelligent multimedia semantic extraction for search and retrieval, intended to bridge the gap between visual features and semantics. It proposes a Semantic Query Interpreter (SQI) for images and videos, which selects the pertinent terms from the user query and analyses them lexically and semantically, reducing both the semantic and the vocabulary gap between users and the machine. This thesis also explores a novel ranking strategy for image search and retrieval: SemRank, a system that incorporates Semantic Intensity (SI) in exploring the semantic relevance between the user query and the available data. Semantic Intensity captures the concept dominance factor of an image: an image is a combination of various concepts, some of which are more dominant than others, and SemRank ranks the retrieved images on the basis of their Semantic Intensity. The investigations are made on the LabelMe image and LabelMe video datasets. Experiments show that the proposed approach is successful in bridging the semantic gap and that the proposed system outperforms traditional image retrieval systems.
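    A minimal sketch of ranking by a concept-dominance weight in the spirit of Semantic Intensity; the weighting scheme used here (each concept's share of the annotated area) and the data are assumptions, not the thesis's actual definition of SI.

```python
def semantic_intensity(concept_areas: dict[str, float]) -> dict[str, float]:
    """Dominance of each concept, taken here as its share of annotated area."""
    total = sum(concept_areas.values())
    return {c: a / total for c, a in concept_areas.items()}

def sem_rank(images: dict[str, dict[str, float]], query_concepts: set[str]):
    """Score each image by the summed intensity of the query concepts."""
    scored = []
    for name, areas in images.items():
        si = semantic_intensity(areas)
        scored.append((sum(si.get(c, 0.0) for c in query_concepts), name))
    return sorted(scored, reverse=True)

images = {
    "beach.jpg":    {"sea": 60.0, "sand": 30.0, "person": 10.0},
    "portrait.jpg": {"person": 80.0, "wall": 20.0},
}
print(sem_rank(images, {"person"}))  # portrait.jpg (0.8) above beach.jpg (0.1)
```

    The effect is that two images containing the same concept are no longer interchangeable: the one in which the concept dominates ranks higher.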

    Emergsem: a collaborative image annotation and retrieval approach based on emergent semantics

    The extraction of image semantics is a process that requires deep analysis of the image content; it refers to the interpretation of images from a human point of view. The semantics of an image may be generic (e.g., a vehicle) or specific (e.g., a bicycle), and the process consists in extracting single or multiple semantics from an image in order to facilitate its retrieval. These objectives clearly show that the extraction of semantics is not a new research field. This thesis deals with the collaborative semantic annotation of images and their retrieval. It discusses, firstly, how annotators can describe and represent image content based on visual information and, secondly, how image retrieval can be greatly improved thanks to recent techniques such as clustering and recommendation. To achieve these goals, it is essential to exploit implicit image content description tools, the interactions of the annotators who describe the semantics of images, and those of the users who use the generated semantics to retrieve images.

    In this thesis, we focus our research on the use of Semantic Web tools, in particular ontologies, to produce structured descriptions of images. An ontology is used to represent the objects in an image and the relationships between them; in other words, it makes it possible to formally represent the different types of objects and their relationships. An ontology encodes the relational structure of concepts, which can be used to describe and to reason; this makes ontologies eminently suited to problems such as the semantic description of images, which requires prior knowledge as well as descriptive and normative capacity.

    The contribution of this thesis focuses on three main points: semantic representation, collaborative semantic annotation, and semantic retrieval of images. Semantic representation provides a tool for capturing the semantics of images: we propose an application ontology derived from a generic ontology. The collaborative semantic annotation that we define produces emergent semantics through the fusion of the semantics proposed by the annotators. Semantic retrieval searches for images using the semantics provided by collaborative semantic annotation and is based on clustering and recommendation: clustering groups similar images corresponding to the user's query, while recommendation proposes semantics to users based on their profiles. Recommendation consists of three steps: forming the community of users, acquiring user profiles, and classifying user profiles with Galois algebra. Experiments were conducted to validate the approaches proposed in this work.
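    A toy sketch of the two retrieval techniques named above: grouping candidate images whose emergent semantics overlap (clustering), then suggesting tags a user has not used but similar users have (recommendation). The Jaccard similarity measure, the greedy grouping, and the data are illustrative assumptions; the thesis's recommendation step is profile classification with Galois algebra, which is not reproduced here.

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(images: dict[str, set], threshold: float = 0.5) -> list[list[str]]:
    """Greedy single-pass grouping of images whose semantics overlap."""
    clusters: list[list[str]] = []
    for name, sem in images.items():
        for group in clusters:
            if jaccard(sem, images[group[0]]) >= threshold:
                group.append(name)
                break
        else:
            clusters.append([name])
    return clusters

def recommend(profile: set, others: list[set], k: int = 2) -> list[str]:
    """Suggest the k tags most frequent among similar users' profiles."""
    counts: dict[str, int] = {}
    for other in others:
        if jaccard(profile, other) > 0.2:
            for tag in other - profile:
                counts[tag] = counts.get(tag, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)[:k]

imgs = {"a": {"bike", "road"}, "b": {"bike", "road", "helmet"}, "c": {"cat"}}
print(cluster(imgs))                                       # [['a', 'b'], ['c']]
print(recommend({"bike"}, [{"bike", "helmet"}, {"cat"}]))  # ['helmet']
```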

    Image retrieval using automatic region tagging

    The task of automatically tagging, annotating, or labelling image content with semantic keywords is a challenging problem, and tagging images semantically based on the objects that they contain is essential for image retrieval. In addressing these problems, we explore techniques developed to combine textual descriptions of images with visual features, automatic region tagging, and region-based ontology image retrieval. To evaluate the techniques, we use three corpora: Lonely Planet travel guide articles with images, Wikipedia articles with images, and Goats comic strips. In searching for similar images or textual information specified in a query, we explore the unification of textual descriptions and visual features (such as colour and texture) of the images. We compare the effectiveness of different retrieval similarity measures for the textual component and analyse the effectiveness of different visual features extracted from the images. We then investigate the best weight combination of textual and visual features. Using the queries from the Multimedia Track of INEX 2005 and 2006, we found that the best weight combination significantly improves the effectiveness of the retrieval system. Our findings suggest that image regions are better at capturing semantics, since we can identify specific regions of interest in an image. In this context, we develop a technique to tag image regions with high-level semantics. This is done by combining several shape feature descriptors and colour, using an equal-weight linear combination. We experimentally compare this technique with more complex machine-learning algorithms, and show that the equal-weight linear combination of shape features is simpler and at least as effective as using a machine learning algorithm. We then focus on the synergy between ontologies and image annotations, with the aim of reducing the gap between image features and high-level semantics. Ontologies ease information retrieval: they are used to mine, interpret, and organise knowledge. An ontology may be seen as a knowledge base that can be used to improve the image retrieval process, and conversely the keywords obtained from automatic tagging of image regions may be useful for creating an ontology. We engineer an ontology that acts as a surrogate for concepts derived from image feature descriptors, and test its usability by querying it via the Visual Ontology Query Interface, which has a formally specified grammar known as the Visual Ontology Query Language. We show that synergy between ontologies and image annotations is possible, and that this method can reduce the gap between image features and high-level semantics by providing the relationships between objects in the image. In this thesis, we conclude that suitable techniques for image retrieval include fusing the text accompanying images with visual features, automatic region tagging, and using an ontology to enrich the semantic meaning of the tagged image regions.
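    A hedged sketch of the equal-weight linear combination described above: each feature descriptor contributes with the same weight to a combined distance, and a region takes the label of its nearest labelled neighbour. The descriptor names, vector dimensions, and data are illustrative assumptions, not the thesis's feature set.

```python
import math

def combined_distance(region_a: dict, region_b: dict) -> float:
    """Average the per-descriptor Euclidean distances with equal weights."""
    feats = ("shape", "colour")
    return sum(math.dist(region_a[f], region_b[f]) for f in feats) / len(feats)

def tag_region(region: dict, labelled: list[tuple[str, dict]]) -> str:
    """Nearest-neighbour tagging under the combined distance."""
    return min(labelled, key=lambda lr: combined_distance(region, lr[1]))[0]

examples = [
    ("sky",   {"shape": [0.9, 0.1], "colour": [0.2, 0.8]}),
    ("grass", {"shape": [0.4, 0.7], "colour": [0.1, 0.3]}),
]
unknown = {"shape": [0.85, 0.15], "colour": [0.25, 0.75]}
print(tag_region(unknown, examples))  # sky
```

    The appeal of the equal-weight scheme is that it needs no training phase at all, which is exactly the comparison the abstract draws against machine-learning alternatives.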

    Adding spatial semantics to image annotations

    In this paper we discuss supporting users in adding spatial information semi-automatically to annotations of images. Descriptions of objects depicted in an image are extended with information about the positions of those objects. We distinguish two types of spatial concepts: absolute positions of objects (e.g., east, west) and relative spatial relations between objects (e.g., left, above). We show the use of a spatial annotation tool on a collection of art paintings with pre-existing RDF annotations. A small evaluation study is reported in which annotations generated by the tool are compared to manual annotations by ten volunteers.
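    A small sketch of the two spatial concept types named here: absolute positions in compass terms and relative relations between object pairs, emitted as triples of the sort that could extend pre-existing RDF annotations. The predicate names, the 3x3 grid, and the object data are illustrative assumptions, not the paper's vocabulary.

```python
def compass(cx: float, cy: float, w: float, h: float) -> str:
    """Map an object centre onto a 3x3 grid of compass terms."""
    ns = "north" if cy < h / 3 else ("south" if cy > 2 * h / 3 else "")
    ew = "west" if cx < w / 3 else ("east" if cx > 2 * w / 3 else "")
    return (ns + ew) or "centre"

def spatial_triples(objects: dict[str, tuple[float, float]], w: float, h: float):
    """Absolute-position triples per object, plus pairwise relative relations."""
    triples = [(name, "hasAbsolutePosition", compass(x, y, w, h))
               for name, (x, y) in objects.items()]
    names = list(objects)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            (ax, _), (bx, _) = objects[a], objects[b]
            triples.append((a, "leftOf" if ax < bx else "rightOf", b))
    return triples

objs = {"tower": (80.0, 60.0), "river": (400.0, 420.0)}
for t in spatial_triples(objs, 480.0, 480.0):
    print(t)
# ('tower', 'hasAbsolutePosition', 'northwest')
# ('river', 'hasAbsolutePosition', 'southeast')
# ('tower', 'leftOf', 'river')
```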