
    Textual Query Based Image Retrieval

    As digital cameras and camera phones have become ubiquitous, the number of consumer photos has grown rapidly, so retrieving the right image through content-based or text-based techniques has become increasingly important. Content-based image retrieval, which uses visual content to search large-scale image databases according to users' interests, has been an active and fast-advancing research area, but it is limited by the semantic gap between low-level visual features and high-level semantic concepts. We present a real-time textual query-based personal photo retrieval system that leverages millions of Web images and their associated rich textual descriptions. Given a user's textual query, the system uses an inverted file to automatically find positive Web images related to the query as well as negative Web images that are irrelevant to it, and then ranks personal photos using k-Nearest Neighbor (kNN), decision stumps, and linear SVM classifiers. To further improve retrieval performance, we employ two relevance feedback methods based on cross-domain learning, which effectively exploit both the Web images and the personal photos. DOI: 10.17762/ijritcc2321-8169.15032
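    Below is a minimal sketch of the cross-domain ranking idea in this abstract, using only the linear SVM variant: a classifier is trained on Web images treated as positives and negatives for the query and then scores the personal photos. Feature extraction, the inverted file, kNN, decision stumps, and the relevance feedback steps are omitted, and all names and dimensions are illustrative assumptions rather than the authors' implementation.

        # Minimal sketch: rank personal photos with a linear SVM trained on
        # Web images retrieved for a textual query (positives) and unrelated
        # Web images (negatives). Feature vectors are assumed precomputed.
        import numpy as np
        from sklearn.svm import LinearSVC

        def rank_personal_photos(pos_web_feats, neg_web_feats, personal_feats):
            """Return personal-photo indices sorted by estimated relevance."""
            X = np.vstack([pos_web_feats, neg_web_feats])
            y = np.concatenate([np.ones(len(pos_web_feats)),
                                np.zeros(len(neg_web_feats))])
            clf = LinearSVC(C=1.0).fit(X, y)
            scores = clf.decision_function(personal_feats)  # larger = more relevant
            return np.argsort(-scores), scores

        # Toy usage with random stand-in features (e.g. 128-D visual descriptors).
        rng = np.random.default_rng(0)
        pos = rng.normal(1.0, 1.0, (50, 128))
        neg = rng.normal(-1.0, 1.0, (50, 128))
        photos = rng.normal(0.0, 1.0, (20, 128))
        order, scores = rank_personal_photos(pos, neg, photos)
        print(order[:5])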

    Promising Large Scale Image Retrieval by Using Intelligent Semantic Binary Code Generation Technique

    A scalable content-based image retrieval system for large-scale Web databases is designed and implemented. The millions of images on the Internet pose a major challenge for retrieval that is both accurate and efficient with respect to user requirements. The proposed system combines semantic binary code generation with a semantic hashing function, coarse and fine similarity measures, and automatic and manual relevance feedback, which together improve both the accuracy and the speed of image retrieval. With the dramatic growth of Internet technology, scalable image retrieval has become a necessity for Web-based applications such as biomedical imaging, medical diagnosis, and space science. Experimental results show that the proposed system improves retrieval performance in terms of accuracy, efficiency, and retrieval time.
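    The following sketch illustrates the coarse-to-fine retrieval pattern the abstract refers to. Since the paper's learned semantic hashing function is not described here, random-projection hashing is used as a stand-in, and the code length, Hamming radius, and cosine re-ranking are illustrative choices, not the authors' method.

        # Sketch of coarse-to-fine retrieval with binary codes: random-projection
        # hashing stands in for a learned semantic hashing function.
        import numpy as np

        def make_hasher(dim, n_bits, seed=0):
            rng = np.random.default_rng(seed)
            planes = rng.normal(size=(n_bits, dim))
            return lambda X: (X @ planes.T > 0).astype(np.uint8)  # rows of 0/1 bits

        def retrieve(query_feat, db_feats, db_codes, hasher, radius=8, top_k=5):
            q_code = hasher(query_feat[None, :])[0]
            hamming = (db_codes != q_code).sum(axis=1)            # coarse filter
            candidates = np.flatnonzero(hamming <= radius)
            if candidates.size == 0:
                candidates = np.argsort(hamming)[:top_k]
            sims = db_feats[candidates] @ query_feat / (          # fine re-ranking
                np.linalg.norm(db_feats[candidates], axis=1)
                * np.linalg.norm(query_feat) + 1e-9)
            return candidates[np.argsort(-sims)][:top_k]

        rng = np.random.default_rng(1)
        db = rng.normal(size=(10000, 64))
        hasher = make_hasher(64, 32)
        codes = hasher(db)
        print(retrieve(db[42], db, codes, hasher))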

    The Designing of an Image Management System for Lecturing and Learning

    We present an image management system, called CanFind, for lecturing and learning. The Web-based system is designed to support the design of lecture and learning activities, and it provides user personalization, semantic image retrieval, and systematic browsing. To enable semantic image retrieval, we integrate keyword extraction and keyword expansion schemes into the construction of the image index, so that the desired images can be retrieved at an abstract, conceptual level. Image upload and sharing functions are also embedded in the system, allowing users to share their personal work with one another. The system has been implemented for online and classroom lectures and for learning activity design at Tamkang University. (International conference, 9-12 September 2002, Kazan, Tatarstan, Russia)
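    As a rough illustration of the indexing step mentioned above, the sketch below extracts keywords from free-text image annotations and builds an inverted index mapping keywords to image identifiers. The stopword list, identifiers, and annotations are invented for illustration, and CanFind's keyword expansion and personalization features are not shown.

        # Illustrative sketch of the indexing side of a CanFind-style system:
        # extract keywords from image annotations and build an inverted index
        # so that keyword queries map directly to image identifiers.
        import re
        from collections import defaultdict

        STOPWORDS = {"a", "an", "the", "of", "in", "on", "and", "with"}

        def extract_keywords(text):
            tokens = re.findall(r"[a-z]+", text.lower())
            return [t for t in tokens if t not in STOPWORDS]

        def build_inverted_index(annotations):
            """annotations: dict image_id -> free-text description."""
            index = defaultdict(set)
            for image_id, text in annotations.items():
                for kw in extract_keywords(text):
                    index[kw].add(image_id)
            return index

        annotations = {
            "img001": "Students working in the chemistry laboratory",
            "img002": "A lecture hall with a projector and whiteboard",
        }
        index = build_inverted_index(annotations)
        print(sorted(index["lecture"]))   # -> ['img002']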

    CanFind: a semantic image indexing and retrieval system

    We present CanFind, a semantic image indexing and retrieval system. To identify target images of interest in the database at a conceptual level, the system uses keywords as the search input. It consists of two subsystems: semantic indexing and query expansion. The semantic indexing subsystem comprises three main building blocks, namely keyword extraction, keyword expansion, and keyword weighting. WordNet is used to extend the existing keywords associated with each image, a design intended to overcome the drawbacks of conventional keyword-based image retrieval systems. The resulting word set is then filtered to extract common words and to construct the index for the corresponding image. In the query expansion subsystem, a corpus is used to help users obtain relevant and precise results when a given query would otherwise return too few or too many matches. The designed system is integrated with IWiLL, a Web-based language learning platform, to further illustrate its value. (International conference, 25-28 May 2003, Bangkok, Thailand; print proceedings)
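    A small sketch of WordNet-based keyword expansion in the spirit described above, using NLTK's WordNet corpus (which must be downloaded once via nltk.download('wordnet')). The down-weighting of expanded terms is an illustrative stand-in for the paper's keyword weighting block, not its actual scheme.

        # Sketch of WordNet-based keyword expansion: each annotation keyword is
        # extended with the lemma names of its WordNet synsets, and expanded
        # terms receive a lower weight than the original keywords.
        from nltk.corpus import wordnet as wn

        def expand_keywords(keywords, weight_original=1.0, weight_expanded=0.5):
            weighted = {kw: weight_original for kw in keywords}
            for kw in keywords:
                for synset in wn.synsets(kw):
                    for lemma in synset.lemma_names():
                        term = lemma.lower().replace("_", " ")
                        if term not in weighted:
                            weighted[term] = weight_expanded
            return weighted

        print(expand_keywords(["car", "road"]))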

    Toward Large Scale Semantic Image Understanding and Retrieval

    Semantic image retrieval is a multifaceted, highly complex problem. Not only does its solution require advanced image processing and computer vision techniques, it also requires knowledge beyond what can be inferred from the image content alone. In contrast, traditional image retrieval systems are based on keyword searches over filenames or metadata tags, e.g. Google image search, Flickr search, etc. These conventional systems do not analyze the image content, and their keywords are not guaranteed to represent the image. There is therefore a significant need for a semantic image retrieval system that can analyze and retrieve images based on their content and on relationships that exist in the real world.
    In this thesis, I present a framework that moves toward advancing semantic image retrieval on large-scale datasets. At a conceptual level, semantic image retrieval requires the following steps: viewing an image, understanding its content, indexing its important aspects, connecting the image concepts to the real world, and finally retrieving images based on the indexed concepts or related concepts. My proposed framework addresses each of these components with the ultimate goal of improving image retrieval.
    The first task is understanding the content of an image. Typically, the only data available to a computer algorithm analyzing an image is the low-level pixel data. To achieve human-level comprehension, however, a machine must overcome the semantic gap: the disparity between the image data and human understanding. Translating this low-level information into a high-level representation is an extremely difficult problem that requires more than the pixel information alone. I describe my solution to this problem through an online knowledge acquisition and storage system, which exploits the extensible, visual, and interactive properties of Scalable Vector Graphics (SVG) combined with online crowdsourcing tools to collect high-level knowledge about visual content.
    I further describe the use of knowledge and semantic data for image understanding. Specifically, I incorporate into various algorithms knowledge that cannot be inferred from the image pixels alone. This information comes from related images or from structured data (in the form of hierarchies and ontologies) and improves the performance of object detection and image segmentation, which are crucial intermediate steps toward retrieval and semantic understanding. Typical object detection and segmentation methods require an abundance of training data, which tells the learning algorithm what patterns and visual features to look for when processing an image. In contrast, my algorithm uses related semantic images to extract the visual properties of an object and to reduce the search space of the detection algorithm. I also demonstrate the use of related images in image segmentation: without prior training data, I present a method for foreground object segmentation that finds the shared region present across a set of images, and I show its effectiveness on structured image datasets with defined relationships between classes, i.e. parent-child or sibling classes.
    Finally, I introduce my framework for semantic image retrieval, in which the proposed knowledge acquisition and image understanding techniques are enhanced with semantic knowledge through linked data and Web semantic languages. This is an essential step in semantic image retrieval. For example, an image processing algorithm that recognizes a car but is not enhanced by external knowledge has no notion that a car is a type of vehicle, closely related to a truck and less related to other modes of transportation such as a train; yet a query for modes of human transportation should return all of these classes. I therefore demonstrate how to integrate information from image processing algorithms and semantic knowledge bases to answer queries that would otherwise be impossible. The key component of this system is a novel property reasoner that translates low-level image features into semantically relevant object properties. I use a combination of XML-based languages such as SVG, RDF, and OWL to link to existing ontologies available on the Web. My experiments demonstrate an efficient data collection framework and a novel use of semantic data for image analysis and retrieval on datasets of people and landmarks collected from sources such as IMDB and Flickr. Ultimately, this thesis advances the state of the art in visual knowledge representation and acquisition and in computer vision algorithms such as detection and segmentation, toward the goal of enhanced semantic image retrieval.
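    The linked-data example in the abstract (car, truck, and train as modes of transportation) can be made concrete with a toy RDF graph and a SPARQL query, as sketched below using rdflib. The namespace, property names, and detections are invented for illustration and are not the thesis's actual ontology or reasoner.

        # Toy illustration of the linked-data idea: detections from an (assumed)
        # image-analysis step are asserted as RDF triples, a tiny hand-written
        # ontology relates classes to a broader concept, and a SPARQL query
        # retrieves all images depicting any mode of transportation.
        from rdflib import Graph, Namespace, RDFS

        EX = Namespace("http://example.org/vision#")
        g = Graph()

        # Minimal ontology: car, truck and train are kinds of transportation.
        for cls in ("Car", "Truck", "Train"):
            g.add((EX[cls], RDFS.subClassOf, EX.Transportation))

        # Detections produced by the (assumed) image-processing pipeline.
        g.add((EX.img1, EX.depicts, EX.Car))
        g.add((EX.img2, EX.depicts, EX.Train))
        g.add((EX.img3, EX.depicts, EX.Dog))

        query = """
        SELECT ?img WHERE {
          ?img ex:depicts ?cls .
          ?cls rdfs:subClassOf ex:Transportation .
        }"""
        for (img,) in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
            print(img)   # img1 and img2, but not img3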

    Semantical representation and retrieval of natural photographs and medical images using concept and context-based feature spaces

    The production and distribution of image content worldwide has exploded in recent years. This creates a compelling need for innovative tools for managing and retrieving images in applications such as digital libraries, Web image search engines, and medical decision support systems. Until now, content-based image retrieval (CBIR) has addressed the problem of finding images by automatically extracting low-level visual features such as color, texture, and shape, with limited success. The main limitation is the large semantic gap between the high-level semantic concepts that users naturally associate with images and the low-level visual features the system relies upon. Research on retrieving images by semantic content is still in its infancy. A successful solution that bridges, or at least narrows, the semantic gap requires techniques from multiple fields, and specialized retrieval solutions need to emerge, each focusing on particular image domains, user search requirements, and application objectives. This work is motivated by a multi-disciplinary research effort and focuses on semantic image search from a domain perspective, with an emphasis on natural photographs and biomedical image databases. More precisely, we propose novel image representation and retrieval methods that transform low-level feature spaces into concept-based feature spaces using statistical learning techniques. To this end, we perform supervised classification to model semantic concepts and unsupervised clustering to construct a codebook of visual concepts, representing images at higher levels of abstraction for effective retrieval. Generalizing the vector space model of information retrieval, we also investigate automatic query expansion techniques from a new perspective to reduce the concept mismatch problem, analyzing correlation information at both local and global levels of a collection. In addition, to perform retrieval at a fully semantic level, we propose an adaptive fusion-based retrieval technique over content- and context-based feature spaces driven by relevance feedback from users. We developed a prototype image retrieval system as part of the CINDI (Concordia INdexing and DIscovery system) digital library project to perform exhaustive experimental evaluations and to show the effectiveness of our retrieval approaches in both narrow and broad application domains.
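    The codebook construction mentioned above (unsupervised clustering of visual features into a vocabulary of visual concepts) is commonly realised as a bag-of-visual-words model. The sketch below shows that generic pattern with k-means, assuming local descriptors have already been extracted; it is not the specific method of this work.

        # Sketch of a visual-concept codebook: cluster local descriptors with
        # k-means and represent each image as a histogram over the resulting
        # visual words. Local descriptors are assumed to be precomputed.
        import numpy as np
        from sklearn.cluster import KMeans

        def build_codebook(all_descriptors, n_words=64, seed=0):
            return KMeans(n_clusters=n_words, n_init=10,
                          random_state=seed).fit(all_descriptors)

        def image_histogram(image_descriptors, codebook):
            words = codebook.predict(image_descriptors)
            hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
            return hist / (hist.sum() + 1e-9)               # L1-normalised

        rng = np.random.default_rng(2)
        descriptors_per_image = [rng.normal(size=(200, 128)) for _ in range(5)]
        codebook = build_codebook(np.vstack(descriptors_per_image))
        hists = np.array([image_histogram(d, codebook) for d in descriptors_per_image])
        print(hists.shape)   # (5, 64)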

    Analysis of Attribute-Assisted Reranking Techniques for Web Image Search

    Commercial search engines such as Google, Yahoo, and Bing are mostly text-based: users search by keyword, which leads to ambiguity among images, so noisy or irrelevant images may appear in the retrieved results. The purpose of Web image search reranking is to reorder the retrieved results into an optimal rank list. Existing visual reranking schemes improve text-based search results by exploiting visual information, but because they rely on low-level visual features, they do not take the semantic relationships among images into account. Semantic attribute-assisted reranking has therefore been proposed for Web image search: using classifiers for predefined attributes, each image is represented by attribute features; a hypergraph models the relationships between images, and hypergraph ranking orders them, following the principle that similar images should receive similar rankings. This paper presents a detailed review of different image retrieval and reranking approaches. The purpose of the survey is to provide an overview and analysis of the functionality, merits, and demerits of existing image reranking systems, which can help researchers develop more accurate and effective systems.
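    Hypergraph ranking of the kind summarised above is often formulated as score propagation through a normalised hypergraph Laplacian. The sketch below implements that generic formulation, with one hyperedge per image covering its nearest neighbours in attribute space; the surveyed systems may differ in detail, and the features and scores are synthetic.

        # Sketch of attribute-assisted hypergraph ranking: each hyperedge joins
        # an image with its k nearest neighbours in attribute space, and scores
        # are propagated through the normalised hypergraph Laplacian.
        import numpy as np

        def hypergraph_rank(attr_feats, init_scores, k=5, alpha=0.9):
            n = len(attr_feats)
            # Incidence matrix H: one hyperedge per image, covering its k-NN set.
            dists = np.linalg.norm(attr_feats[:, None, :] - attr_feats[None, :, :], axis=2)
            H = np.zeros((n, n))
            for e in range(n):
                H[np.argsort(dists[e])[:k + 1], e] = 1.0    # vertex-by-edge
            w = np.ones(n)                                  # unit edge weights
            dv = H @ w                                      # vertex degrees
            de = H.sum(axis=0)                              # edge degrees
            Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
            theta = Dv_inv_sqrt @ H @ np.diag(w / de) @ H.T @ Dv_inv_sqrt
            # Closed-form solution of the regularised ranking objective.
            return (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * theta, init_scores)

        rng = np.random.default_rng(3)
        attrs = rng.normal(size=(30, 10))     # predicted attribute scores per image
        text_scores = rng.random(30)          # initial text-based ranking scores
        print(np.argsort(-hypergraph_rank(attrs, text_scores))[:5])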

    Utilising semantic technologies for intelligent indexing and retrieval of digital images

    The proliferation of digital media has led to a huge interest in classifying and indexing media objects for generic search and usage. In particular, we are witnessing colossal growth in digital image repositories that are difficult to navigate using free-text search mechanisms, which often return inaccurate matches as they in principle rely on statistical analysis of query keyword recurrence in the image annotation or surrounding text. In this paper we present a semantically-enabled image annotation and retrieval engine that is designed to satisfy the requirements of the commercial image collections market in terms of both accuracy and efficiency of the retrieval process. Our search engine relies on methodically structured ontologies for image annotation, thus allowing for more intelligent reasoning about the image content and subsequently obtaining a more accurate set of results and a richer set of alternatives matching the original query. We also show how our well-analysed and carefully designed domain ontology contributes to the implicit expansion of user queries, as well as how lexical databases are exploited for explicit semantic-based query expansion.
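    The implicit query expansion described above can be illustrated with a toy, hand-written ontology: a query concept is expanded to its transitive subclasses before being matched against image annotation keywords. The miniature ontology, image identifiers, and keywords below are invented for illustration and are not the paper's domain ontology.

        # Toy illustration of ontology-driven query expansion: expand a query
        # concept to all of its subclasses, then match annotated images.
        ONTOLOGY = {                      # concept -> direct subclasses
            "vehicle": ["car", "truck", "bicycle"],
            "car": ["convertible", "suv"],
        }

        def expand_concept(concept):
            """Return the concept together with all of its transitive subclasses."""
            terms, stack = set(), [concept]
            while stack:
                term = stack.pop()
                if term not in terms:
                    terms.add(term)
                    stack.extend(ONTOLOGY.get(term, []))
            return terms

        def search(query, annotations):
            terms = expand_concept(query)
            return [img for img, keywords in annotations.items() if terms & set(keywords)]

        annotations = {"img1": {"suv", "road"}, "img2": {"boat", "sea"}}
        print(search("vehicle", annotations))   # -> ['img1']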