114,108 research outputs found

    Color image retrieval using taken images

    Nowadays, content-based image retrieval from large resources has become an area of wide interest in many applications. In this paper we present a color-based image retrieval system that uses color and texture as visual features to describe the content of an image region. To speed up retrieval and similarity computation, the database images are segmented and the extracted regions are clustered according to their feature vectors. This process is performed offline before query processing, so to answer a query our system need not search the entire image database; instead, only a small number of candidate images are searched for similarity. Our proposed system has the advantage of increasing retrieval accuracy while decreasing retrieval time. The experimental evaluation of the system is based on a database of 1,000 real color photographs. The experimental results show that our system performs significantly better and faster than other existing systems. In our analysis, we compare retrieval results based on relevancy for the given ten classes. The results demonstrate that each type of feature is effective for a particular type of image according to its semantic content, and that using a combination of them gives better retrieval results for almost all semantic classes.
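
    The offline clustering idea in this abstract can be sketched as follows: region feature vectors are clustered with k-means ahead of time, and a query is compared only against images owning regions in its nearest clusters. This is a minimal illustration with made-up dimensions and parameters, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Offline: cluster region feature vectors (e.g. color/texture descriptors)
# so a query only needs to scan images in the nearest clusters.
rng = np.random.default_rng(0)
region_features = rng.random((5000, 32))        # one row per segmented region
region_to_image = rng.integers(0, 1000, 5000)   # which database image owns each region

kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(region_features)

def candidate_images(query_features, n_clusters=3):
    """Return ids of images owning regions in the clusters nearest the query."""
    # Distance from each query region to every cluster centroid.
    d = np.linalg.norm(
        kmeans.cluster_centers_[None, :, :] - query_features[:, None, :], axis=2)
    nearest = np.unique(np.argsort(d, axis=1)[:, :n_clusters])
    mask = np.isin(kmeans.labels_, nearest)
    return np.unique(region_to_image[mask])

query = rng.random((4, 32))                     # regions of the query image
print(len(candidate_images(query)), "candidate images instead of 1000")
```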

    Textual Query Based Image Retrieval

    As digital cameras and mobile phones have become popular, the number of consumer photos has grown very fast, so retrieving the appropriate image with content-based or text-based image retrieval techniques has become a very broad field. Content-based image retrieval, a technique which uses visual content to search images from large-scale image databases according to users' interests, has been an active and fast-advancing research area; its central difficulty is the semantic gap between low-level visual features and high-level semantic concepts. We present a real-time textual query-based personal photo retrieval system that leverages millions of Web images and their associated rich textual descriptions. The user provides a textual query, and our system uses an inverted file to automatically find the positive Web images that are related to the textual query as well as the negative Web images that are irrelevant to it. We then use k-Nearest Neighbor (kNN), decision stumps, and a linear SVM to rank the personal photos. To further improve photo retrieval performance, we use two relevance feedback methods via cross-domain learning, which effectively utilize both the Web images and the personal images. DOI: 10.17762/ijritcc2321-8169.15032
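
    A minimal sketch of the kNN ranking step described above: Web images found for the textual query act as pseudo-positive and pseudo-negative training data, and personal photos are ranked by their predicted relevance. Feature dimensions and distributions here are placeholder assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Web images retrieved via the inverted file for a textual query:
pos_web = rng.normal(0.5, 1.0, (200, 64))   # relevant to the query
neg_web = rng.normal(-0.5, 1.0, (200, 64))  # irrelevant to the query

X = np.vstack([pos_web, neg_web])
y = np.array([1] * 200 + [0] * 200)

knn = KNeighborsClassifier(n_neighbors=15).fit(X, y)

# Rank personal photos by the predicted probability of the "relevant" class.
personal = rng.normal(0.0, 1.0, (50, 64))
scores = knn.predict_proba(personal)[:, 1]
ranking = np.argsort(-scores)
print("top-5 personal photos:", ranking[:5])
```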

    A Prototype System using Lexical Chains for Web Images Retrieval Based on Text Description and Visual Features

    Content-Based Image Retrieval has not been analyzed adequately in existing systems. Here, we implement a prototype system for Web-based image retrieval. The system is based on describing images by lexical chains, which are extracted from the text related to images in a web page. In this paper, we provide Relevance Feedback (RF) techniques that aim to meet real-world user requirements. The relevance feedback techniques, based on the textual description of images, are extended to support image retrieval that combines textual and visual features. All the feedback techniques are implemented and compared using precision and recall criteria. The experimental results show that retrieval methods that make use of both text and visual features achieve overall better results than methods based only on an image's text description.
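
    The abstract does not spell out the feedback formulas, so the sketch below uses the classic Rocchio update over a concatenated text-plus-visual feature vector as one plausible way to combine both modalities in relevance feedback; all names, dimensions, and weights are illustrative.

```python
import numpy as np

def rocchio(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: pull the query toward user-marked relevant
    feedback vectors and push it away from irrelevant ones."""
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(irrelevant):
        q -= gamma * np.mean(irrelevant, axis=0)
    return q

rng = np.random.default_rng(2)
# Concatenated representation: text features (e.g. tf-idf weights over
# lexical-chain terms) followed by visual features.
query = np.concatenate([rng.random(100), rng.random(32)])
relevant = rng.random((5, 132))     # results the user marked as relevant
irrelevant = rng.random((3, 132))   # results the user marked as irrelevant
updated = rocchio(query, relevant, irrelevant)
print(updated.shape)
```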

    Visual intelligence for online communities : commonsense image retrieval by query expansion

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004. Includes bibliographical references (leaves 65-67).
    This thesis explores three weaknesses of keyword-based image retrieval through the design and implementation of an actual image retrieval system. The first weakness is the requirement of heavy manual annotation of keywords for images. We investigate this weakness by aggregating the annotations of an entire community of users to alleviate the annotation requirements on the individual user. The second weakness is the hit-or-miss nature of exact keyword matching used in many existing image retrieval systems. We explore this weakness by using linguistics tools (WordNet and the OpenMind Commonsense database) to locate image keywords in a semantic network of interrelated concepts so that retrieval by keywords is automatically expanded semantically to avoid the hit-or-miss problem. Such semantic query expansion further alleviates the requirement for exhaustive manual annotation. The third weakness of keyword-based image retrieval systems is the lack of support for retrieval by subjective content. We investigate this weakness by creating a mechanism to allow users to annotate images by their subjective emotional content and subsequently to retrieve images by these emotions. This thesis is primarily an exploration of different keyword-based image retrieval techniques in a real image retrieval system. The design of the system is grounded in past research that sheds light onto how people actually encounter the task of describing images with words for future retrieval. The image retrieval system's front-end and back-end are fully integrated with the Treehouse Global Studio online community, an online environment with a suite of media design tools and database storage of media files and metadata. The focus of the thesis is on exploring new user scenarios for keyword-based image retrieval rather than quantitative assessment of retrieval effectiveness. Traditional information retrieval evaluation metrics are discussed but not pursued. The user scenarios for our image retrieval system are analyzed qualitatively in terms of system design and how they facilitate the overall retrieval experience.
    James Jian Dai. S.M.
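
    The semantic query expansion described above can be approximated with WordNet through NLTK, as in this minimal sketch (the OpenMind Commonsense component is omitted); it assumes the `nltk` package and its `wordnet` corpus are installed.

```python
# pip install nltk; then: python -m nltk.downloader wordnet
from nltk.corpus import wordnet as wn

def expand_keyword(keyword, max_terms=10):
    """Expand a query keyword with synonyms and hypernyms from WordNet,
    so retrieval is not hit-or-miss on the exact annotation term."""
    terms = set()
    for synset in wn.synsets(keyword):
        terms.update(l.name().replace("_", " ") for l in synset.lemmas())
        for hyper in synset.hypernyms():
            terms.update(l.name().replace("_", " ") for l in hyper.lemmas())
    terms.discard(keyword)
    return sorted(terms)[:max_terms]

print(expand_keyword("dog"))   # e.g. ['canine', 'domestic dog', ...]
```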

    Indexing and Retrieving Photographic Images Using a Combination of Geo-Location and Content-Based Features

    This paper presents a novel method that automatically indexes and searches for relevant images using a combination of geo-coded information and content-based visual features. Photographic images are labeled with their corresponding GPS (Global Positioning System) coordinates and UTC (Coordinated Universal Time) timestamps at the moment of capture, which are then utilized to create spatial and temporal indexes for photograph retrieval. Assessing the performance in terms of average precision and F-score on real-world image collections revealed that the proposed approach significantly improved the retrieval process compared to searches based on visual content alone. Combining content and context information thus offers a useful and meaningful new approach to searching and managing large image collections.
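
    A sketch of the combination idea: geo proximity computed from the GPS labels blended with visual similarity. The haversine conversion and the weighting scheme are generic assumptions, not the paper's exact formulation.

```python
import math
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS coordinates in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def combined_score(query, photo, w_geo=0.5):
    """Blend geo proximity with cosine visual similarity (both in [0, 1])."""
    d = haversine_km(*query["gps"], *photo["gps"])
    geo_sim = 1.0 / (1.0 + d)                       # closer -> higher score
    v_sim = float(np.dot(query["visual"], photo["visual"]) /
                  (np.linalg.norm(query["visual"]) * np.linalg.norm(photo["visual"])))
    return w_geo * geo_sim + (1 - w_geo) * v_sim

rng = np.random.default_rng(3)
q = {"gps": (46.50, 6.60), "visual": rng.random(16)}
p = {"gps": (46.52, 6.63), "visual": rng.random(16)}
print(round(combined_score(q, p), 3))
```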

    Investigating the Behavior of Compact Composite Descriptors in Early Fusion, Late Fusion and Distributed Image Retrieval

    In Content-Based Image Retrieval (CBIR) systems, the visual content of the images is mapped into a new space named the feature space. The features that are chosen must be discriminative and sufficient for the description of the objects. The key to attaining a successful retrieval system is to choose the right features, representing the images as uniquely as possible. A feature is a set of characteristics of the image, such as color, texture, and shape. In addition, a feature can be enriched with information about the spatial distribution of the characteristic that it describes. Evaluation of the performance of low-level features is usually done on homogeneous benchmarking databases with a limited number of images. In real-world image retrieval systems, databases have a much larger scale and may be heterogeneous. This paper investigates the behavior of Compact Composite Descriptors (CCDs) on heterogeneous databases of a larger scale. Early and late fusion techniques are tested and their performance in distributed image retrieval is calculated. This study demonstrates that, even if it is not possible to overcome the semantic gap in image retrieval by feature similarity, it is still possible to increase the retrieval effectiveness.
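
    The early/late fusion contrast investigated in the paper can be illustrated as follows; the two random descriptors stand in for actual Compact Composite Descriptors, and the score-averaging rule for late fusion is one simple choice among many.

```python
import numpy as np

rng = np.random.default_rng(4)
n_db = 100
# Two low-level descriptors per database image (stand-ins for compact
# composite descriptors combining e.g. color and texture information).
desc_a = rng.random((n_db, 24)); q_a = rng.random(24)
desc_b = rng.random((n_db, 48)); q_b = rng.random(48)

def dists(db, q):
    """Euclidean distance from the query to every database image."""
    return np.linalg.norm(db - q, axis=1)

# Early fusion: concatenate features first, compare in the joint space.
early = dists(np.hstack([desc_a, desc_b]), np.concatenate([q_a, q_b]))

# Late fusion: score with each descriptor separately, then merge the
# normalized distances (a simple average; rank-based merging is also common).
da, db_ = dists(desc_a, q_a), dists(desc_b, q_b)
late = 0.5 * da / da.max() + 0.5 * db_ / db_.max()

print("early-fusion top-5:", np.argsort(early)[:5])
print("late-fusion  top-5:", np.argsort(late)[:5])
```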

    Content-based indexing of low resolution documents

    In multimedia presentations, it is increasingly popular for attendees to take pictures of slides that interest them using capture devices such as cameras or mobile phones. To enhance the usefulness of these images, they can be linked to an image or video database serving file archiving, teaching and learning, research, and knowledge management, all of which involve image search. However, such devices often produce low-resolution images degraded by poor lighting and noise. Content-Based Image Retrieval (CBIR) is considered among the most interesting and promising fields in image search, which is the task of finding images in a given database that are similar to a known query image. This thesis concerns methods for identifying documents captured with such devices, as well as a technique for retrieving images from an indexed image database; both apply digital image processing techniques. To build an index structure for fast, high-quality content-based image retrieval, some existing representative signatures and key indexes have been revised. Retrieval performance relies heavily on how the indexing is done. Existing retrieval approaches make use of shape, colour, and texture features; considering these features relative to individual databases, most of them perform poorly on low-resolution documents, consume a lot of time, and in some cases return irrelevant images for a given query. The identification and indexing method proposed in this thesis uses a Visual Signature (VS), consisting of the graphical information of the captured slide's textual layout, shape moments, and the spatial distribution of colour. This signature-based approach is designed for fast and efficient matching to fulfil the needs of real-time applications, and it can cope with the problems of low-resolution documents such as noisy images, varying lighting conditions, and complex backgrounds. We present hierarchical indexing techniques founded on trees and clustering. K-means clustering is used for visual features such as colour, since their spatial distribution gives good global information about an image. Extracted layout and shape features are structured hierarchically in a tree index, and Euclidean distance is used to find similar images for CBIR. The proposed indexing scheme is assessed with recall and precision, the standard CBIR retrieval performance evaluation. We develop a CBIR system and conduct various retrieval experiments with the fundamental aim of comparing retrieval accuracy. A new algorithm for integrated visual signatures, especially in late-fusion queries, is introduced; it reduces the shortcomings associated with normalisation in the early fusion technique. Slides from conference, lecture, and meeting presentations are used to compare the proposed technique's performance with that of existing approaches on real data. The findings of this thesis present exciting possibilities, as the CBIR system is able to produce high-quality results even for queries that use low-resolution documents.
    In the future, the use of multimodal signatures, relevance feedback, and artificial intelligence techniques is recommended to further enhance the performance of the CBIR system.
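
    A minimal sketch of the colour component of such a visual signature: k-means over pixel colours yields a compact signature per captured slide, and Euclidean distance between signatures ranks the database (the layout and shape-moment components, and the tree index, are omitted). All sizes and parameters here are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def color_signature(pixels, k=4):
    """Cluster pixel colours with k-means and return the sorted centroids
    as a compact global colour signature for a captured slide image."""
    km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(pixels)
    centers = km.cluster_centers_
    order = np.argsort(centers.sum(axis=1))   # canonical order for comparison
    return centers[order].ravel()

rng = np.random.default_rng(5)
db_images = [rng.random((500, 3)) for _ in range(20)]   # flattened RGB pixels
signatures = np.array([color_signature(p) for p in db_images])

query_sig = color_signature(rng.random((500, 3)))
ranking = np.argsort(np.linalg.norm(signatures - query_sig, axis=1))
print("closest database images:", ranking[:3])
```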

    The Parallel Distributed Image Search Engine (ParaDISE)

    Image retrieval is a complex task that differs according to the context and user requirements of each specific field, for example a medical environment. Search by text is often not possible or optimal, and retrieval by visual content does not always succeed in modelling the high-level concepts that a user is looking for. Modern image retrieval techniques consist of multiple steps and aim to retrieve information from large-scale datasets based not only on global image appearance but also on local features and, where possible, on a connection between visual features and text or semantics. This paper presents the Parallel Distributed Image Search Engine (ParaDISE), an image retrieval system that combines visual search with text-based retrieval and that is available as open source and free of charge. The main design concepts of ParaDISE are flexibility, expandability, scalability, and interoperability. These concepts make the system usable both in real-world applications and as an image retrieval research platform. Apart from the architecture and implementation of the system, two use cases are described: an application of ParaDISE to the retrieval of images from the medical literature, and a visual feature evaluation for medical image retrieval. Future steps include the creation of an open source community that will contribute to and expand this platform based on the existing parts.
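
    ParaDISE's actual components are not reproduced here; the sketch below only illustrates the generic parallel-search pattern such an engine relies on, fanning a query out over index shards and merging the per-shard top-k results. The shard layout and sizes are invented for the example.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def search_shard(args):
    """Scan one index shard and return its (distance, image_id) top-k."""
    shard_features, shard_ids, query, k = args
    d = np.linalg.norm(shard_features - query, axis=1)
    top = np.argsort(d)[:k]
    return [(float(d[i]), int(shard_ids[i])) for i in top]

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    features = rng.random((4000, 64))   # visual index of the whole collection
    ids = np.arange(4000)
    query, k = rng.random(64), 5

    # Split the index into four shards and search them in parallel.
    shards = [(features[s::4], ids[s::4], query, k) for s in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        partial = [hit for hits in pool.map(search_shard, shards) for hit in hits]

    # Merge the per-shard candidates into a global top-k.
    print(sorted(partial)[:k])
```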

    Toward Large Scale Semantic Image Understanding and Retrieval

    Semantic image retrieval is a multifaceted, highly complex problem. Not only does the solution to this problem require advanced image processing and computer vision techniques, but it also requires knowledge beyond what can be inferred from the image content alone. In contrast, traditional image retrieval systems are based upon keyword searches on filenames or metadata tags, e.g. Google image search, Flickr search, etc. These conventional systems do not analyze the image content, and their keywords are not guaranteed to represent the image. Thus, there is significant need for a semantic image retrieval system that can analyze and retrieve images based upon the content and relationships that exist in the real world.
    In this thesis, I present a framework that moves towards advancing semantic image retrieval in large scale datasets. At a conceptual level, semantic image retrieval requires the following steps: viewing an image, understanding the content of the image, indexing the important aspects of the image, connecting the image concepts to the real world, and finally retrieving the images based upon the indexed concepts or related concepts. My proposed framework addresses each of these components in my ultimate goal of improving image retrieval.
    The first task is the essential task of understanding the content of an image. Unfortunately, typically the only data used by a computer algorithm when analyzing images is the low-level pixel data. But, to achieve human-level comprehension, a machine must overcome the semantic gap, the disparity that exists between the image data and human understanding. This translation of the low-level information into a high-level representation is an extremely difficult problem that requires more than the image pixel information. I describe my solution to this problem through the use of an online knowledge acquisition and storage system. This system utilizes the extensible, visual, and interactable properties of Scalable Vector Graphics (SVG) combined with online crowd-sourcing tools to collect high-level knowledge about visual content.
    I further describe the utilization of knowledge and semantic data for image understanding. Specifically, I seek to incorporate in various algorithms knowledge that cannot be inferred from the image pixels alone. This information comes from related images or structured data (in the form of hierarchies and ontologies) and improves the performance of object detection and image segmentation tasks. These understanding tasks are crucial intermediate steps towards retrieval and semantic understanding. However, typical object detection and segmentation tasks require an abundance of training data for machine learning algorithms. The prior training information tells the algorithm what patterns and visual features to look for when processing an image. In contrast, my algorithm utilizes related semantic images to extract the visual properties of an object and also to decrease the search space of my detection algorithm. Furthermore, I demonstrate the use of related images in the image segmentation process. Again, without the use of prior training data, I present a method for foreground object segmentation by finding the shared area that exists in a set of images. I demonstrate the effectiveness of my method on structured image datasets that have defined relationships between classes, i.e. parent-child or sibling classes.
    Finally, I introduce my framework for semantic image retrieval. I enhance the proposed knowledge acquisition and image understanding techniques with semantic knowledge through linked data and Web semantic languages. This is an essential step in semantic image retrieval. For example, a car class identified by an image processing algorithm not enhanced by external knowledge carries no information that a car is a type of vehicle, highly related to a truck and less related to other transportation methods like a train. However, a query for modes of human transportation should return all of the mentioned classes. Thus, I demonstrate how to integrate information from both image processing algorithms and semantic knowledge bases to perform interesting queries that would otherwise be impossible. The key component of this system is a novel property reasoner that is able to translate low-level image features into semantically relevant object properties. I use a combination of XML-based languages such as SVG, RDF, and OWL in order to link to existing ontologies available on the web. My experiments demonstrate an efficient data collection framework and a novel utilization of semantic data for image analysis and retrieval on datasets of people and landmarks collected from sources such as IMDB and Flickr. Ultimately, my thesis presents improvements to the state of the art in visual knowledge representation/acquisition and in computer vision algorithms such as detection and segmentation, toward the goal of enhanced semantic image retrieval.
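
    The transportation example above can be made concrete with a small linked-data sketch using `rdflib`: image labels produced by a vision algorithm are typed against a toy class hierarchy, and a SPARQL query for the superclass retrieves car and train images alike. The ontology, namespace, and image names are made up for illustration.

```python
# pip install rdflib
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")   # illustrative namespace, not a real ontology
g = Graph()

# Tiny class hierarchy: car, truck, train are modes of transportation.
for cls in ("Car", "Truck", "Train"):
    g.add((EX[cls], RDFS.subClassOf, EX.Transportation))

# Images classified by a vision algorithm, linked as instances.
g.add((EX.img1, RDF.type, EX.Car))
g.add((EX.img2, RDF.type, EX.Train))
g.add((EX.img3, RDF.type, EX.Dog))

# SPARQL: every image whose class is a (direct) subclass of Transportation.
q = """
SELECT ?img WHERE {
  ?img a ?cls .
  ?cls rdfs:subClassOf ex:Transportation .
}"""
results = g.query(q, initNs={"ex": EX, "rdfs": RDFS})
print([str(r.img) for r in results])   # img1 and img2, but not img3
```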

    Content-based image analysis with applications to the multifunction printer imaging pipeline and image databases

    Image understanding is one of the most important topics for various applications. Most image understanding studies focus on content-based approaches, while some others also rely on image metadata. Image understanding includes several sub-topics, such as classification, segmentation, retrieval, and automatic annotation, which have been heavily studied in recent years. This thesis proposes several new methods and algorithms for image classification, retrieval, and automatic tag generation. The proposed algorithms have been tested and verified on multiple platforms. For image classification, our proposed method can complete classification in real time under the hardware constraints of an all-in-one printer and adaptively improve itself by online learning. Another image understanding engine, which includes both classification and image quality analysis, is designed to solve the optimal compression problem of the printing system. Our proposed image retrieval algorithm can be applied to either a PC or a mobile device to improve the hybrid learning experience. We also develop a new matrix factorization algorithm to better recover image metadata (tags). The proposed algorithm outperforms other existing matrix factorization methods.
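
    The thesis's specific factorization is not reproduced here; the sketch below shows the generic idea of recovering missing image tags by fitting low-rank factors to the observed entries of an image-tag matrix with gradient descent. All sizes, rank, and hyperparameters are illustrative.

```python
import numpy as np

def factorize(M, mask, rank=5, steps=2000, lr=0.01, reg=0.02):
    """Recover missing entries of an image-tag matrix M by fitting
    low-rank factors U, V on the observed entries only."""
    rng = np.random.default_rng(7)
    n, m = M.shape
    U = rng.normal(scale=0.1, size=(n, rank))
    V = rng.normal(scale=0.1, size=(m, rank))
    for _ in range(steps):
        E = mask * (M - U @ V.T)            # error on observed entries only
        U += lr * (E @ V - reg * U)
        V += lr * (E.T @ U - reg * V)
    return U @ V.T

rng = np.random.default_rng(8)
true_scores = rng.random((30, 2)) @ rng.random((2, 12))  # rank-2 ground truth
mask = rng.random((30, 12)) < 0.6                        # 60% of tags observed
recovered = factorize(true_scores * mask, mask.astype(float), rank=2)
err = np.abs(recovered - true_scores)[~mask].mean()
print(f"mean error on held-out tags: {err:.3f}")
```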