    DEVELOPMENT OF AN IMAGE RETRIEVAL SYSTEM WITH MULTIMODAL DATA USING MICROSTRUCTURE DESCRIPTOR AND PLSA

    Content Based Image Retrieval (CBIR) is a system that aims to find relevant images based on their visual data. As the field developed, CBIR came to involve the textual data attached to images as well (multimodal data), by finding correlations between visual and textual data that can be computed using the Probabilistic Latent Semantic Analysis (PLSA) algorithm. This research combines the Microstructure Descriptor (MSD) feature extraction algorithm with PLSA to build a multimodal-data CBIR system with a faster computation time. As a comparison, the PLSA-MSD system built here is evaluated against the PLSA-SIFT system used in existing research. Experimental results show that the PLSA-MSD combination is 300% faster than PLSA-SIFT. Keywords: PLSA, CBIR, MSD, SIFT, auto-annotation
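
    The PLSA component of such a pipeline can be sketched briefly: given a co-occurrence matrix of images against quantized visual words (e.g. from MSD features) and caption terms, EM alternates between estimating topic responsibilities and re-estimating the aspect distributions. A minimal NumPy sketch of standard PLSA, not the paper's implementation; all names and inputs are illustrative:

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit P(z|d) and P(w|z) by EM on an (n_docs, n_words) count matrix.

    `counts` would hold co-occurrences of images (rows) with quantized
    visual words and caption terms (columns).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_topics))   # P(z|d)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))  # P(w|z)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # E-step: responsibilities, P(z|d,w) proportional to P(z|d) P(w|z).
        # The dense (d, z, w) tensor is for clarity only; real code would
        # loop over the sparse entries of `counts`.
        joint = np.einsum('dz,zw->dzw', p_z_d, p_w_z)
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        weighted = counts[:, None, :] * joint  # n(d,w) * P(z|d,w)
        # M-step: re-estimate both aspect distributions.
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```

    Retrieval then compares query and database images by their topic mixtures P(z|d), for instance with cosine similarity.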

    Aggregating Local Features into Bundles for High-Precision Object Retrieval

    Due to the omnipresence of digital cameras and mobile phones, the number of images stored in image databases has grown tremendously in recent years. It becomes apparent that new data management and retrieval techniques are needed to deal with increasingly large image databases. This thesis presents new techniques for content-based image retrieval where the image content itself is used to retrieve images by visual similarity from databases. We focus on the query-by-example scenario, assuming the image itself is provided as a query to the retrieval engine. In many image databases, images are associated with metadata, which may be exploited to improve retrieval performance. In this work, we present a technique that fuses cues from the visual domain and textual annotations into a single compact representation. This combined multimodal representation performs significantly better than the underlying unimodal representations, which we demonstrate on two large-scale image databases consisting of up to 10 million images. The main focus of this work is on feature bundling for object retrieval and logo recognition. We present two novel feature bundling techniques that aggregate multiple local features into a single visual description. In contrast to many other works, both approaches encode geometric information about the spatial layout of local features into the corresponding visual description itself. These descriptions are therefore highly distinctive and suitable for high-precision object retrieval. We demonstrate the use of both bundling techniques for logo recognition. Here, recognition is performed by retrieving visually similar images from a database of reference images, making the recognition systems easily scalable to a large number of classes. The results show that our retrieval-based methods can successfully identify small objects such as logos with an extremely low false positive rate. In particular, our feature bundling techniques are beneficial because false positives are effectively avoided upfront due to the highly distinctive descriptions. We further demonstrate and thoroughly evaluate the use of our bundling technique based on min-Hashing for image and object retrieval. Compared to approaches based on conventional bag-of-words retrieval, it has much higher efficiency: the retrieved result lists are shorter and cleaner while recall stays at an equal level. The results suggest that this bundling scheme may act as a pre-filtering step in a wide range of scenarios and underline the high effectiveness of this approach. Finally, we present a new variant for extremely fast re-ranking of retrieval results, which ranks the retrieved images according to the spatial consistency of their local features with those of the query image. The demonstrated method is robust to outliers, performs better than existing methods, and can process several hundred to thousands of images per second on a single thread.
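
    The min-Hashing bundling idea can be illustrated with a small sketch: each bundle is treated as a set of visual-word IDs (a central feature plus its spatial neighbors), and min-hash sketches of these sets serve as hash keys, so bundles sharing most of their words collide and become retrieval candidates. This is a hedged illustration of the general technique, not the thesis' implementation; parameters and names are invented:

```python
import random

PRIME = 4294967311  # a prime larger than any 32-bit visual-word ID

def make_minhash_funcs(n_hashes, seed=0):
    """Draw random linear hash functions h(w) = (a*w + b) mod PRIME."""
    rnd = random.Random(seed)
    return [(rnd.randrange(1, PRIME), rnd.randrange(PRIME))
            for _ in range(n_hashes)]

def minhash_sketch(word_ids, funcs):
    """Min-hash sketch of a bundle given as a set of visual-word IDs."""
    return tuple(min((a * w + b) % PRIME for w in word_ids)
                 for a, b in funcs)

def bundle(central_word, neighbor_words):
    """A bundle: the central feature's word plus the words of nearby
    local features, one way to fold spatial layout into a set."""
    return {central_word} | set(neighbor_words)

funcs = make_minhash_funcs(n_hashes=3)
# Bundles that share most of their words agree in many sketch positions,
# so the sketch tuples can serve as hash-table keys for candidate lookup.
s1 = minhash_sketch(bundle(42, [7, 99, 123]), funcs)
s2 = minhash_sketch(bundle(42, [7, 99, 500]), funcs)
print(s1, s2)
```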

    Discovering a Domain Knowledge Representation for Image Grouping: Multimodal Data Modeling, Fusion, and Interactive Learning

    In visually-oriented specialized medical domains such as dermatology and radiology, physicians explore interesting image cases from medical image repositories for comparative case studies to aid clinical diagnoses, educate medical trainees, and support medical research. However, general image classification and retrieval approaches fail in grouping medical images from the physicians' viewpoint. This is because fully-automated learning techniques cannot yet bridge the gap between image features and domain-specific content in the absence of expert knowledge. Understanding how experts get information from medical images is therefore an important research topic. As a prior study, we conducted data elicitation experiments in which physicians were instructed to inspect each medical image towards a diagnosis while describing the image content to a student seated nearby. The experts' eye movements and their verbal descriptions of the image content were recorded to capture various aspects of expert image understanding. This dissertation aims at an intuitive approach to extracting expert knowledge, which is to find patterns in the expert data elicited from image-based diagnoses. These patterns are useful for understanding both the characteristics of the medical images and the experts' cognitive reasoning processes. The transformation from viewed raw image features to interpretation as domain-specific concepts requires experts' domain knowledge and cognitive reasoning. This dissertation also approximates this transformation using a matrix factorization-based framework, which helps project multiple expert-derived data modalities to high-level abstractions. To combine additional expert interventions with computational processing capabilities, an interactive machine learning paradigm is developed that treats experts as an integral part of the learning process. Specifically, experts locally refine the medical image groups presented by the learned model, and the model is then incrementally re-learned globally. This paradigm avoids onerous expert annotations for model training, while aligning the learned model with the experts' sense-making.
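
    The abstract does not spell out the factorization, but the general pattern of projecting several expert-derived modalities into a shared low-dimensional abstraction can be sketched. A speculative illustration using nonnegative matrix factorization on stacked modality matrices; all data, dimensions, and the joint scheme itself are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical per-image modality matrices (rows are images): visual
# features, eye-movement statistics, and verbal-description term counts.
n_images = 200
X_visual = rng.random((n_images, 128))
X_gaze = rng.random((n_images, 32))
X_verbal = rng.random((n_images, 500))

# One simple joint scheme: rescale each modality, stack them side by
# side, and factor X ~ W H so every image gets a shared latent code
# W[i] that mixes evidence from all modalities.
X = np.hstack([m / m.max() for m in (X_visual, X_gaze, X_verbal)])
W = NMF(n_components=10, init='nndsvda', max_iter=500).fit_transform(X)

# Image groups can now be proposed in the latent space, e.g. by
# clustering the rows of W, and refined by the expert in the loop.
```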

    Bag-of-Words Representation in Image Annotation: A Review

    Semantic multimedia analysis using knowledge and context

    The difficulty of semantic multimedia analysis can be attributed to the extended diversity in form and appearance exhibited by the majority of semantic concepts and to the difficulty of expressing them using a finite number of patterns. In meeting this challenge there has been a scientific debate on whether the problem should be addressed from the perspective of using overwhelming amounts of training data to capture all possible instantiations of a concept, or from the perspective of using explicit knowledge about the concepts' relations to infer their presence. In this thesis we address three problems of pattern recognition and propose solutions that combine the knowledge extracted implicitly from training data with the knowledge provided explicitly in structured form. First, we propose a Bayesian network (BN) modeling approach that defines a conceptual space where both domain-related evidence and evidence derived from content analysis can be jointly considered to support or disprove a hypothesis. The use of this space leads to significant gains in performance compared to analysis methods that cannot handle combined knowledge. Then, we present an unsupervised method that exploits the collective nature of social media to automatically obtain large amounts of annotated image regions. By proving that the quality of the obtained samples can be almost as good as that of manually annotated images when working with large datasets, we contribute significantly towards scalable object detection. Finally, we introduce a method that treats images, visual features and tags as the three observable variables of an aspect model and extracts a set of latent topics that incorporates the semantics of both the visual and the tag information space. By showing that the cross-modal dependencies of tagged images can be exploited to increase the semantic capacity of the resulting space, we advocate the use of all existing information facets in the semantic analysis of social media.
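
    One plausible form of the tri-variable aspect model described in the last contribution: visual words v and tags t of an image d are generated through a shared latent topic mixture, so the cross-modal dependencies meet in P(z|d). This is the standard multimodal pLSA formulation, not necessarily the thesis' exact model:

```latex
% Visual words v and tags t share the image's topic mixture P(z|d):
\[
  P(v \mid d) = \sum_{z} P(z \mid d)\, P(v \mid z), \qquad
  P(t \mid d) = \sum_{z} P(z \mid d)\, P(t \mid z),
\]
% and the parameters are fit by EM to maximize the tagged-image likelihood
\[
  \mathcal{L} = \sum_{d} \Big( \sum_{v} n(d, v) \log P(v \mid d)
              + \sum_{t} n(d, t) \log P(t \mid d) \Big).
\]
```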

    Multimodal information spaces for content-based image retrieval

    Abstract. Image collections today are increasingly large and continue to grow constantly. Without the help of image search systems, these abundant visual records, collected in many different fields and domains, may remain unused and inaccessible. Many available image databases contain complementary modalities, such as attached text resources, which can be used to build an index for querying with keywords. However, sometimes users do not have or do not know the right words to express what they need, and keywords do not express all the visual variations that an image may contain. Using example images as queries can be viewed as an alternative in different scenarios, such as searching images using a mobile phone with a coupled camera, or supporting medical diagnosis by searching a large medical image collection. Still, matching only visual features between the query and the image database may lead to undesirable results from the user's perspective. These conditions make the process of finding relevant images for a specific information need very challenging, time consuming, or even frustrating. Instead of considering only a single data modality to build image search indexes, the simultaneous use of both the visual and text data modalities has been suggested. Non-visual information modalities may provide complementary information to enrich the image representation. The goal of this research work is to study the relationships between visual contents and text terms to build useful indexes for image search. A family of algorithms based on matrix factorization is proposed for extracting the multimodal aspects from an image collection. Using this knowledge about how visual features and text terms correlate, a search index is constructed, which can be searched using keywords, example images, or combinations of both. Systematic experiments were conducted on different data sets to evaluate the proposed indexing algorithms. The experimental results showed that multimodal indexing is an effective strategy for designing image search systems.
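
    The kind of multimodal index the abstract describes can be sketched with a simple matrix-factorization scheme: factor the stacked visual/text matrix once, then answer keyword queries, example-image queries, or mixed queries in the shared latent space. A hedged sketch, not the dissertation's algorithms; data, dimensions, and the projection step are all simplifications:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)

# Hypothetical data: rows are images; V holds visual features (e.g. a
# bag of visual words), T holds text-term counts from attached resources.
n_images, n_vwords, n_terms, k = 1000, 256, 2000, 40
V = rng.random((n_images, n_vwords))
T = rng.random((n_images, n_terms))

# Factor the stacked multimodal matrix [V | T] ~ W H. Each latent
# dimension then couples visual words with text terms, and the rows of
# W give every image a multimodal code: the search index.
model = NMF(n_components=k, init='nndsvda', max_iter=400)
W = normalize(model.fit_transform(np.hstack([V, T])))
H_visual = model.components_[:, :n_vwords]
H_text = model.components_[:, n_vwords:]

def search(visual_q=None, text_q=None, alpha=0.5, top=10):
    """Project a visual query, a keyword query, or a mix into the latent
    space and rank images by cosine similarity of their codes. Projecting
    through H.T is a crude shortcut; a proper approach would solve a
    nonnegative least-squares problem against H."""
    code = np.zeros(k)
    if visual_q is not None:
        code += alpha * (visual_q @ H_visual.T)
    if text_q is not None:
        code += (1 - alpha) * (text_q @ H_text.T)
    code /= np.linalg.norm(code) + 1e-12
    return np.argsort(-(W @ code))[:top]
```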

    Modelling Digital Media Objects
