5 research outputs found

    Online sketch-based image retrieval using keyshape mining of geometrical objects

    Online image retrieval has become an active form of information sharing due to the massive use of the Internet. The key challenges are the semantic gap between low-level visual features and high-level perception and interpretation, which stems from the complexity of images; the hand-drawn query input, which is not a regular input representation; and the huge volume of web images. State-of-the-art research therefore seeks to combine multiple types of feature representations to close the semantic gap. This study developed a new schema to retrieve images directly from a web repository. It comprises three major phases. First, a new online input representation based on pixel mining was designed to detect sketch shape features and correlate them with the semantic meaning of the sketched objects. Second, a training process was developed to obtain common templates using the Singular Value Decomposition (SVD) technique; the outcome of this step is a dictionary of sketch templates. Lastly, the retrieval phase matches the sketch against the image repository using metadata annotation to retrieve the most relevant images. The sequence of processes converts the drawn input sketch into a string that encodes the sketch object elements; the string is then matched against the templates dictionary to determine the sketch metadata name, and this name is sent to the web repository to retrieve the relevant images. A series of experiments evaluated the schema against the state of the art in the literature on the same datasets, comprising one million images from FlickerIm and 0.2 million images from ImageNet. The schema achieved 100% precision over the first five retrieved images in all cases, whereas the state of the art achieved only 88.8%. The schema also addresses many low-level feature obstacles, such as imperfect sketches, rotation, transposition and scaling, by using high-level semantics to retrieve accurate images from large databases and the web.
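    A minimal sketch of the SVD template step, assuming each training sketch has already been reduced to a fixed-length shape-feature vector; the function names, feature sizes and class names below are illustrative, not taken from the paper.

import numpy as np

def class_template(class_features):
    """class_features: (n_sketches, n_features) mined shape features of one object class.
    The first right singular vector captures the pattern shared by the class's sketches."""
    _, _, vt = np.linalg.svd(class_features, full_matrices=False)
    return vt[0]

def match_sketch(sketch_vector, template_dict):
    """Return the metadata name whose template is most similar to the query sketch."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(template_dict, key=lambda name: cosine(template_dict[name], sketch_vector))

# Example with random stand-in data: one SVD template per object class.
rng = np.random.default_rng(0)
templates = {name: class_template(rng.random((50, 64))) for name in ["car", "cup", "tree"]}
predicted_name = match_sketch(rng.random(64), templates)  # name sent to the web repository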

    Fast vocabulary acquisition in an NMF-based self-learning vocal user interface

    Abstract. In command-and-control applications, a vocal user interface (VUI) is useful for hands-free control of various devices, especially for people with a physical disability. The spoken utterances are usually restricted to a predefined list of phrases or to a restricted grammar, and the acoustic models work well for normal speech. While some state-of-the-art methods allow for user adaptation of the predefined acoustic models and lexicons, we pursue a fully adaptive VUI by learning both vocabulary and acoustics directly from interaction examples. A learning curve usually has a steep rise in the beginning and an asymptotic ceiling at the end. To limit tutoring time and to guarantee good performance in the long run, the word learning rate of the VUI should be fast and the learning curve should level off at a high accuracy. To address these performance indicators, we propose a multi-level VUI architecture and investigate the effectiveness of alternative processing schemes. In the low-level layer, we explore the use of MIDA (Mutual Information Discrimination Analysis) features against conventional MFCC features. In the mid-level layer, we enhance the acoustic representation by means of phone posteriorgrams and clustering procedures. In the high-level layer, we use the NMF (Non-negative Matrix Factorization) procedure, which has been demonstrated to be an effective approach for word learning. We evaluate and discuss the performance and the feasibility of our approach in a realistic experimental setting of the VUI-user learning context.
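    As a rough illustration of the high-level NMF layer, the sketch below factorizes utterance-level acoustic statistics stacked with semantic label indicators and decodes a new utterance through the acoustic part of the learned dictionary. This is a hypothetical simplification under assumed data shapes, not the authors' implementation; all names are illustrative.

import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_utt, n_acoustic, n_labels, k = 200, 300, 10, 20

acoustic = rng.random((n_utt, n_acoustic))                    # e.g. posteriorgram statistics per utterance
labels = rng.integers(0, 2, (n_utt, n_labels)).astype(float)  # which vocabulary items occur in each utterance

model = NMF(n_components=k, init="nndsvda", max_iter=500)
H = model.fit_transform(np.hstack([acoustic, labels]))        # per-utterance activations
W_acoustic = model.components_[:, :n_acoustic]                # acoustic part of the learned dictionary
W_labels = model.components_[:, n_acoustic:]                  # semantic part of the learned dictionary

def decode(utterance_features):
    """Infer vocabulary-item scores for an utterance observed without labels."""
    h, _ = nnls(W_acoustic.T, utterance_features)             # non-negative activation estimate
    return h @ W_labels                                       # score per vocabulary item

scores = decode(rng.random(n_acoustic))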

    Multimodal information spaces for content-based image retrieval

    Abstract. Image collections today are increasingly large, and they continue to grow constantly. Without the help of image search systems, these abundant visual records, collected in many different fields and domains, may remain unused and inaccessible. Many available image databases contain complementary modalities, such as attached text resources, which can be used to build an index for querying with keywords. However, users sometimes do not have or do not know the right words to express what they need, and keywords do not express all the visual variations that an image may contain. Using example images as queries is an alternative in scenarios such as searching for images with a camera-equipped mobile phone or supporting medical diagnosis by searching a large medical image collection. Still, matching only visual features between the query and the image database may lead to undesirable results from the user's perspective. These conditions make the process of finding relevant images for a specific information need very challenging, time consuming or even frustrating. Instead of considering only a single data modality to build image search indexes, the simultaneous use of both visual and text data modalities has been suggested. Non-visual modalities may provide complementary information that enriches the image representation. The goal of this research is to study the relationships between visual contents and text terms in order to build useful indexes for image search. A family of algorithms based on matrix factorization is proposed for extracting the multimodal aspects of an image collection. Using this knowledge about how visual features and text terms correlate, a search index is constructed that can be queried with keywords, example images or combinations of both. Systematic experiments were conducted on different data sets to evaluate the proposed indexing algorithms. The experimental results show that multimodal indexing is an effective strategy for designing image search systems.
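    As a rough, hypothetical illustration of such a factorized multimodal index (not the specific algorithms proposed in this work), the sketch below jointly factorizes visual features and text-term weights with NMF and answers both keyword and example-image queries in the shared latent space; all data shapes and names are assumed.

import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_imgs, n_visual, n_terms, k = 500, 128, 50, 16

visual = rng.random((n_imgs, n_visual))   # e.g. bag-of-visual-words histograms
text = rng.random((n_imgs, n_terms))      # term weights from attached text

model = NMF(n_components=k, init="nndsvda", max_iter=400)
index = model.fit_transform(np.hstack([visual, text]))  # latent multimodal index, one row per image
W_vis = model.components_[:, :n_visual]
W_txt = model.components_[:, n_visual:]

def project(query, basis):
    """Non-negative projection of a single-modality query into the latent space."""
    h, _ = nnls(basis.T, query)
    return h

def rank(query_latent, top=5):
    sims = index @ query_latent / (
        np.linalg.norm(index, axis=1) * np.linalg.norm(query_latent) + 1e-12)
    return np.argsort(-sims)[:top]        # indices of the best-matching images

keyword_hits = rank(project(rng.random(n_terms), W_txt))   # keyword-style query
example_hits = rank(project(rng.random(n_visual), W_vis))  # query-by-example image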

    Deep Hypernetworks for Dynamic Multimodal Data Learning

    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2015. Advisor: Byoung-Tak Zhang. Recent advancements in information communication technology have led to an explosive increase of data. In contrast to traditional data, which are structured and unimodal, recent data generated from dynamic environments are characterized by high dimensionality, multimodality and a lack of structure, as well as huge scale. Learning from non-stationary multimodal data is essential for solving many difficult problems in artificial intelligence. However, despite many successful reports, existing machine learning methods have mainly focused on practical problems represented by large-scale but static databases, such as image classification, tagging and retrieval. Hypernetworks are a probabilistic graphical model representing an empirical distribution with a hypergraph structure, that is, a large collection of hyperedges encoding the associations among variables. This representation makes the model suitable for characterizing the complex relationships between features with a population of building blocks. However, since a hypernetwork spans a huge combinatorial feature space, the model requires a large number of hyperedges to handle multimodal large-scale data and thus faces a scalability problem. In this dissertation, we propose a deep architecture of hypernetworks, deep hypernetworks, for dealing with this scalability issue when learning from multimodal data with non-stationary properties such as videos. Deep hypernetworks handle the issue through abstraction at multiple levels using a hierarchy of multiple hypergraphs. We use a stochastic method based on Monte-Carlo simulation, a graph MC, to efficiently construct hypergraphs representing the empirical distribution of the observed data. The structure of a deep hypernetwork continuously changes as learning proceeds, and this flexibility contrasts with other deep learning models. The proposed model learns incrementally from the data and thus handles non-stationary properties such as concept drift. The abstract representations in the learned models serve as multimodal knowledge about the data, which is used for content-aware crossmodal transformation, including vision-language conversion. We view vision-language conversion as machine translation and thus formulate vision-language translation in terms of statistical machine translation. Since knowledge about the video stories is used for translation, we call this story-aware vision-language translation. We evaluate deep hypernetworks on large-scale vision-language multimodal data, including benchmark datasets and cartoon video series. The experimental results show that deep hypernetworks effectively represent visual-linguistic information abstracted at multiple levels of the data contents, as well as the associations between vision and language. We explain how the introduction of a hierarchy deals with the scalability and non-stationarity issues. In addition, we present story-aware vision-language translation on cartoon videos by generating scene images from sentences and descriptive subtitles from scene images.
Furthermore, we discuss the meaning of our model for lifelong learning and directions for improvement toward achieving human-level artificial intelligence.
Contents:
1 Introduction: 1.1 Background and Motivation; 1.2 Problems to be Addressed; 1.3 The Proposed Approach and its Contribution; 1.4 Organization of the Dissertation
2 Related Work: 2.1 Multimodal Learning; 2.2 Models for Learning from Multimodal Data (2.2.1 Topic Model-Based Multimodal Learning; 2.2.2 Deep Network-Based Multimodal Learning); 2.3 Higher-Order Graphical Models (2.3.1 Hypernetwork Models; 2.3.2 Bayesian Evolutionary Learning of Hypernetworks)
3 Multimodal Hypernetworks for Text-to-Image Retrievals: 3.1 Overview; 3.2 Hypernetworks for Multimodal Associations (3.2.1 Multimodal Hypernetworks; 3.2.2 Incremental Learning of Multimodal Hypernetworks); 3.3 Text-to-Image Crossmodal Inference (3.3.1 Representation of Textual-Visual Data; 3.3.2 Text-to-Image Query Expansion); 3.4 Text-to-Image Retrieval via Multimodal Hypernetworks (3.4.1 Data and Experimental Settings; 3.4.2 Text-to-Image Retrieval Performance; 3.4.3 Incremental Learning for Text-to-Image Retrieval); 3.5 Summary
4 Deep Hypernetworks for Multimodal Concept Learning from Cartoon Videos: 4.1 Overview; 4.2 Visual-Linguistic Concept Representation of Cartoon Videos; 4.3 Deep Hypernetworks for Modeling Visual-Linguistic Concepts (4.3.1 Sparse Population Coding; 4.3.2 Deep Hypernetworks for Concept Hierarchies; 4.3.3 Implication of Deep Hypernetworks on Cognitive Modeling); 4.4 Learning of Deep Hypernetworks (4.4.1 Problem Space of Deep Hypernetworks; 4.4.2 Graph Monte-Carlo Simulation; 4.4.3 Learning of Concept Layers; 4.4.4 Incremental Concept Construction); 4.5 Incremental Concept Construction from Cartoon Videos (4.5.1 Data Description and Parameter Setup; 4.5.2 Concept Representation and Development; 4.5.3 Character Classification via Concept Learning; 4.5.4 Vision-Language Conversion via Concept Learning); 4.6 Summary
5 Story-aware Vision-Language Translation using Deep Concept Hierarchies: 5.1 Overview; 5.2 Vision-Language Conversion as a Machine Translation (5.2.1 Statistical Machine Translation; 5.2.2 Vision-Language Translation); 5.3 Story-aware Vision-Language Translation using Deep Concept Hierarchies (5.3.1 Story-aware Vision-Language Translation; 5.3.2 Vision-to-Language Translation; 5.3.3 Language-to-Vision Translation); 5.4 Story-aware Vision-Language Translation on Cartoon Videos (5.4.1 Data and Experimental Setting; 5.4.2 Scene-to-Sentence Generation; 5.4.3 Sentence-to-Scene Generation; 5.4.4 Visual-Linguistic Story Summarization of Cartoon Videos); 5.5 Summary
6 Concluding Remarks: 6.1 Summary of the Dissertation; 6.2 Directions for Further Research
Bibliography
Abstract in Korean
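    As a toy illustration of the hyperedge idea underlying hypernetworks (not the deep, hierarchical model of this dissertation), the sketch below samples small random feature combinations from binary multimodal codes and classifies a query by counting matching stored hyperedges; all names and shapes are assumed.

import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def sample_hyperedges(example, label, order=3, n_samples=50):
    """Monte-Carlo sampling of order-k hyperedges from one binary feature vector."""
    active = np.flatnonzero(example)                  # indices of active features
    edges = []
    for _ in range(n_samples):
        if len(active) >= order:
            combo = tuple(sorted(rng.choice(active, size=order, replace=False)))
            edges.append((combo, label))
    return edges

def train(X, y):
    library = Counter()                               # hyperedge library; counts act as weights
    for example, label in zip(X, y):
        library.update(sample_hyperedges(example, label))
    return library

def classify(library, example, labels):
    active = set(np.flatnonzero(example))
    scores = {l: 0 for l in labels}
    for (combo, label), weight in library.items():
        if set(combo) <= active:                      # hyperedge fully matched by the query
            scores[label] += weight
    return max(scores, key=scores.get)

X = (rng.random((100, 40)) > 0.7).astype(int)         # stand-in binary multimodal codes
y = rng.integers(0, 2, 100)                           # stand-in class labels
prediction = classify(train(X, y), X[0], labels=[0, 1])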

    An inter-domain supervision framework for collaborative clustering of data with mixed types.

    We propose an Inter-Domain Supervision (IDS) clustering framework to discover clusters within diverse data formats, mixed-type attributes and different sources of data. This approach can be used for combined clustering of diverse representations of the data, in particular where the data come from different sources, some of which may be unreliable or uncertain, or for exploiting optional external concept-set labels to guide the clustering of the main data set in its original domain. We additionally take into account possible incompatibilities in the data via an automated inter-domain compatibility analysis. Our results in clustering real data sets with mixed numerical, categorical, visual and text attributes show that the proposed IDS clustering framework gives improved clustering results compared to conventional methods over a wide range of parameters. Thus the knowledge automatically extracted from clustering one domain, in the form of seeds or constraints, can guide the clustering in another domain. Additional empirical evaluations further show that our approach, especially when using selective mutual guidance between domains, outperforms common baselines such as clustering either domain on its own or clustering all domains converted to a single target domain. Our approach also outperforms other specialized multiple-clustering methods, such as fully independent ensemble clustering and tightly coupled multiview clustering, after they were adapted to the task of clustering mixed data. Finally, we present a real-life application of our IDS approach to the cluster-based automated image annotation problem and report evaluation results on a benchmark data set consisting of images described by their visual content along with noisy text descriptions generated by users on the social media sharing website Flickr.
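    A minimal sketch of the cross-domain seeding idea described above, assuming two feature matrices over the same items; all sizes and names are illustrative, and the published framework's constraints and compatibility analysis are omitted here.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n, k = 300, 4
text_features = rng.random((n, 100))    # e.g. tf-idf of noisy text descriptions
visual_features = rng.random((n, 64))   # e.g. color/texture descriptors of the same items

# Step 1: cluster the guiding (text) domain.
text_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(text_features)

# Step 2: turn those assignments into seed centroids in the visual domain
# (assumes every text cluster is non-empty).
seeds = np.vstack([visual_features[text_labels == c].mean(axis=0) for c in range(k)])

# Step 3: cluster the target (visual) domain starting from the cross-domain seeds.
visual_labels = KMeans(n_clusters=k, init=seeds, n_init=1, random_state=0).fit_predict(visual_features)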