
    Knowledge-rich Image Gist Understanding Beyond Literal Meaning

    We investigate the problem of understanding the message (gist) conveyed by images and their captions as found, for instance, on websites or in news articles. To this end, we propose a methodology to capture the meaning of image-caption pairs on the basis of large amounts of machine-readable knowledge that has previously been shown to be highly effective for text understanding. Our method identifies the connotation of objects beyond their denotation: where most approaches to image understanding focus on the denotation of objects, i.e., their literal meaning, our work addresses the identification of connotations, i.e., iconic meanings of objects, to understand the message of images. We view image understanding as the task of representing an image-caption pair on the basis of a wide-coverage vocabulary of concepts such as the one provided by Wikipedia, and cast gist detection as a concept-ranking problem with image-caption pairs as queries. To enable a thorough investigation of the problem of gist understanding, we produce a gold standard of over 300 image-caption pairs and over 8,000 gist annotations covering a wide variety of topics at different levels of abstraction. We use this dataset to experimentally benchmark the contribution of signals from heterogeneous sources, namely image and text. Our best result, a Mean Average Precision (MAP) of 0.69, indicates that by combining both dimensions we are able to understand the meaning of our image-caption pairs better than when using language or vision information alone. We test the robustness of our gist detection approach when it receives automatically generated input, i.e., automatically generated image tags or captions, and demonstrate the feasibility of an end-to-end automated process.
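
    As a concrete illustration of the evaluation metric used above, here is a minimal sketch of Mean Average Precision over concept rankings. The gist concepts and gold annotations are hypothetical stand-ins for the actual dataset, and the function names are ours, not the authors' code:

        def average_precision(ranked_concepts, gold_concepts):
            """AP for one image-caption query: mean of the precision values
            at each rank where a gold-standard gist concept appears."""
            hits, score = 0, 0.0
            for rank, concept in enumerate(ranked_concepts, start=1):
                if concept in gold_concepts:
                    hits += 1
                    score += hits / rank  # precision at this rank
            return score / len(gold_concepts) if gold_concepts else 0.0

        def mean_average_precision(runs):
            """MAP over all queries; `runs` pairs each ranking with its gold set."""
            return sum(average_precision(r, g) for r, g in runs) / len(runs)

        # Two toy image-caption queries ranked against a Wikipedia-style
        # concept vocabulary (hypothetical concepts and annotations).
        runs = [
            (["Peace", "Dove", "War"], {"Peace", "Dove"}),
            (["Economy", "Crisis", "Bank"], {"Crisis"}),
        ]
        print(mean_average_precision(runs))  # 0.75 on this toy input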

    Webly Supervised Learning of Convolutional Networks

    We present an approach that utilizes large amounts of web data for learning CNNs. Specifically, inspired by curriculum learning, we present a two-step approach to CNN training. First, we use easy images to train an initial visual representation. We then adapt this initial CNN to harder, more realistic images by leveraging the structure of the data and categories. We demonstrate that our two-stage CNN outperforms a fine-tuned CNN trained on ImageNet on Pascal VOC 2012. We also demonstrate the strength of webly supervised learning by localizing objects in web images and training an R-CNN-style detector, which achieves the best performance on VOC 2007 among approaches that use no VOC training data. Finally, we show that our approach is quite robust to noise and performs comparably even when we use image search results from March 2013 (the pre-CNN image-search era).
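
    The two-stage curriculum can be sketched as follows. This is a minimal, hypothetical illustration in PyTorch, not the authors' implementation: random tensors stand in for the easy and hard web images, and the tiny network stands in for the CNN:

        import torch
        from torch import nn, optim
        from torch.utils.data import DataLoader, TensorDataset

        def train_stage(model, loader, epochs, lr):
            """One curriculum stage: plain cross-entropy training."""
            opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
            loss_fn = nn.CrossEntropyLoss()
            model.train()
            for _ in range(epochs):
                for x, y in loader:
                    opt.zero_grad()
                    loss_fn(model(x), y).backward()
                    opt.step()

        # Random tensors stand in for "easy" web images (clean search
        # results) and "hard" ones (cluttered, realistic scenes).
        easy = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
        hard = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))

        model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(),
                              nn.Linear(8 * 30 * 30, 10))

        # Stage 1: train an initial representation on the easy images.
        train_stage(model, DataLoader(easy, batch_size=16, shuffle=True), epochs=2, lr=0.01)
        # Stage 2: adapt the same network to the harder images, at a lower
        # learning rate so the stage-1 representation is refined, not overwritten.
        train_stage(model, DataLoader(hard, batch_size=16, shuffle=True), epochs=2, lr=0.001)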

    A Survey of Volunteered Open Geo-Knowledge Bases in the Semantic Web

    Over the past decade, rapid advances in web technologies, coupled with innovative models of spatial data collection and consumption, have generated robust growth in geo-referenced information, resulting in spatial information overload. Increasing 'geographic intelligence' in traditional text-based information retrieval has become a prominent approach to responding to this issue and to fulfilling users' spatial information needs. Numerous efforts in the Semantic Geospatial Web, Volunteered Geographic Information (VGI), and the Linking Open Data initiative have converged in a constellation of open knowledge bases, freely available online. In this article, we survey these open knowledge bases, focusing on their geospatial dimension. Particular attention is devoted to the crucial issue of the quality of geo-knowledge bases, as well as of crowdsourced data. A new knowledge base, the OpenStreetMap Semantic Network, is outlined as our contribution to this area. Research directions in information integration and Geographic Information Retrieval (GIR) are then reviewed, with a critical discussion of their current limitations and future prospects.

    Methods of Hierarchical Clustering

    We survey agglomerative hierarchical clustering algorithms and discuss efficient implementations that are available in R and other software environments. We look at hierarchical self-organizing maps and mixture models. We review grid-based clustering, focusing on hierarchical density-based approaches. Finally, we describe a recently developed, very efficient (linear-time) hierarchical clustering algorithm, which can also be viewed as a hierarchical grid-based algorithm.
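
    As a pointer to the kind of implementations the survey discusses, here is a minimal agglomerative-clustering sketch using SciPy, a Python counterpart to the R routines mentioned above; the data and parameter choices are illustrative only:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # Toy data: three loose groups of 2-D points.
        rng = np.random.default_rng(0)
        points = np.vstack([rng.normal(loc, 0.3, size=(20, 2)) for loc in (0, 3, 6)])

        # Agglomerative clustering with Ward's criterion, one of the classical
        # linkage methods (others: 'single', 'complete', 'average').
        Z = linkage(points, method='ward')

        # Cut the resulting dendrogram into three flat clusters.
        labels = fcluster(Z, t=3, criterion='maxclust')
        print(labels)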

    Query expansion by relying on the structure of knowledge bases

    Query expansion techniques aim at improving the results of a user's query by introducing new expansion terms, called expansion features. Expansion features introduce new concepts that are semantically related to the concepts in the user's query and that allow retrieving documents that otherwise would not be retrieved. The challenge is thus to select those expansion features that improve the results the most; a bad choice of expansion features may be counterproductive. In this thesis, we use an external source of information, a Knowledge Base (KB), as a source of expansion features. A knowledge base consists of a set of entries, each of which represents a concept and has at least a name, which can be used as an expansion feature. Techniques in this family have become more popular due to the increase in available data, for example, Wikipedia. In particular, we focus on exploiting those KBs whose entries are linked to each other, forming a graph of entries. To the best of our knowledge, most techniques in the KB family rely on some kind of text analysis, such as explicit semantic analysis, or are based on other existing query expansion techniques such as pseudo-relevance feedback; the underlying network structure of KBs has barely been exploited. In this thesis, we show that this structure can be used to identify reliable expansion features for the query expansion process, and we design a novel expansion technique, Structural Query Expansion (SQE). For SQE to benefit from the particular structure of a KB, we propose a methodology to identify the structural characteristics that, given a query, allow identifying those nodes in the KB that are good candidates to be used as sources of expansion features, called expansion nodes from now on. The methodology consists of building a ground truth that connects each query from a query set with those nodes of the KB that, when used to extract the expansion features, achieve the best results in terms of precision; we call the set of those nodes the expansion query graph. We then compare the expansion query graphs of the queries to find shared characteristics. SQE materializes the revealed characteristics into a set of structural motifs. In the particular case of Wikipedia, we have found two motifs, called triangular and square. In the former, the query node and the expansion node are doubly linked and the expansion node belongs to at least the same categories as the query node. In the latter, the query node and the expansion node are also doubly linked and their categories are connected. These motifs are used, given a query and its query nodes, to identify all the expansion nodes, which are then used as sources of expansion features. Note that we have designed this technique to be orthogonal to others because it is fully decoupled from the search process and does not depend on the particular collection of documents. We have tested our technique with three different datasets to avoid any kind of overfitting, and the results are consistent among the three of them. The results, which are validated with statistical significance tests, show that SQE achieves up to a 150% improvement in precision. Finally, we show that our technique runs in sub-second times (358.23 ms at maximum), which makes it feasible for a real query expansion system. This is especially relevant because, to the best of our knowledge, performance is an aspect that is ignored in most works and, thus, it is difficult to know whether they can be included in real systems.
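
    The triangular motif can be sketched as a simple structural check on a link graph. The following is a hypothetical illustration using networkx, with a toy link graph and category assignment standing in for Wikipedia; it is not the thesis implementation:

        import networkx as nx

        # Hypothetical mini Wikipedia link graph: directed article-to-article
        # links, plus a set of categories per article.
        links = nx.DiGraph([("Jaguar", "Felidae"), ("Felidae", "Jaguar"),
                            ("Jaguar", "Land Rover"), ("Land Rover", "Jaguar")])
        categories = {"Jaguar": {"Felines"}, "Felidae": {"Felines"},
                      "Land Rover": {"Car manufacturers"}}

        def triangular_expansion_nodes(graph, cats, query_node):
            """Triangular motif as described above: candidate and query node
            are doubly linked, and the candidate belongs to at least the
            query node's categories."""
            out = []
            for cand in graph.successors(query_node):
                doubly_linked = graph.has_edge(cand, query_node)
                if doubly_linked and cats[query_node] <= cats.get(cand, set()):
                    out.append(cand)
            return out

        print(triangular_expansion_nodes(links, categories, "Jaguar"))
        # -> ['Felidae']: doubly linked and shares the 'Felines' category;
        #    'Land Rover' is doubly linked but fails the category test.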

    Entity-Oriented Search

    This open access book covers all facets of entity-oriented search (where “search” can be interpreted in the broadest sense of information access) from a unified point of view, and provides a coherent and comprehensive overview of the state of the art. It represents the first synthesis of research in this broad and rapidly developing area. Selected topics are discussed in depth, with the goal of establishing fundamental techniques and methods as a basis for future research and development. Additional topics are treated at a survey level only, with numerous pointers to the relevant literature. A roadmap for future research, based on open issues and challenges identified along the way, rounds out the book. The book is divided into three main parts, sandwiched between introductory and concluding chapters. The first two chapters introduce readers to the basic concepts, provide an overview of entity-oriented search tasks, and present the various types and sources of data that are used throughout the book. Part I deals with the core task of entity ranking: given a textual query, possibly enriched with additional elements or structural hints, return a ranked list of entities. This core task is examined in a number of different variants, using both structured and unstructured data collections and numerous query formulations. Part II is devoted to the role of entities in bridging unstructured and structured data. Part III explores how entities can enable search engines to understand the concepts, meaning, and intent behind the query that the user enters into the search box, and how they can provide rich and focused responses (as opposed to merely a list of documents), a process known as semantic search. The final chapter concludes the book by discussing the limitations of current approaches and suggesting directions for future research. Researchers and graduate students are the primary target audience of this book. A general background in information retrieval is sufficient to follow the material, including an understanding of basic probability and statistics concepts as well as a basic knowledge of machine learning concepts and supervised learning algorithms.
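
    The core entity-ranking task described above (textual query in, ranked list of entities out) can be illustrated with a deliberately simplified toy scorer over a hypothetical entity catalog; real systems use the retrieval models the book covers, such as BM25 or language models:

        from collections import Counter

        # Hypothetical entity catalog: entity name -> short textual description.
        entities = {
            "Ann Dunham": "anthropologist mother of barack obama",
            "Barack Obama": "44th president of the united states",
            "Chicago": "city in illinois united states",
        }

        def score(query, description):
            """Toy relevance score: term-frequency overlap between the query
            and the entity description."""
            terms = Counter(description.lower().split())
            return sum(terms[t] for t in query.lower().split())

        def rank_entities(query):
            """Entity ranking: sort the catalog by descending query score."""
            return sorted(entities, key=lambda e: score(query, entities[e]), reverse=True)

        print(rank_entities("president of the united states"))
        # -> ['Barack Obama', 'Chicago', 'Ann Dunham'] on this toy catalog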