
    Exploring Topic-based Language Models for Effective Web Information Retrieval

    The main obstacle to providing focused search is the relative opaqueness of search requests -- searchers tend to express their complex information needs in only a couple of keywords. Our overall aim is to find out if, and how, topic-based language models can lead to more effective web information retrieval. In this paper we explore the retrieval performance of a topic-based model that combines topical models with other language models based on cross-entropy. We first define our topical categories and train our topical models on the .GOV2 corpus by building parsimonious language models. We then test the topic-based model on the TREC-8 small Web data collection for ad-hoc search. Our experimental results show that the topic-based model outperforms both the standard language model and the parsimonious model.
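
    As a rough illustration of the kind of cross-entropy mixture this abstract describes, the sketch below scores a document by mixing a document language model with a topical model and a collection background model. The mixture weights, the background floor and the maximum-likelihood unigram estimates are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): score a document by the
# cross-entropy between a query model and a topic-smoothed document model.
import math
from collections import Counter

def unigram_lm(tokens):
    """Maximum-likelihood unigram language model from a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def topic_smoothed_prob(word, doc_lm, topic_lm, coll_lm,
                        lambda_doc=0.6, lambda_topic=0.3):
    """Mix document, topic, and collection models; the weights are illustrative."""
    lambda_coll = 1.0 - lambda_doc - lambda_topic
    return (lambda_doc * doc_lm.get(word, 0.0)
            + lambda_topic * topic_lm.get(word, 0.0)
            + lambda_coll * coll_lm.get(word, 1e-9))

def cross_entropy_score(query_tokens, doc_lm, topic_lm, coll_lm):
    """Negative cross-entropy of the query under the mixed model (higher is better)."""
    query_lm = unigram_lm(query_tokens)
    return sum(p_q * math.log(topic_smoothed_prob(w, doc_lm, topic_lm, coll_lm))
               for w, p_q in query_lm.items())
```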

    Conceptual Spaces in Object-Oriented Framework

    The aim of this paper is to show that the middle level of mental representations in the conceptual spaces framework is consistent with the OOP paradigm. We argue that the conceptual spaces framework, together with a vague prototype theory of categorization, appears to be the most suitable solution for modeling the cognitive apparatus of humans, and that the OOP paradigm can be easily and intuitively reconciled with this framework. First, we show that the prototype-based OOP approach is consistent with Gärdenfors’ model in terms of structural coherence. Second, we argue that the product of the cloning process in a prototype-based model is in line with the structure of categories in Gärdenfors’ proposal. Finally, in order to make the fuzzy object-oriented model consistent with conceptual spaces, we demonstrate how to define the membership function in a more cognitive manner, i.e. in terms of similarity to a prototype.
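
    A minimal sketch of a prototype-based membership function of the kind the abstract gestures at: membership in a category as similarity to its prototype within a conceptual space. The exponential-decay similarity and the sensitivity parameter c are common modelling assumptions, not the authors' definitions.

```python
# Illustrative sketch (not from the paper): fuzzy membership of a point in a
# prototype-based category, defined as similarity to the prototype.
import math

def membership(point, prototype, c=1.0):
    """Degree of membership in [0, 1]: exp(-c * Euclidean distance to the prototype)."""
    distance = math.dist(point, prototype)
    return math.exp(-c * distance)

# Example: how "red" an orange-like colour is, given a red prototype
# in a toy 3-dimensional colour space (values are hypothetical).
red_prototype = (1.0, 0.0, 0.0)
orange_sample = (1.0, 0.5, 0.0)
print(membership(orange_sample, red_prototype))  # ~0.61
```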

    Interactive Search and Exploration in Online Discussion Forums Using Multimodal Embeddings

    In this paper we present a novel interactive multimodal learning system, which facilitates search and exploration in large networks of social multimedia users. It allows the analyst to identify and select users of interest, and to find similar users in an interactive learning setting. Our approach is based on novel multimodal representations of users, words and concepts, which we learn simultaneously by deploying a general-purpose neural embedding model. We show these representations to be useful not only for categorizing users, but also for automatically generating user and community profiles. Inspired by traditional summarization approaches, we create the profiles by selecting diverse and representative content from all available modalities, i.e. the text, image and user modalities. The usefulness of the approach is evaluated using artificial actors, which simulate user behavior in a relevance feedback scenario. Multiple experiments were conducted in order to evaluate the quality of our multimodal representations, to compare different embedding strategies, and to determine the importance of the different modalities. We demonstrate the capabilities of the proposed approach on two multimedia collections originating from the violent online extremism forum Stormfront and the microblogging platform Twitter, which are particularly interesting due to the high semantic level of the discussions they feature.
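
    The abstract does not give implementation details, but the interactive relevance-feedback loop it describes can be sketched as ranking users by cosine similarity in a shared embedding space and nudging the query vector toward users the analyst marks as relevant. The 64-dimensional toy embeddings and the Rocchio-style update weights (alpha, beta, gamma) below are illustrative assumptions, not the system's actual parameters.

```python
# Illustrative sketch (not the system described above): rank users by cosine
# similarity of their embeddings and update the query from relevance feedback.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_users(query_vec, user_vecs):
    """Return user ids sorted by similarity to the current query vector."""
    return sorted(user_vecs, key=lambda u: cosine(query_vec, user_vecs[u]),
                  reverse=True)

def feedback_update(query_vec, relevant, non_relevant,
                    alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query toward relevant user vectors and away from non-relevant ones."""
    q = alpha * query_vec
    if relevant:
        q += beta * np.mean(relevant, axis=0)
    if non_relevant:
        q -= gamma * np.mean(non_relevant, axis=0)
    return q

# Toy usage with random 64-dimensional embeddings.
rng = np.random.default_rng(0)
users = {f"user{i}": rng.normal(size=64) for i in range(5)}
query = rng.normal(size=64)
print(rank_users(query, users)[:3])
```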