    Methods for ranking user-generated text streams: a case study in blog feed retrieval

    User-generated content is one of the main sources of information on the Web today. With the huge amount of such data being generated every day, an efficient and effective retrieval system is essential. The goal of such a retrieval system is to enable users to search through this data and retrieve documents relevant to their information needs. Among the different retrieval tasks over user-generated content, retrieving and ranking streams is an important one with various applications. The goal of this task is to rank streams, as collections of documents in chronological order, in response to a user query. This differs from traditional retrieval tasks, where the goal is to rank single documents and temporal properties are less important to the ranking. In this thesis we investigate the problem of ranking user-generated streams, with a case study in blog feed retrieval. Blogs, like all other user-generated streams, have specific properties and require new considerations in the retrieval methods. Blog feed retrieval can be defined as retrieving blogs with a recurrent interest in the topic of a given query. We define three properties of blog feed retrieval, each of which introduces new challenges in the ranking task: 1) term mismatch in blog retrieval, 2) evolution of topics in blogs, and 3) diversity of blog posts. For each of these properties, we investigate its corresponding challenges and propose solutions to overcome them. We further analyze the effect of our solutions on the performance of a retrieval system, and show that taking these properties into account when developing the retrieval system improves state-of-the-art retrieval methods. In all the proposed methods, we pay particular attention to temporal properties, which we believe are important in any type of stream. We show that, when combined with content-based information, temporal information can be useful in different situations. Although we apply our methods to blog feed retrieval, they are mostly general methods that are applicable to similar stream-ranking problems such as ranking experts or ranking Twitter users.
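
    As an illustration of the kind of combination the thesis argues for, the sketch below interpolates a content-based relevance score with a simple temporal score for a feed. It is a minimal sketch, not the thesis's actual models; the half-life decay and the interpolation weight lam are assumptions chosen for illustration.

        import math
        import time

        def recency_score(post_timestamps, now=None, half_life_days=30.0):
            """Exponential-decay recency: recent posts in the feed contribute more."""
            now = now if now is not None else time.time()
            decay = math.log(2) / (half_life_days * 86400)
            return sum(math.exp(-decay * (now - t)) for t in post_timestamps)

        def feed_score(content_score, post_timestamps, lam=0.7):
            """Interpolate content-based relevance with temporal evidence for a feed."""
            return lam * content_score + (1 - lam) * recency_score(post_timestamps)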

    Using language models in question answering

    In this thesis, we describe a language-model-based approach to parts of a complete Question Answering (QA) system, covering the processing of the natural language query as well as the retrieval of relevant documents, passages, and sentences. The results show that the language-model-based modules in our QA system perform as well as or better than current state-of-the-art systems. Due to the heavy use of fast statistical algorithms, the main advantage of our system is an efficiency gain over the slower deep-analysis linguistic methods used in other approaches. A second benefit of using language models is the ability to train them for new languages.
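
    For concreteness, a query-likelihood retrieval function of the kind such language-model-based modules typically build on is sketched below, with Dirichlet smoothing. This is a standard formulation, not necessarily the exact model used in the thesis.

        import math
        from collections import Counter

        def query_likelihood(query_terms, doc_terms, collection_tf, collection_len, mu=2000):
            """Rank a document by log P(q|d) under a Dirichlet-smoothed document
            language model: P(w|d) = (tf(w,d) + mu * P(w|C)) / (|d| + mu)."""
            tf = Counter(doc_terms)
            dlen = len(doc_terms)
            score = 0.0
            for w in query_terms:
                p_wc = collection_tf.get(w, 0) / collection_len  # collection model P(w|C)
                if p_wc == 0:
                    continue  # term unseen in the collection; skip it
                score += math.log((tf[w] + mu * p_wc) / (dlen + mu))
            return score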

    Increasing the Efficiency of High-Recall Information Retrieval

    The goal of high-recall information retrieval (HRIR) is to find all, or nearly all, relevant documents while maintaining reasonable assessment effort. Achieving high recall is a key problem in applications such as electronic discovery, systematic review, and the construction of test collections for information retrieval tasks. State-of-the-art HRIR systems commonly rely on iterative relevance feedback, in which human assessors continually assess machine-learning-selected documents. The relevance of the assessed documents is then fed back to the machine learning model to improve its ability to select the next set of potentially relevant documents for assessment. In many instances, thousands of human assessments may be required to achieve high recall, and these assessments represent the main cost of such HRIR applications. Their effectiveness in achieving high recall is therefore limited by their reliance on human input when assessing the relevance of documents. In this thesis, we test different methods to improve the effectiveness and efficiency of finding relevant documents using a state-of-the-art HRIR system. With regard to effectiveness, we try to build a machine-learned model that retrieves relevant documents more accurately. For efficiency, we try to help human assessors make relevance assessments more easily and quickly via our HRIR system. Furthermore, we try to establish a stopping criterion for the assessment process so as to avoid excessive assessment. In particular, we hypothesize that the total assessment effort to achieve high recall can be reduced by using shorter document excerpts (e.g., extractive summaries) in place of full documents for the assessment of relevance, together with a high-recall retrieval system based on continuous active learning (CAL). To test this hypothesis, we implemented a high-recall retrieval system based on a state-of-the-art implementation of CAL that can display either full documents or short document excerpts for relevance assessment. A search engine was also integrated into our system to give assessors the option of interactive search and judging. We conducted a simulation study and, separately, a 50-person controlled user study to test our hypothesis. The results of the simulation study show that judging even a single extracted sentence for relevance feedback may be adequate for CAL to achieve high recall. The results of the controlled user study confirmed that human assessors were able to find a significantly larger number of relevant documents within limited time when they used the system with paragraph-length document excerpts as opposed to full documents. In addition, we found that allowing participants to compose and execute their own search queries did not improve their ability to find relevant documents and, by some measures, impaired performance. Moreover, integrating sampling methods with active learning can yield accurate estimates of the number of relevant documents, and thus avoid excessive assessment.
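
    A minimal sketch of the CAL loop described above, assuming scikit-learn and a callable assessor; the thesis's actual implementation (feature choices, batch sizes, stopping rule) differs.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        def cal_loop(docs, seed_labels, assess, batch_size=10, max_rounds=100):
            """Continuous active learning: retrain on all judgments so far, then
            ask the assessor about the highest-scoring unjudged documents.
            `assess(i)` returns 1 (relevant) or 0, e.g. a human judging a short
            excerpt of docs[i] instead of the full text. `seed_labels` must
            contain at least one relevant and one non-relevant example."""
            X = TfidfVectorizer().fit_transform(docs)
            labels = dict(seed_labels)  # {doc_index: 0 or 1}
            for _ in range(max_rounds):
                judged = list(labels)
                clf = LogisticRegression(max_iter=1000).fit(
                    X[judged], [labels[i] for i in judged])
                scores = clf.predict_proba(X)[:, 1]
                unjudged = [i for i in range(len(docs)) if i not in labels]
                if not unjudged:
                    break
                batch = sorted(unjudged, key=lambda i: scores[i], reverse=True)[:batch_size]
                for i in batch:
                    labels[i] = assess(i)
            return labels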

    Modeling user information needs on mobile devices: from recommendation to conversation

    Recent advances in the development of mobile devices, equipped with multiple sensors, together with the availability of millions of applications, have made these devices more pervasive in our lives than ever. The availability of this diverse set of sensors, as well as high computational power, enables information retrieval (IR) systems to sense a user's context and personalize their results accordingly. Relevant studies show that people use their mobile devices to access information on a wide range of topics in various contextual situations, highlighting the fact that modeling user information needs on mobile devices involves studying several means of information access. In this thesis, we study three major aspects of information access on mobile devices. First, we focus on proactive approaches to modeling users for venue suggestion. We investigate three methods of user modeling, namely content-based, collaborative, and hybrid, focusing on personalization and context awareness. We propose a two-phase collaborative ranking algorithm that leverages users' implicit feedback while incorporating temporal and geographical information into the model. We then extend our collaborative model to include multiple cross-venue similarity scores and combine it with our content-based approach to produce a hybrid recommendation. Second, we introduce and investigate a new mobile search task: unified mobile search. We take the first step in defining, studying, and modeling this task by collecting two datasets and conducting experiments on one of the main components of unified mobile search frameworks, namely target apps selection, for which we propose two neural approaches. Finally, we address the conversational aspect of mobile search, where we propose an offline evaluation protocol and build a dataset for asking clarifying questions in conversational search. We also propose a retrieval framework consisting of three main components: question retrieval, question selection, and document retrieval. The experiments and analyses indicate that asking clarifying questions should be an essential part of a conversational system, resulting in large performance gains.
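
    A minimal sketch of the hybrid step, with hypothetical inputs: a content-based score computed from tag overlap is interpolated with a collaborative-ranking score assumed to be normalized to [0, 1]. The thesis's actual similarity scores and their combination are richer than this.

        def content_score(user_tags, venue_tags):
            """Content-based evidence: Jaccard overlap between the user's
            interest tags and the venue's tags (both Python sets)."""
            union = user_tags | venue_tags
            return len(user_tags & venue_tags) / len(union) if union else 0.0

        def hybrid_score(user_tags, venue_tags, cf_score, w_cf=0.5):
            """Interpolate collaborative and content-based evidence for a venue."""
            return w_cf * cf_score + (1 - w_cf) * content_score(user_tags, venue_tags)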

    Cross-lingual question answering

    Question Answering has become an intensively researched area in the last decade, seen as the next step beyond Information Retrieval in the attempt to provide more concise and better access to large volumes of available information. Question Answering builds on Information Retrieval technology for a first pass over possibly relevant data and uses further natural language processing techniques to search for candidate answers and to look for clues that accept or invalidate the candidates as correct answers to the question. Though most of the research has been carried out in monolingual settings, where the question and the answer-bearing documents share the same natural language, current approaches concentrate on cross-language scenarios, where the question and the documents are in different languages. Known in this context, and shared with Information Retrieval research, are three methods of crossing the language barrier: translating the question, translating the documents, or aligning both the question and the documents to a common interlingual representation. We present a cross-lingual English-to-German Question Answering system, for both factoid and definition questions, using a German monolingual system and translating the questions from English to German. Two different translation techniques are evaluated:
    • direct translation of the English input question into German, and
    • transfer-based translation, using an intermediate representation that captures the “meaning” of the original question and is translated into the target language.
    For both translation techniques, two types of translation tools are used: bilingual dictionaries and machine translation. The intermediate representation captures the semantic meaning of the question in terms of Question Type (QType), Expected Answer Type (EAType), and Focus, information that steers the workflow of the question answering process. The German monolingual Question Answering system can answer both factoid and definition questions and is based on several premises:
    • facts and definitions are usually expressed locally, at the level of a sentence and its surroundings;
    • proximity of concepts within a sentence can be related to their semantic dependency;
    • for factoid questions, redundancy of candidate answers is a good indicator of their suitability;
    • definitions of concepts are expressed using fixed linguistic structures such as appositions, modifiers, and abbreviation extensions.
    Extensive evaluations of the monolingual system have shown that the above hypotheses hold true in most cases when dealing with a fairly large collection of documents, such as the one used in the CLEF evaluation forum.
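
    The direct-translation strategy and the intermediate representation can be sketched as below; `translate`, `analyze`, and `monolingual_qa_de` are hypothetical callables standing in for the MT system, the question analysis, and the German monolingual QA backend.

        from dataclasses import dataclass

        @dataclass
        class QuestionRepresentation:
            """Interlingual analysis of a question."""
            qtype: str   # question type, e.g. "WHO" or "DEFINITION"
            eatype: str  # expected answer type, e.g. "PERSON" or "DATE"
            focus: str   # the entity or concept the question is about

        def crosslingual_qa(question_en, translate, analyze, monolingual_qa_de):
            """Cross the language barrier by translating the question, then run
            the monolingual German QA system, steered by QType/EAType/Focus."""
            question_de = translate(question_en, src="en", tgt="de")
            rep = analyze(question_de)  # -> QuestionRepresentation
            return monolingual_qa_de(question_de, rep)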

    Language modeling approaches to question answering

    In today’s environment of information overload, Question Answering (QA) is a critically important research area. QA is the task of automatically extracting a precise answer from one or more data sources to a question posed in natural language. A two-stage strategy is typically adopted when designing a QA system: the first stage is an Information Retrieval (IR) process that returns a set of candidate documents relevant to the question, and the second stage narrows the information contained in those documents down to a single response (sentence or entity) that answers the question, typically using Information Extraction (IE) or Natural Language Processing methods. This research proposes novel techniques for QA that enhance the user’s original query with latent semantic information from the corpus. This enhanced query is then applied to both the first and second stages of the QA architecture. To build the enhanced query, we propose the Aspect-Based Relevance Language Model, an approach that uses statistical language modeling techniques to measure the likelihood of relevance of a concept (or aspect, as defined by Probabilistic Latent Semantic Analysis) to a question. We then use terms from the aspects with the highest likelihood of relevance to build a model of a semantic Question Context, which includes sense-disambiguated terms that amplify the user’s query. The Question Context is incorporated into the first stage of QA as query expansion to improve recall. We then derive a novel measure called Answer Credibility from the Question Context. Answer Credibility may be thought of as a statistical measure of the reliability of a candidate answer with respect to a question and the source text from which the candidate answer was derived. We incorporate Answer Credibility into the Answer Validation process; the answer with the highest score after the application of Answer Credibility is returned to the user. Our techniques show performance improvements over state-of-the-art approaches and have the advantage that they use statistical techniques to derive semantic information to aid the QA process.
    Ph.D., Information Science and Technology -- Drexel University, 200
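
    A rough sketch of the aspect-based expansion idea, assuming PLSA has already produced a term-by-aspect matrix P(w|z): aspects are ranked by how likely they are to generate the question's terms, and top terms from the best aspects form the Question Context. Variable names and the scoring are illustrative, not the thesis's exact formulation.

        import numpy as np

        def question_context(question_terms, p_w_given_z, vocab, n_aspects=3, n_terms=5):
            """Rank PLSA aspects by the log-likelihood of the question terms,
            then return expansion terms from the most relevant aspects.
            p_w_given_z: (V, K) array with columns P(w|z); vocab: list of V words."""
            word_idx = {w: i for i, w in enumerate(vocab)}
            q_rows = [word_idx[w] for w in question_terms if w in word_idx]
            loglik = np.log(p_w_given_z[q_rows] + 1e-12).sum(axis=0)  # shape (K,)
            best = np.argsort(loglik)[::-1][:n_aspects]
            expansion = set()
            for z in best:
                top = np.argsort(p_w_given_z[:, z])[::-1][:n_terms]
                expansion.update(vocab[i] for i in top)
            return sorted(set(question_terms) | expansion)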