    Combining information seeking services into a meta supply chain of facts

    The World Wide Web has become a vital supplier of information that allows organizations to carry out such tasks as business intelligence, security monitoring, and risk assessments. Having a quick and reliable supply of correct facts is often mission critical. By following design science guidelines, we have explored ways to recombine facts from multiple sources, each with possibly different levels of responsiveness and accuracy, into one robust supply chain. Inspired by prior research on keyword-based meta-search engines (e.g., metacrawler.com), we have adapted existing question answering algorithms for the task of analysis and triangulation of facts. We present a first prototype for a meta approach to fact seeking. Our meta engine sends a user's question to several fact-seeking services that are publicly available on the Web (e.g., ask.com, brainboost.com, answerbus.com, NSIR, etc.) and analyzes the returned results jointly to identify and present to the user those that are most likely to be factually correct. The results of our evaluation on the standard test sets widely used in prior research provide evidence for the following: 1) the value added by the meta approach: its performance surpasses the performance of each supplier; 2) the importance of using fact-seeking services as suppliers to the meta engine rather than keyword-driven search portals; and 3) the resilience of the meta approach: eliminating a single service does not noticeably impact the overall performance. We show that these properties make the meta approach a more reliable supplier of facts than any of the currently available stand-alone services.
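
    The triangulation step can be pictured with a small sketch. This is a minimal illustration, not the authors' implementation: each fact-seeking service is assumed to return a ranked list of answer strings (the service outputs below are made up), and the meta engine keeps the answer with the strongest combined support across suppliers.

    from collections import Counter

    def triangulate(answers_per_service):
        """answers_per_service: dict mapping a service name to its ranked answer strings."""
        votes = Counter()
        for answers in answers_per_service.values():
            for rank, answer in enumerate(answers):
                votes[answer.strip().lower()] += 1.0 / (rank + 1)  # earlier ranks weigh more
        return max(votes, key=votes.get) if votes else None

    # toy usage with invented service outputs
    print(triangulate({
        "ask.com":       ["Ottawa", "Toronto"],
        "answerbus.com": ["Ottawa"],
        "NSIR":          ["Toronto", "Ottawa"],
    }))  # -> "ottawa"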

    Harmony and dissonance: organizing the people's voices on political controversies

    The WikiLeaks documents about the death of Osama bin Laden and the debates about the economic crisis in Greece and other European countries are some of the controversial topics playing out in the news every day. Each of these topics has many different aspects, and there is no absolute, simple truth in answering questions such as: should the EU guarantee the financial stability of each member country, or should the countries themselves be solely responsible? To understand the landscape of opinions, it would be helpful to know which politician or other stakeholder takes which position, support or opposition, on these aspects of controversial topics.

    Extracting Causal Relations between News Topics from Distributed Sources

    The overwhelming amount of online news presents a challenge known as news information overload. To mitigate this challenge we propose a system that generates a causal network of news topics. To extract this information from distributed news sources, a system called Forest was developed. Forest retrieves documents that potentially contain causal information regarding a news topic. The documents are processed at the sentence level to extract causal relations and news topic references. Forest uses a machine learning approach to classify causal sentences and then identifies the potential cause and effect within those sentences. The potential cause and effect are then classified as news topic references, i.e. the phrases used to refer to a news topic, such as “The World Cup” or “The Financial Meltdown”. Both classifiers use an algorithm developed within our working group, which performs better than several well-known classification algorithms for the aforementioned tasks. In our evaluations we found that participants consider causal information useful for understanding the news, and that while we cannot extract causal information for all news topics, it is highly likely that we can extract causal relations for the most popular news topics. To evaluate the accuracy of the extractions made by Forest, we conducted a user survey. We found that by providing the top-ranked results, we obtained high accuracy in extracting causal relations between news topics.
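
    As a rough illustration of the kind of output Forest aims for, the sketch below uses a hand-written connective pattern in place of the machine-learned causal-sentence classifier described above; the connective list, example sentence, and topic names are invented for the example.

    import re

    # Detect a causal connective, split the sentence into a potential cause and effect,
    # and keep the pair only if both sides contain a known news-topic reference.
    CONNECTIVES = r"\b(?:caused|led to|resulted in|triggered)\b"

    def extract_causal_pair(sentence, known_topics):
        match = re.search(CONNECTIVES, sentence, flags=re.IGNORECASE)
        if not match:
            return None
        cause, effect = sentence[:match.start()].strip(), sentence[match.end():].strip()
        cause_topic = next((t for t in known_topics if t.lower() in cause.lower()), None)
        effect_topic = next((t for t in known_topics if t.lower() in effect.lower()), None)
        return (cause_topic, effect_topic) if cause_topic and effect_topic else None

    print(extract_causal_pair(
        "Analysts say the Financial Meltdown led to austerity protests in Greece",
        ["The Financial Meltdown", "austerity protests"]))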

    Finding answers to definition questions on the web

    Fundamentally, question answering systems are designed to respond automatically to queries posed by users in natural language. The first step in the answering process is query analysis, whose goal is to classify the query according to a set of pre-specified types. Traditionally, these classes include: factoid, definition, and list. Systems thereafter choose the answering method according to the class recognised in this early phase. In short, this thesis focuses exclusively on strategies to tackle definition questions (e.g., "Who is Ben Bernanke?"). This sort of question has become especially interesting in recent years, due to the significant number of such questions submitted to search engines. Most advances in definition question answering have been made under the umbrella of the Text REtrieval Conference (TREC). This is, more precisely, a framework for testing systems operating on a collection of news articles. Thus, the objective of chapter one is to describe this framework and to present additional introductory aspects of definition question answering, including: (a) how definition questions are phrased by individuals; (b) the different conceptions of definition, and thus of answers; and (c) the various metrics exploited for assessing systems. Since the inception of TREC, systems have put manifold approaches to discovering answers to the test, throwing some light onto several key aspects of this problem. On this account, chapter four goes over a selection of notable TREC systems. This selection is not aimed at completeness, but rather at highlighting the leading features of these systems. For the most part, systems benefit from knowledge bases (e.g., Wikipedia) for obtaining descriptions of the concept being defined (a.k.a. the definiendum). These descriptions are thereafter projected onto the array of candidate answers as a means of discerning the correct answer. In other words, these knowledge bases play the role of annotated resources, and most systems attempt to find the answer candidates across the collection of news articles that are most similar to these descriptions. The cornerstone of this thesis is the assumption that it is plausible to devise competitive, and hopefully better, systems without the need for annotated resources. Although this descriptive knowledge is helpful, it is the belief of the author that approaches relying on it are built on two wrong premises: 1. It is arguable whether the senses or contexts related to the definiendum across knowledge bases are the same senses or contexts found across the array of answer candidates. This observation also extends to the fact that not all descriptions within the group of putative answers are necessarily covered by knowledge bases, even though they might refer to the same contexts or senses. 2. Finding an efficient projection strategy does not necessarily entail a good procedure for discerning descriptive knowledge, because it shifts the goal of the task to finding "more like this set" instead of analysing whether or not each candidate bears the characteristics of a description. In other words, the coverage given by knowledge bases for a specific definiendum is not wide enough to learn all the characteristics that typify its descriptions, so systems cannot identify all answers within the set of candidates. From another angle, a conventional projection methodology can be seen as a finder of lexical analogies.
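
    For concreteness, the projection strategy described (and criticised) above can be sketched as a simple lexical-overlap ranker. This is an assumed, schematic rendering rather than any particular TREC system; the stop-word list and example texts are placeholders.

    # Rank candidate answers by lexical overlap with knowledge-base descriptions
    # of the definiendum (the "projection" onto the candidate set).
    def project(kb_descriptions, candidates,
                stopwords=frozenset({"the", "a", "of", "is", "was"})):
        kb_terms = {w for text in kb_descriptions for w in text.lower().split()} - stopwords
        def overlap(candidate):
            cand_terms = set(candidate.lower().split()) - stopwords
            return len(cand_terms & kb_terms) / max(len(cand_terms), 1)
        return sorted(candidates, key=overlap, reverse=True)

    ranked = project(
        ["Ben Bernanke was chairman of the Federal Reserve"],
        ["Bernanke chaired the Federal Reserve until 2014", "Bernanke likes baseball"])
    print(ranked[0])
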
    All in all, this thesis investigates models that disregard this kind of annotated resource and projection strategy. In effect, it is the belief of the author that a robust technique of this sort can be integrated with traditional projection methodologies, thereby bringing about an enhancement in performance. The major contributions of this thesis are presented in chapters five, six and seven. There are several ways of understanding this structure. For example, chapter five presents a general framework for answering definition questions in several languages. The primary goal of this study is to design a lightweight definition question answering system operating on web snippets and two languages: English and Spanish. The idea is to utilise web snippets as a source of descriptive information in several languages, and the high degree of language independence is achieved by relying on as little linguistic knowledge as possible. To put it more precisely, this system makes use of statistical methods and a list of stop-words, as well as a set of language-dependent definition patterns. In detail, chapter five branches into two more specific studies. The first study is essentially aimed at capitalising on redundancy for detecting answers (e.g., word frequency counts across answer candidates). Although this type of feature has been widely used by TREC systems, this study focuses on its impact across different languages, and its benefits when applied to web snippets instead of a collection of news documents. An additional motivation behind targeting web snippets is the hope of studying systems working on more heterogeneous corpora, without the need to download full documents. For instance, on the Internet, the number of distinct senses for the definiendum increases considerably, making it necessary to consider a sense discrimination technique. For this purpose, the system presented in this chapter takes advantage of an unsupervised approach premised on Latent Semantic Analysis. Although the outcome of this study shows that sense discrimination is hard to achieve when operating solely on web snippets, it also reveals that they are a fruitful source of descriptive knowledge, and that their extraction poses exciting challenges. The second branch extends the first study by exploiting multilingual knowledge bases (i.e. Wikipedia) for ranking putative answers. Generally speaking, it makes use of word association norms deduced from sentences that match definition patterns across Wikipedia. In order to adhere to the premise of not profiting from articles related to a specific definiendum, these sentences are anonymised by replacing the concept with a placeholder, and the word norms are learnt from all training sentences, instead of only from the Wikipedia page about the particular definiendum. The results of this study indicate that this use of these resources can also be beneficial; in particular, they reveal that word association norms are a cost-efficient solution. However, the size of the corpus markedly decreases for languages other than English, indicating its insufficiency for designing models in other languages. Later, chapter six gets more specific and deals only with the ranking of answer candidates in English. The reason for abandoning Spanish is the sparseness observed in both the redundancy available on the Internet and the training material mined from Wikipedia.
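
    The word-association idea of the second branch can be sketched as follows: definition-pattern sentences are anonymised by replacing the definiendum with a placeholder, word frequencies are collected over all such sentences, and candidate answers are scored by how definition-like their vocabulary is. The two training sentences and the add-one-smoothed unigram scoring are illustrative assumptions, not the thesis's actual model.

    from collections import Counter
    import math
    import re

    def anonymise(sentence, definiendum):
        # replace the concept with a placeholder so no definiendum-specific article is used
        return re.sub(re.escape(definiendum), "CONCEPT", sentence, flags=re.IGNORECASE)

    train = [anonymise("Mozart was an Austrian composer of the Classical era", "Mozart"),
             anonymise("Python is a programming language created by Guido van Rossum", "Python")]
    norms = Counter(w for s in train for w in s.lower().split())
    total = sum(norms.values())

    def definition_likeness(candidate):
        words = candidate.lower().split()
        return sum(math.log((norms[w] + 1) / (total + len(norms))) for w in words) / len(words)

    print(definition_likeness("CONCEPT is a widely used language"),
          definition_likeness("CONCEPT shares rose yesterday"))
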
    This sparseness is considerably greater than in the case of English, and it makes learning powerful statistical models more difficult. This chapter presents a novel way of modelling definitions, grounded in n-gram language models inferred from the lexicalised dependency tree representation of the training material acquired in the study of chapter five. These models are contextual in the sense that they are built in relation to the semantics of the sentence. Generally, these semantics can be perceived as the distinct types of definienda (e.g., footballer, language, artist, disease, and tree). This study additionally investigates the effect of some features on these context models (namely, named entities and part-of-speech tags). Overall, the results obtained by this approach are encouraging, in particular in terms of increasing the accuracy of the pattern matching. However, it was experimentally observed that a training corpus comprising only positive examples (descriptions) is, in all likelihood, not enough to achieve perfect accuracy, because these models cannot deduce the characteristics that typify non-descriptive content. More importantly, as future work, context models offer the chance to study how different contexts can be amalgamated (smoothed) according to their semantic similarities in order to improve performance. Subsequently, chapter seven gets even more specific and searches for the set of properties that can aid in discriminating descriptions from other kinds of texts. Note that this study considers all kinds of descriptions, including those that do not match definition patterns. To this end, Maximum Entropy models are constructed on top of an automatically acquired large-scale training corpus, which encompasses descriptions from Wikipedia and non-descriptions from the Internet. Roughly speaking, different models are constructed as a means of studying the impact of assorted properties: surface features, named entities, part-of-speech tags, chunks, and, more interestingly, attributes derived from the lexicalised dependency graphs. In general, the results corroborate the efficiency of features taken from dependency graphs, especially the root node and n-gram paths. Experiments conducted on test sets with various characteristics suggest that it is also plausible to find attributes that port to other corpora. Chapters two and three are supplementary. The former examines different strategies to trawl the Web for descriptive knowledge. In essence, that chapter touches on several strategies geared towards boosting the recall of descriptive sentences across web snippets, especially sentences that match widespread definition patterns. This is a side study, but one instrumental to the core of this thesis, as systems targeting the Internet need effective crawling techniques. Chapter three, in turn, has two goals: (a) presenting some components used by the strategies outlined in the last three chapters, thereby helping to focus on key aspects of the ranking methodologies and to clearly present the relevant aspects of the approaches laid out in those chapters; and (b) fleshing out some characteristics that make separating genuine from misleading answer candidates difficult, particularly across sentences matching definition patterns. Chapter three is helpful for understanding part of the linguistic phenomena that the subsequent chapters deal with.
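
    Returning to the dependency-based features of chapters six and seven: the sketch below assumes a hand-built toy dependency tree (given as a head-to-children map) and simply enumerates the lexicalised n-gram paths along root-to-leaf walks. The real systems would obtain the tree from a parser and feed such paths into the contextual language models or the Maximum Entropy classifier.

    def root_to_leaf_paths(children, node):
        # all token sequences from the root down to each leaf of the dependency tree
        if node not in children:
            return [[node]]
        return [[node] + tail for child in children[node]
                for tail in root_to_leaf_paths(children, child)]

    def path_ngrams(children, root, n=2):
        ngrams = set()
        for path in root_to_leaf_paths(children, root):
            ngrams.update(tuple(path[i:i + n]) for i in range(len(path) - n + 1))
        return ngrams

    # "Bernanke was chairman of the Fed", with "chairman" as the (toy) root
    children = {"chairman": ["Bernanke", "was", "of"], "of": ["Fed"], "Fed": ["the"]}
    print(path_ngrams(children, "chairman"))  # e.g. ('chairman', 'of'), ('of', 'Fed'), ...
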
    On a final note about the organisation of this thesis: since there is a myriad of techniques, chapters six and seven each begin by dissecting the related work closest to their respective strategies. The main contributions of these chapters begin in sections 6.5 and 7.6, respectively. These two sections open with a discussion and comparison between the proposed methods and the related work presented in their preceding sections. This organisation is intended to facilitate the contextualisation of the proposed approaches, as there are many question answering systems with manifold characteristics.

    Mining Meaning from Wikipedia

    Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval; using it for information extraction; and using it as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced. Comment: an extensive survey of re-using information in Wikipedia in natural language processing, information retrieval and extraction, and ontology building. Accepted for publication in the International Journal of Human-Computer Studies.

    Exploratory Search on Mobile Devices

    The goal of this thesis is to provide a general framework (MobEx) for exploratory search, especially on mobile devices. The central part is the design, implementation, and evaluation of several core modules for on-demand unsupervised information extraction well suited to exploratory search on mobile devices, which together constitute the MobEx framework. These core processing elements, combined with a multitouch-enabled user interface specially designed for two families of mobile devices, i.e. smartphones and tablets, have been implemented in a research prototype. The initial information request, in the form of a query topic description, is issued online by a user to the system. The system then retrieves web snippets using standard search engines. These snippets are passed through a chain of NLP components which perform an on-demand or ad-hoc interactive query disambiguation, named entity recognition, and relation extraction task. By on-demand or ad-hoc we mean that the components are capable of performing their operations on an unrestricted open domain within specific time constraints. The result of the whole process is a topic graph containing the detected associated topics as nodes and the extracted relationships as labelled edges between the nodes. The topic graph is presented to the user in different ways depending on the size of the device she is using. Various evaluations have been conducted that help us to understand the potentials and limitations of the framework and the prototype.
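
    A compressed sketch of such a pipeline is given below. It is an assumption-laden stand-in rather than the MobEx implementation: the snippet list is hard-coded where a real search-engine call would go, capitalised word sequences stand in for named entity recognition, and co-occurrence counts stand in for relation extraction; the output is a weighted edge list for the topic graph.

    import itertools
    import re
    from collections import Counter

    def extract_topics(snippet):
        # crude stand-in for named entity recognition: capitalised word sequences
        return re.findall(r"[A-Z][a-z]+(?:\s[A-Z][a-z]+)*", snippet)

    def build_topic_graph(snippets):
        edges = Counter()
        for snippet in snippets:
            for a, b in itertools.combinations(sorted(set(extract_topics(snippet))), 2):
                edges[(a, b)] += 1  # edge weight = co-occurrence count across snippets
        return edges

    snippets = ["Berlin is the capital of Germany", "Germany borders Poland and France"]
    print(build_topic_graph(snippets))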

    Finding answers to questions, in text collections or web, in open domain or specialty domains

    This chapter is dedicated to factual question answering, i.e. extracting precise and exact answers from texts for questions given in natural language. A question in natural language gives more information than a bag-of-words query (i.e. a query made of a list of words), and provides clues for finding precise answers. We first focus on the underlying problems, mainly due to the linguistic variations that exist between questions and the pieces of text that can answer them, both when selecting relevant passages and when extracting reliable answers. We then present how to answer factual questions in open domains. We also present answering questions in specialty domains, as it requires dealing with semi-structured knowledge and specialized terminologies, and can lead to different applications, such as information management in corporations. Searching for answers on the Web constitutes another application frame and introduces specificities linked to Web redundancy and collaborative usage. Besides, the Web is also multilingual, and a challenging problem consists in searching for answers in documents whose language differs from the source language of the question. For all these topics, we present the main approaches and the remaining problems.
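
    One of the underlying problems mentioned above, the linguistic variation between a question and the passages that answer it, can be made concrete with a small sketch. The synonym table, question, and passages below are toy placeholders; the point is only that expanding the question terms lets differently worded passages be matched and ranked.

    # Expand question words with a tiny synonym table, then rank passages by overlap
    # with the expanded term set (a crude form of passage selection).
    SYNONYMS = {"purchase": {"buy", "acquire"}, "die": {"death", "died"}}  # toy table

    def expand(words):
        expanded = set(words)
        for w in words:
            expanded |= SYNONYMS.get(w, set())
        return expanded

    def rank_passages(question, passages):
        q_terms = expand(set(question.lower().split()))
        return sorted(passages,
                      key=lambda p: len(q_terms & set(p.lower().split())),
                      reverse=True)

    print(rank_passages("when did Google purchase YouTube",
                        ["Google agreed to buy YouTube in 2006", "YouTube hosts videos"])[0])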

    A Data Mining Toolbox for Collaborative Writing Processes

    Collaborative writing (CW) is an essential skill in academia and industry. Providing support during the process of CW can be useful not only for achieving better quality documents, but also for improving the CW skills of the writers. In order to properly support collaborative writing, it is essential to understand how ideas and concepts are developed during the writing process, which consists of a series of steps of writing activities. These steps can be considered as sequence patterns comprising both time events and the semantics of the changes made during those steps. Two techniques can be combined to examine those patterns: process mining, which focuses on extracting process-related knowledge from event logs recorded by an information system; and semantic analysis, which focuses on extracting knowledge about what the student wrote or edited. This thesis contributes (i) techniques to automatically extract process models of collaborative writing processes and (ii) visualisations to describe aspects of collaborative writing. These two techniques form a data mining toolbox for collaborative writing that uses process mining, probabilistic graphical models, and text mining. First, I created a framework, WriteProc, for investigating collaborative writing processes, integrated with the existing cloud-based writing tools in Google Docs. Second, I created a new heuristic to extract the semantic nature of text edits that occur in document revisions and to automatically identify the corresponding writing activities. Third, based on sequences of writing activities, I propose methods to discover writing process models and transitional state diagrams using a process mining algorithm, Heuristics Miner, and Hidden Markov Models, respectively. Finally, I designed three types of visualisations and made contributions to their underlying techniques for analysing writing processes. All components of the toolbox are validated against annotated writing activities of real documents and a synthetic dataset. I also illustrate how the automatically discovered process models and visualisations are used in the process analysis with real documents written by groups of graduate students. I discuss how the analyses can be used to gain further insight into how students work and create their collaborative documents.
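
    The transition analysis behind the state diagrams can be sketched in a few lines. The activity labels and log below are invented, and the code estimates only a first-order transition matrix; the actual toolbox relies on Heuristics Miner and Hidden Markov Models as stated above.

    from collections import Counter, defaultdict

    def transition_probabilities(activity_log):
        # count how often one writing activity directly follows another, then normalise
        counts = defaultdict(Counter)
        for prev, curr in zip(activity_log, activity_log[1:]):
            counts[prev][curr] += 1
        return {prev: {curr: c / sum(following.values()) for curr, c in following.items()}
                for prev, following in counts.items()}

    log = ["outline", "draft", "revise", "draft", "revise", "proofread"]
    print(transition_probabilities(log))  # e.g. {'revise': {'draft': 0.5, 'proofread': 0.5}, ...}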

    Aggregated search: a new information retrieval paradigm

    Traditional search engines return ranked lists of search results. It is up to the user to scroll through this list, scan different documents, and assemble the information that fulfills his or her information need. Aggregated search represents a new class of approaches where information is not only retrieved but also assembled. This is the current evolution in Web search, where diverse content (images, videos, ...) and relational content (similar entities, features) are included in search results. In this survey, we propose a simple analysis framework for aggregated search and give an overview of existing work. We start with related work in neighbouring domains such as federated search, natural language generation, and question answering. Then we focus on more recent trends, namely cross-vertical aggregated search and relational aggregated search, which are already present in current Web search.
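
    The assembly step can be pictured with a minimal sketch: results from several verticals, each with its own relevance scores, are pooled and interleaved into a fixed number of result slots. The vertical names, scores, and items below are made up for illustration.

    def aggregate(vertical_results, slots=5):
        # vertical_results: {"web": [(score, item), ...], "images": [...], ...}
        pooled = [(score, vertical, item)
                  for vertical, items in vertical_results.items()
                  for score, item in items]
        # fill the result slots with the highest-scoring items across all verticals
        return [(vertical, item) for score, vertical, item in sorted(pooled, reverse=True)[:slots]]

    print(aggregate({
        "web":    [(0.9, "wikipedia.org/wiki/Paris"), (0.6, "paris.fr")],
        "images": [(0.8, "paris_skyline.jpg")],
        "video":  [(0.7, "paris_tour.mp4")],
    }))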