
    An aggregated information retrieval model based on Bayesian networks for semi-structured documents

    This thesis is concerned with aggregated search over XML elements. We propose new approaches to aggregation and pruning that use different sources of evidence (content and structure). Our model is based on Bayesian networks: the dependency relationships between query terms and element terms are quantified by probability measures, and the user's query triggers a propagation process that selects the relevant elements. Rather than returning a list of XML elements, the model returns an aggregate. An aggregate built from a document is a set of elements, or an information unit (a portion of the document), that best answers the user's query. To qualify as an answer, an aggregate must satisfy three properties: relevance, non-redundancy, and complementarity. The returned aggregates are useful because they give the user an overview of how the information need is covered in the document collection. We validated the model within the INEX 2009 evaluation campaign, using more than 2,666,000 XML documents from the online encyclopedia Wikipedia. The experiments show the interest of the approach by highlighting the impact of aggregating such elements.
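The interplay of the three aggregate properties can be illustrated with a small greedy selection sketch. The scoring function and weights below are illustrative assumptions for the example only, not the thesis's Bayesian-network model:

```python
# Hypothetical sketch: greedily build an aggregate that balances relevance,
# non-redundancy and complementarity. Elements are (relevance, set_of_terms).

def build_aggregate(elements, k=3, redundancy_penalty=0.5):
    aggregate = []      # chosen (relevance, terms) pairs
    covered = set()     # terms already covered by the aggregate
    candidates = list(elements)
    for _ in range(min(k, len(candidates))):
        def gain(e):
            relevance, terms = e
            overlap = len(terms & covered) / max(len(terms), 1)  # redundancy
            novelty = len(terms - covered)                       # complementarity
            return relevance + novelty - redundancy_penalty * overlap
        best = max(candidates, key=gain)
        candidates.remove(best)
        aggregate.append(best)
        covered |= best[1]
    return aggregate

elements = [
    (0.9, {"xml", "retrieval"}),
    (0.8, {"xml", "retrieval"}),      # highly relevant but redundant
    (0.5, {"bayesian", "networks"}),  # less relevant but complementary
]
result = build_aggregate(elements, k=2)
```

With `k=2`, the redundant second element is skipped in favour of the complementary third one, even though its standalone relevance is higher.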

    Impact of XML document structure on the matching process in semi-structured information retrieval

    The work presented in this PhD thesis concerns structured information retrieval (SIR) and focuses on XML documents. SIR aims at returning to users precise document parts (instead of whole documents) that are relevant to their information needs.
Those needs are expressed by queries that can contain content conditions as well as structural constraints specifying the location of the needed information. In this work, we are interested in the use of document structure in the retrieval process, and we propose approaches to evaluate document-query structural similarity. Both query structural constraints and document structures can be represented as trees; based on this observation, we propose two models that aim at matching these tree structures. As tree matching is historically linked with graph theory, our first proposition adapts a solution from that field. After an in-depth study of existing graph algorithms, we chose the Tree Edit Distance (TED), which measures tree similarity (the degree of isomorphism) as the minimal sequence of delete and relabel operations needed to turn one tree into another. The main drawback of TED algorithms is their time and space complexity, which impacts the overall matching runtime, so we propose two ways to overcome these issues. First, we propose a TED algorithm with minimal space complexity. Second, as runtime depends on the input tree cardinality (size), we propose several tree summarization techniques. Finally, since TED is usually applied to relatively similar trees and its effectiveness strongly depends on its operation costs, we propose a novel way to compute these costs, based on distances in the graph formed by the documents' grammar: the DTD. Our second proposition is based on language models, which are considered very effective IR models. Traditionally, they are used to assess content similarity through the probability that a document model (built upon document terms) generates the query. We take a different approach based purely on structure, and consider the document and query vocabulary as a set of transitions between document structure labels.
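The tree edit distance can be sketched with the classic recursion over ordered forests. This naive version is exponential in tree size (practical systems use dynamic programming), and it uses unit edit costs, whereas the thesis derives costs from the DTD:

```python
# Naive recursive tree edit distance on ordered labeled trees.
# Trees are (label, [children]) tuples; all edit costs are 1 here.

def size(tree):
    label, children = tree
    return 1 + sum(size(c) for c in children)

def ted(f1, f2):
    """Edit distance between two ordered forests (lists of trees)."""
    if not f1 and not f2:
        return 0
    if not f1:
        return sum(size(t) for t in f2)      # insert everything left in f2
    if not f2:
        return sum(size(t) for t in f1)      # delete everything left in f1
    (l1, c1), (l2, c2) = f1[-1], f2[-1]
    return min(
        ted(f1[:-1] + c1, f2) + 1,           # delete root of rightmost tree
        ted(f1, f2[:-1] + c2) + 1,           # insert root of rightmost tree
        ted(c1, c2) + ted(f1[:-1], f2[:-1]) + (l1 != l2),  # match the roots
    )

a = ("article", [("sec", [("p", [])]), ("sec", [])])
b = ("article", [("sec", [("p", [])])])
distance = ted([a], [b])   # one deletion: the second <sec> element
```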
To build these vocabularies, we extract and weight all the structural relationships through a weighted relaxation process (relaxation being the loosening of constraints). Finally, as the relevance of returned results is first assessed on content, we propose a content evaluation process that uses the document tree structure to propagate relevance: the relevance of a node is evaluated from its leaves as well as from the document context and the content relevance of neighbouring nodes. To validate our models, we conducted experiments on two data sets from the reference evaluation campaign of our domain, the Initiative for XML Retrieval (INEX), in which we took part in 2011. INEX tracks provide document collections, evaluation metrics, queries, and relevance judgments that can be used to assess and compare SIR models. The tracks we used are: * the INEX 2005 SSCAS track, whose documents are scientific papers from IEEE; we consider this collection text-oriented, as its structure is similar to that of a book (paragraphs, sections); * the INEX 2010 Datacentric track, whose documents are extracted from the Internet Movie Database (IMDB) website; this collection is data-oriented, as document terms are very specific and rarely redundant while the structure carries semantic meaning. Our experiments show that the matching strategy strongly depends on the type of document structure. In text-oriented collections, the structure can be matched non-strictly, and several subtrees extracted from a document can be used simultaneously to assess structural similarity. Conversely, structure from data-oriented documents should be matched as strictly as possible: because element labels carry semantics, document structures contain relevant information that the content does not necessarily provide, and the structure to match must be as precise and minimal as possible. Finally, our structural similarity approaches proved effective and improved the relevance of the returned results compared to the state of the art, as long as the nature of the collection is taken into account when selecting the input trees for the structural matching process.
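The idea of propagating text relevance up the document tree can be sketched as follows. The decay and document-context weights are made-up values for illustration, not the thesis's actual propagation formula:

```python
# Illustrative sketch: a node's score is the decayed mean of its children's
# scores, then blended with the whole-document score as global context.

def node_score(tree, leaf_scores, decay=0.7):
    node, children = tree
    if not children:
        return leaf_scores.get(node, 0.0)
    return decay * sum(node_score(c, leaf_scores, decay) for c in children) / len(children)

def scored_elements(tree, leaf_scores, decay=0.7, doc_weight=0.2):
    doc = node_score(tree, leaf_scores, decay)   # root score = document context
    out = {}
    def walk(t):
        node, children = t
        out[node] = (1 - doc_weight) * node_score(t, leaf_scores, decay) + doc_weight * doc
        for c in children:
            walk(c)
    walk(tree)
    return out

doc_tree = ("article", [("sec1", [("p1", []), ("p2", [])]), ("sec2", [("p3", [])])])
scores = scored_elements(doc_tree, {"p1": 1.0, "p2": 0.0, "p3": 0.0})
```

The section containing the relevant paragraph outranks its sibling, while the paragraph itself remains the most focused (highest-scored) answer unit.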

    A Probabilistic Framework for Information Modelling and Retrieval Based on User Annotations on Digital Objects

    Annotations are a means to make critical remarks, to explain and comment on things, to add notes and give opinions, and to relate objects. Nowadays, they can be found in digital libraries and collaboratories, for example as a building block for scientific discussion on the one hand, or as private notes on the other. We further find them in product reviews, scientific databases and many "Web 2.0" applications; even well-established concepts like emails can be regarded as annotations in a certain sense. Digital annotations can be (textual) comments, markings (i.e. highlighted parts) and references to other documents or document parts. Since annotations convey information which is potentially important to satisfy a user's information need, this thesis tries to answer the question of how to exploit annotations for information retrieval, and gives a first answer to the question of whether retrieval effectiveness can be improved with annotations. A survey of the "annotation universe" reveals some facets of annotations; for example, they can be content-level annotations (extending the content of the annotated object) or meta-level ones (saying something about the annotated object). Besides the annotations themselves, other objects created during the annotation process can be interesting for retrieval, namely the annotated fragments. These objects are integrated into an object-oriented model comprising digital objects such as structured documents and annotations, as well as fragments; the model reflects the different relationships among the various objects. From this model, the basic data structure for annotation-based retrieval, the structured annotation hypertext, is derived. In order to thoroughly exploit the information contained in structured annotation hypertexts, a probabilistic, object-oriented logical framework called POLAR is introduced.
In POLAR, structured annotation hypertexts can be modelled by means of probabilistic propositions and four-valued logics. POLAR allows for specifying several relationships among annotations and annotated (sub)parts or fragments, and queries can be posed to extract the knowledge contained in structured annotation hypertexts. POLAR supports annotation-based retrieval, i.e. document and discussion search, by applying an augmentation strategy (knowledge augmentation, where propositions are propagated from subcontexts such as annotations, or relevance augmentation, where retrieval status values are propagated) in conjunction with probabilistic inference, where P(d -> q), the probability that a document d implies a query q, is estimated. POLAR's semantics is based on possible worlds and accessibility relations, and it is implemented on top of four-valued probabilistic Datalog. POLAR's core retrieval functionality, knowledge augmentation with probabilistic inference, is evaluated for discussion and document search. The experiments show that all relevant POLAR objects (merged annotation targets, fragments and content annotations) are able to increase retrieval effectiveness when used as a context for discussion or document search. Additional experiments reveal that the polarity of annotations can be determined with an accuracy of around 80%.
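The general intuition behind knowledge augmentation (propagating evidence from annotations into the document's context before estimating P(d -> q)) can be sketched as below. The smoothing weight and the simple term-probability estimate are assumptions for the example; POLAR itself operates on four-valued probabilistic Datalog, not on this code:

```python
# Hedged sketch: mix query-term probabilities from a document's annotations
# into the document model, then estimate P(d -> q) as a product over terms.
from collections import Counter

def term_prob(text):
    counts = Counter(text.split())
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def p_d_implies_q(doc, annotations, query, augment=0.3):
    p_doc = term_prob(doc)
    p_ann = term_prob(" ".join(annotations)) if annotations else {}
    prob = 1.0
    for t in query.split():
        # augmentation: annotation evidence fills in terms the document lacks
        prob *= (1 - augment) * p_doc.get(t, 0.0) + augment * p_ann.get(t, 0.0)
    return prob

with_ann = p_d_implies_q("xml retrieval model",
                         ["bayesian model discussion"], "bayesian retrieval")
without = p_d_implies_q("xml retrieval model", [], "bayesian retrieval")
```

Without the annotation context the query term "bayesian" has zero probability and the whole estimate collapses to zero; with it, the document becomes retrievable.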