
    DescribeX: A Framework for Exploring and Querying XML Web Collections

    This thesis introduces DescribeX, a powerful framework that is capable of describing arbitrarily complex XML summaries of web collections, providing support for more efficient evaluation of XPath workloads. DescribeX permits the declarative description of document structure using all axes and language constructs in XPath, and generalizes many of the XML indexing and summarization approaches in the literature. DescribeX supports the construction of heterogeneous summaries where different document elements sharing a common structure can be declaratively defined and refined by means of path regular expressions on axes, or axis path regular expressions (AxPREs). DescribeX can significantly help in the understanding of both the structure of complex, heterogeneous XML collections and the behaviour of XPath queries evaluated on them. Experimental results demonstrate the scalability of DescribeX summary refinements and stabilizations (the key enablers for tailoring summaries) with multi-gigabyte web collections. A comparative study suggests that using a DescribeX summary created from a given workload can produce query evaluation times orders of magnitude better than using existing summaries. DescribeX's light-weight approach of combining summaries with a file-at-a-time XPath processor can be a very competitive alternative, in terms of performance, to conventional fully-fledged XML query engines that provide DB-like functionality such as security, transaction processing, and native storage. (PhD thesis, University of Toronto, 2008, 163 pages.)
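
    As a rough illustration of the kind of structural summary that DescribeX generalizes (not its AxPRE machinery itself), the sketch below partitions the elements of an XML document by their incoming label path, one of the classic summaries the framework subsumes. The function names and the toy document are illustrative assumptions, not taken from the thesis.

    # Minimal sketch: group the elements of an XML document by their incoming
    # label path, a simple structural summary in the spirit of those that
    # DescribeX generalizes. Names and data are illustrative only.
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    def path_summary(xml_text):
        """Map each root-to-element label path to the elements sharing it."""
        root = ET.fromstring(xml_text)
        blocks = defaultdict(list)

        def walk(elem, path):
            path = path + "/" + elem.tag
            blocks[path].append(elem)
            for child in elem:
                walk(child, path)

        walk(root, "")
        return blocks

    if __name__ == "__main__":
        doc = ("<bib><book><title>XML</title><author>A</author></book>"
               "<book><title>Views</title></book></bib>")
        for path, elems in path_summary(doc).items():
            print(path, len(elems))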

    Efficient materialized-view-based techniques for Web data management (algorithms and systems)

    XML was recommended by the W3C in 1998 as a markup language for device- and system-independent representation of information, and it is nowadays used as a data model for storing and querying large volumes of data in database systems. In spite of significant research and systems development, processing very large amounts of XML data still raises performance problems, due to the complexity and heterogeneity of the data as well as the complexity of current XML query languages. Materialized views have long been used in databases to speed up query processing: they can be seen as precomputed query results that are re-used to avoid (completely or partially) recomputing a new query, and they have been the subject of intensive research, in particular in the context of relational data warehousing. This thesis investigates the applicability of materialized view techniques to optimize the performance of Web data management tools, in particular for XML data and queries in distributed settings. We make three contributions.

    First, we consider the problem of choosing the best views to materialize within a given space budget in order to improve the performance of a query workload. Our work is the first to address the view selection problem for a rich subset of XQuery, enriched with the possibility of selecting multiple nodes at multiple levels of granularity. The challenges stem from the expressive power and features of both the query and view languages, and from the size of the search space of candidate views to materialize. While the general problem has prohibitive complexity, we propose and study a heuristic algorithm and demonstrate its superior performance compared to the state of the art.

    Second, we consider the management of large XML corpora in peer-to-peer networks based on distributed hash tables (DHTs). We consider the ViP2P platform, in which distributed materialized XML views, defined by arbitrary XML queries, are filled in with data published anywhere in the network and then exploited to efficiently answer queries issued by any network peer. This thesis contributed important scalability-oriented optimizations to ViP2P and characterized the performance of the system through a comprehensive set of experiments deployed in a country-wide WAN. These experiments exceed similar systems by orders of magnitude in terms of data volumes and data dissemination throughput; to date, this is the most complete study of a fully deployed DHT-based XML content management platform tested at real scale.

    Finally, we present a novel approach for scalable content-based publish/subscribe (pub/sub) in the presence of constraints on the available CPU and network resources of data publishers; this approach is implemented in our Delta platform. We achieve scalability by off-loading subscriptions from the publisher and by leveraging view-based query rewriting to feed these subscriptions from the data accumulated in others. Our main contribution is a novel algorithm that organizes the subscriptions (views) into a multi-level dissemination network, computed using linear programming techniques so as to scale to large numbers of subscriptions, respect the capacity constraints of the system, and minimize dissemination latency. The efficiency and effectiveness of our algorithm are confirmed through extensive experiments, including a real deployment in a WAN.
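
    To make the first contribution above concrete, the sketch below illustrates the view selection setting with a generic greedy benefit-per-size heuristic under a space budget. It is only a minimal stand-in for the problem statement; the thesis' actual heuristic, view language and cost model are considerably richer, and all names and numbers here are assumptions.

    # Generic greedy sketch of view selection under a space budget: repeatedly
    # pick the candidate view with the best estimated benefit-to-size ratio.
    # This illustrates the problem setting, not the thesis' algorithm.
    def select_views(candidates, budget):
        """candidates: list of (name, size, benefit) tuples; budget: space limit."""
        chosen, used = [], 0
        for name, size, benefit in sorted(
                candidates, key=lambda c: c[2] / c[1], reverse=True):
            if used + size <= budget:
                chosen.append(name)
                used += size
        return chosen

    if __name__ == "__main__":
        views = [("v1", 50, 400.0), ("v2", 120, 500.0), ("v3", 30, 90.0)]
        print(select_views(views, budget=100))   # -> ['v1', 'v3']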

    Query-driven indexing in large-scale distributed systems

    Efficient and effective search in large-scale data repositories requires complex indexing solutions deployed on a large number of servers. Web search engines such as Google and Yahoo! already rely upon complex systems to be able to return relevant query results and keep processing times within the comfortable sub-second limit. Nevertheless, the exponential growth of the amount of content on the Web poses serious challenges with respect to scalability. Coping with these challenges requires novel indexing solutions that not only remain scalable but also preserve the search accuracy. In this thesis we introduce and explore the concept of query-driven indexing – an index construction strategy that uses caching techniques to adapt to the querying patterns expressed by users. We suggest abandoning the strict difference between indexing and caching, and building a distributed indexing structure, or a distributed cache, such that it is optimized for the current query load. Our experimental and theoretical analysis shows that employing query-driven indexing is especially beneficial when the content is (geographically) distributed in a Peer-to-Peer network. In such a setting extensive bandwidth consumption has been identified as one of the major obstacles for efficient large-scale search. Our indexing mechanisms combat this problem by maintaining query popularity statistics and by indexing (caching) intermediate query results that are requested frequently. We present several indexing strategies for processing multi-keyword and XPath queries over distributed collections of textual and XML documents respectively. Experimental evaluations show significant overall traffic reduction compared to state-of-the-art approaches. We also study possible query-driven optimizations for Web search engine architectures. Contrary to the Peer-to-Peer setting, Web search engines use centralized caching of query results to reduce the processing load on the main index. We analyze real search engine query logs and show that the changes in query traffic that such a results cache induces fundamentally affect indexing performance. In particular, we study its impact on index pruning efficiency. We show that the combination of both techniques enables an efficient reduction of query processing costs and is thus practical to use in Web search engines.
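
    The sketch below illustrates the core query-driven idea in a few lines: keep query popularity statistics and cache (index) only the results that are requested frequently. The class, threshold and callback interface are illustrative assumptions rather than the thesis' actual design.

    # Minimal sketch of query-driven indexing: track how often queries are asked
    # and cache (index) the results only once a query becomes popular.
    from collections import Counter

    class QueryDrivenCache:
        def __init__(self, popularity_threshold=3):
            self.stats = Counter()   # query popularity statistics
            self.cache = {}          # cached (indexed) query results
            self.threshold = popularity_threshold

        def answer(self, query, evaluate):
            """Return the result of `query`, caching it once it becomes popular."""
            self.stats[query] += 1
            if query in self.cache:
                return self.cache[query]
            result = evaluate(query)        # expensive (distributed) evaluation
            if self.stats[query] >= self.threshold:
                self.cache[query] = result  # index only frequently requested results
            return result

    if __name__ == "__main__":
        cache = QueryDrivenCache(popularity_threshold=2)
        slow_eval = lambda q: f"result({q})"   # stands in for a costly lookup
        for _ in range(3):
            print(cache.answer("xml AND indexing", slow_eval))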

    Repetitive querying of large random heterogeneous datasets in RDBMS using materialized views

    A methodology has been developed to increase the time efficiency of repetitively querying large heterogeneous datasets by applying materialized views to repetitive complex queries. Additionally, a simple user interface is provided to demonstrate the utility of this research methodology. The programs demonstrate that the core design can be used to deploy a complete system applicable to different domains. The methodology developed in this research is presented as an experimental proof-of-concept prototype based on an abstract design.
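
    As a minimal, hedged illustration of the underlying technique rather than the prototype described above, the snippet below precomputes a repetitive aggregation query into a PostgreSQL materialized view and then reads from the view. The table, view and connection details are hypothetical.

    # Precompute a repetitive complex query as a materialized view and query the
    # view instead of the base data. Table/view names and the DSN are hypothetical.
    import psycopg2

    DDL = """
    CREATE MATERIALIZED VIEW IF NOT EXISTS region_totals AS
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region;
    """

    def refresh_and_query(dsn):
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            cur.execute(DDL)
            cur.execute("REFRESH MATERIALIZED VIEW region_totals;")
            cur.execute("SELECT region, total FROM region_totals ORDER BY total DESC;")
            return cur.fetchall()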

    Querying heterogeneous data in NoSQL document stores

    This thesis addresses the problem of querying heterogeneous data in document-oriented systems. Document-oriented "not-only SQL" (NoSQL) storage systems have undergone significant development in recent years due to their ability to manage large amounts of documents in a flexible and efficient manner. These systems rely on the "schema-less" concept, where there is no requirement to consider a single schema for a set of data, called a collection of documents. This flexibility in data structures makes query formulation more complex: users need to know all the different schemas of the data manipulated when writing queries. The work developed in this thesis is carried out within the framework of the neOCampus project. It focuses on the querying of structurally heterogeneous document collections, in particular on the problem of variable schemas.

    We propose the construction of a data dictionary that makes it possible to find all the schemas of the documents. Each key, a dictionary entry, corresponds to an absolute or partial path existing in at least one document of the collection; this key is associated with all the corresponding absolute paths throughout the collection of heterogeneous documents. The dictionary is then exploited to automatically and transparently reformulate user queries. User queries are formulated using the dictionary keys (partial or absolute paths) and are automatically rewritten using the dictionary so as to take into account all the absolute paths existing in the documents of the collection.

    In this thesis, we conduct a state-of-the-art survey of the work related to querying structurally heterogeneous documents, and we propose a classification. We then compare these works according to criteria that make it possible to position and differentiate our contribution. We formally define the classical concepts related to document-oriented systems (document, collection, etc.), and extend this formalisation with additional concepts: absolute and partial paths, document schemas, and the dictionary. For manipulating and querying heterogeneous documents, we define a closed minimal algebraic kernel composed of five operators: selection, projection, unnest, aggregation and join (left join). We define each operator and explain its classical evaluation by a native document query engine, and then establish the reformulation rules for each of these operators based on the dictionary. We define the process of reformulating user queries, which produces a query that can be evaluated by most document query engines while preserving the logic of the classical operators (nonexistent paths, null values). We show how the reformulation of a query initially written with partial and/or absolute paths solves the problem of structural heterogeneity of documents.

    Finally, we conduct experiments to validate the formal concepts introduced throughout this thesis. We evaluate the construction and maintenance of the dictionary while varying the configuration in terms of number of structures per collection and collection size. We then evaluate the query reformulation engine by comparing it to query evaluation in a context without structural heterogeneity and in a multi-query context. All our experiments were conducted on synthetic collections with several levels of nesting, different numbers of structures per collection, and varying collection sizes. Recently, we integrated our contributions into the neOCampus project to manage heterogeneity when querying data from sensors installed in classrooms and the library on the campus of the University of Toulouse III-Paul Sabatier.
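
    The sketch below illustrates the dictionary idea on a toy collection: every path occurring in a document becomes a key mapped to the absolute paths it matches, and a query written with a partial path is rewritten over all of them. Here partial paths are approximated by path suffixes, and all names are illustrative assumptions, not the thesis' constructs.

    # Toy path dictionary: map every (partial) path to the absolute paths under
    # which it occurs, then project a partial-path query over all of them.
    from collections import defaultdict

    def absolute_paths(doc, prefix=""):
        for key, value in doc.items():
            path = f"{prefix}.{key}" if prefix else key
            if isinstance(value, dict):
                yield from absolute_paths(value, path)
            else:
                yield path

    def build_dictionary(collection):
        dico = defaultdict(set)
        for doc in collection:
            for abs_path in absolute_paths(doc):
                parts = abs_path.split(".")
                for i in range(len(parts)):      # every suffix acts as a partial path
                    dico[".".join(parts[i:])].add(abs_path)
        return dico

    def project(doc, abs_path):
        """Value at abs_path in doc, or None when the path does not exist."""
        node = doc
        for part in abs_path.split("."):
            if not isinstance(node, dict) or part not in node:
                return None
            node = node[part]
        return node

    if __name__ == "__main__":
        docs = [{"sensor": {"temp": 21}}, {"room": {"sensor": {"temp": 19}}}]
        dico = build_dictionary(docs)
        # the user asks for the partial path "sensor.temp"
        for d in docs:
            print([project(d, p) for p in sorted(dico["sensor.temp"])])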

    Query optimization using bag and bag-set semantics in an environment of heterogeneous information sources

    In this thesis, we investigate techniques for query optimization using a set of views, considering both relational and XML databases. In particular, we focus on three fundamental problems of query optimization: query containment, query rewriting and view selection. For relational databases we focus on the class of select-project-join SQL queries with equality comparisons, also known as conjunctive queries (CQs for short). We consider two kinds of semantics to theoretically approximate the semantics of SQL: bag semantics (multiple occurrences of the same tuple are allowed in both base relations and query answers) and bag-set semantics (base relations are sets, while query answers are bags). For XML databases, we focus on XPath, and in particular on the major fragments of the language formed by two of its three basic constructs: wildcard labels (*), descendant edges (//) and branches ([ ]).

    Query containment under both bag and bag-set semantics is investigated through a detailed analysis of special cases of CQs, and the complexity of each case is given; for the general class of CQs the problem has remained open for more than a decade. Moreover, we give necessary and sufficient conditions for deciding both containment and equivalence for unions of XPath queries, a problem that had not previously been investigated in depth. The problem of finding an equivalent rewriting is also investigated for both relational and XPath queries. For relational queries, we describe the conditions that a set of views has to satisfy in order to yield an equivalent rewriting of a CQ under both bag and bag-set semantics. For XML databases, we investigate the problem of rewriting an XPath query using multiple views, and prove that when the query contains both descendant edges and wildcards, the union operator may be required to find an equivalent rewriting.

    The view selection problem is investigated for workloads of CQs under both bag and bag-set semantics, where we aim to limit the search space of candidate viewsets. For the general case, we give a tight condition that candidate views can be required to satisfy while the search space still contains at least one optimal solution. We then study special cases: for workloads of chain queries, under both semantics, views that are themselves defined by chain queries do not always suffice to find an optimal solution, whereas if the queries are further restricted to path queries, then under bag semantics path views guarantee that at least one optimal solution is found.
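
    A standard example of why bag semantics matters for these problems: the conjunctive queries q1(x) :- R(x,y) and q2(x) :- R(x,y), R(x,z) are equivalent under set semantics but not under bag semantics, because the self-join in q2 multiplies answer multiplicities. The small sketch below, over an illustrative relation, makes this concrete.

    # q1(x) :- R(x,y)  versus  q2(x) :- R(x,y), R(x,z): same answers as sets,
    # different answers as bags. Plain-Python evaluation for illustration only.
    R = [("a", 1), ("a", 2), ("b", 3)]

    q1_bag = [x for (x, y) in R]
    q2_bag = [x1 for (x1, y) in R for (x2, z) in R if x1 == x2]

    print(sorted(set(q1_bag)) == sorted(set(q2_bag)))   # True: equal under set semantics
    print(sorted(q1_bag) == sorted(q2_bag))             # False: multiplicities differ
    print(sorted(q1_bag), sorted(q2_bag))               # ['a', 'a', 'b'] vs ['a', 'a', 'a', 'a', 'b']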

    Distributed XML Query Processing

    While centralized query processing over collections of XML data stored at a single site is a well-understood problem, centralized query evaluation techniques are inherently limited in their scalability when presented with large collections (or a single, large document) and heavy query workloads. In the context of relational query processing, similar scalability challenges have been overcome by partitioning data collections, distributing them across the sites of a distributed system, and then evaluating queries in a distributed fashion, usually in a way that ensures locality between (sub-)queries and their relevant data. This thesis presents a suite of query evaluation techniques for XML data that follow a similar approach to address the scalability problems encountered by XML query evaluation. Due to the significant differences in data and query models between relational and XML query processing, it is not possible to directly apply distributed query evaluation techniques designed for relational data to the XML scenario; instead, new distributed query evaluation techniques need to be developed. Thus, in this thesis, an end-to-end solution to the scalability problems encountered by XML query processing is proposed. Based on a data partitioning model that supports both horizontal and vertical fragmentation steps (or any combination of the two), XML collections are fragmented and distributed across the sites of a distributed system. Then, a suite of distributed query evaluation strategies is proposed. These query evaluation techniques ensure locality between each fragment of the collection and the parts of the query corresponding to the data in this fragment. Special attention is paid to scalability and query performance, which is achieved by ensuring a high degree of parallelism during distributed query evaluation and by avoiding access to irrelevant portions of the data. For maximum flexibility, the suite of distributed query evaluation techniques proposed in this thesis provides several alternative approaches for evaluating a given query over a given distributed collection. Thus, to achieve the best performance, it is necessary to predict and compare the expected performance of each of these alternatives. In this work, this is accomplished through a query optimization technique based on a distribution-aware cost model. The same cost model is also used to fine-tune the way a collection is fragmented to the demands of the query workload evaluated over this collection. To evaluate the performance impact of the distributed query evaluation techniques proposed in this thesis, the techniques were implemented within a production-quality XML database system. Based on this implementation, a thorough experimental evaluation was performed. The results of this evaluation confirm that the distributed query evaluation techniques introduced here lead to significant improvements in query performance and scalability, both when compared to centralized techniques and when compared to existing distributed query evaluation techniques.
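
    The toy sketch below conveys only the general idea of fragmenting a collection across sites and evaluating a path query on each fragment in parallel before unioning the partial results; the thesis' fragmentation model, query decomposition and cost-based optimizer are far more elaborate. The round-robin placement and all names are assumptions.

    # Horizontally fragment a collection of XML documents across "sites", run the
    # same path query on every fragment in parallel, and union the partial results.
    import xml.etree.ElementTree as ET
    from concurrent.futures import ThreadPoolExecutor

    def fragment(collection, num_sites):
        """Horizontal fragmentation: assign each document to one site."""
        sites = [[] for _ in range(num_sites)]
        for i, doc in enumerate(collection):
            sites[i % num_sites].append(doc)
        return sites

    def evaluate_on_site(docs, path):
        """Local evaluation: run the path query on every document of one site."""
        return [e.text for doc in docs for e in ET.fromstring(doc).findall(path)]

    def distributed_query(collection, path, num_sites=3):
        sites = fragment(collection, num_sites)
        with ThreadPoolExecutor(max_workers=num_sites) as pool:
            partials = pool.map(lambda docs: evaluate_on_site(docs, path), sites)
        return [r for part in partials for r in part]

    if __name__ == "__main__":
        docs = [f"<book><title>T{i}</title></book>" for i in range(6)]
        print(distributed_query(docs, "./title"))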

    Semantics and efficient evaluation of partial tree-pattern queries on XML

    Current applications export and exchange XML data on the web. Usually, XML data is queried using keyword queries or using the standard structured query language XQuery, the core of which consists of the navigational query language XPath. In this context, one major challenge is the querying of the data when the structure of the data sources is complex or not fully known to the user. Another challenge is the integration of multiple data sources that export data with structural differences and irregularities. In this dissertation, a query language for XML called the Partial Tree-Pattern Query (PTPQ) language is considered. PTPQs generalize and strictly contain Tree-Pattern Queries (TPQs) and can express a broad structural fragment of XPath. Because of their expressive power and flexibility, they are useful for querying XML documents whose structure is complex or not fully known to the user, and for integrating XML data sources with different structures. The dissertation focuses on three issues. The first is the design of efficient non-main-memory evaluation methods for PTPQs. The second is the assignment of semantics to PTPQs so that they return meaningful answers. The third is the development of techniques for answering TPQs using materialized views. Non-main-memory XML query evaluation can be done in two modes (which also define two evaluation models). In the first mode, data is preprocessed and indexes, called inverted lists, are built for it. In the second mode, data is unindexed and arrives continuously in the form of a stream. Existing algorithms cannot be used directly or indirectly to efficiently compute PTPQs in either mode. Initially, the problem of efficiently evaluating partial path queries in the inverted lists model has been addressed. Partial path queries form a subclass of PTPQs which is not contained in the class of TPQs. Three novel algorithms for evaluating partial path queries, including a holistic one, have been designed. The analytical and experimental results show that the holistic algorithm outperforms the other two. These results have been extended into holistic and non-holistic approaches for PTPQs in the inverted lists model; the experiments again show the superiority of the holistic approach. The dissertation has also addressed the problem of evaluating PTPQs in the streaming model, and two original efficient streaming algorithms for PTPQs have been designed. Compared to the only known streaming algorithm that supports an extension of TPQs, the experimental results show that the proposed algorithms perform better by orders of magnitude while consuming a much smaller fraction of memory space. An original approach for assigning semantics to PTPQs has also been devised. The novel semantics seamlessly applies to keyword queries and to queries with structural restrictions. In contrast to previous approaches that operate locally on data, the proposed approach operates globally on structural summaries of data to extract tree patterns. An experimental evaluation shows that, compared to previous approaches, ours has perfect recall for XML documents with both complete and incomplete data, and better precision than approaches with similar recall. Finally, the dissertation has addressed the problem of answering XML queries using exclusively materialized views. An original approach for materializing views in the context of the inverted lists model has been suggested.
    Necessary and sufficient conditions have been provided for tree-pattern query answerability in terms of view-to-query homomorphisms. A time- and space-efficient algorithm was designed for deciding query answerability, and a technique for computing queries over view materializations using stack-based holistic algorithms was developed. Further, optimizations were developed which (a) minimize storage space and avoid redundancy by materializing views as bitmaps, and (b) optimize the evaluation of queries over the views by applying bitwise operations on view materializations. The experimental results show that the proposed approach obtains considerably higher hit rates than previous approaches, significantly speeds up query evaluation compared to evaluation without views, and scales very smoothly in terms of storage space and computational overhead.
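
    The snippet below sketches the bitmap optimization in isolation: each materialized view is stored as a bitmap over globally numbered document nodes, and the nodes relevant to a combination of views are obtained with a single bitwise AND. The node numbering and view contents are illustrative assumptions.

    # Store each view materialization as a bitmap (a Python int used as a bitset)
    # and combine views with bitwise operations.
    def to_bitmap(node_ids):
        bm = 0
        for n in node_ids:
            bm |= 1 << n
        return bm

    def from_bitmap(bm):
        ids, n = [], 0
        while bm:
            if bm & 1:
                ids.append(n)
            bm >>= 1
            n += 1
        return ids

    view_a = to_bitmap([2, 5, 7, 9])     # nodes matched by a first view
    view_b = to_bitmap([5, 6, 9, 12])    # nodes matched by a second view

    # nodes satisfying both views, computed with a single bitwise operation
    print(from_bitmap(view_a & view_b))  # [5, 9]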