53 research outputs found

    Query estimation techniques in database systems

    Get PDF
    The effectiveness of query optimization in database systems critically depends on the system's ability to assess the execution costs of different query execution plans. For this purpose, the sizes and data distributions of the intermediate results generated during plan execution need to be estimated as accurately as possible. This estimation requires the maintenance of statistics on the data stored in the database, which are referred to as data synopses. While the problem of query cost estimation has received significant attention for over a decade, it has remained an open issue in practice, because most previous techniques have focused on singular aspects of the problem, such as minimizing the estimation error for a single type of query and a single data distribution, whereas database management systems generally need to support a wide range of queries over a number of datasets. In this thesis I introduce a new technique for query result estimation, which extends existing techniques in that it offers estimation for all combinations of the three major database operators: selection, projection, and join. The approach is based on separate and independent approximations of the attribute values contained in a dataset and their frequencies. Through the use of space-filling curves, the approach extends to multi-dimensional data while maintaining its accuracy and computational properties. The resulting estimation accuracy is competitive with specialized techniques and superior to the histogram techniques currently implemented in commercial database management systems. Because data synopses reside in main memory, they compete for available space with the database cache and query execution buffers. Consequently, the memory available to data synopses needs to be used efficiently. This results in a physical design problem for data synopses: determining the best set of synopses for a given combination of datasets, queries, and available memory. This thesis introduces a formalization of the problem and efficient algorithmic solutions. All discussed techniques are evaluated with regard to their overhead and resulting estimation accuracy on a variety of synthetic and real-life datasets.
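
    The idea of using a space-filling curve to linearize multi-dimensional data before building a one-dimensional synopsis can be illustrated with a Z-order (Morton) encoding. The sketch below is a generic illustration of that idea under assumed parameters (16-bit integer coordinates), not the thesis implementation.

```python
# Minimal sketch: map multi-dimensional attribute values to one dimension
# with a Z-order (Morton) space-filling curve, so that a one-dimensional
# synopsis (e.g., a histogram over the Morton codes) can summarize the data.
# Generic illustration only; the 16-bit coordinate width is an assumption.

def morton_encode(coords, bits=16):
    """Interleave the bits of each coordinate into a single Morton code."""
    code = 0
    d = len(coords)
    for bit in range(bits):
        for dim, value in enumerate(coords):
            code |= ((value >> bit) & 1) << (bit * d + dim)
    return code

# Example: 2-D points collapse to 1-D keys that tend to preserve spatial
# locality, so a 1-D synopsis can now be built over the sorted keys.
points = [(3, 5), (3, 6), (200, 7)]
keys = sorted(morton_encode(p) for p in points)
print(keys)
```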

    PolyFit: Polynomial-based Indexing Approach for Fast Approximate Range Aggregate Queries

    Get PDF
    Range aggregate queries find frequent application in data analytics. In some use cases, approximate results are preferred over accurate results if they can be computed rapidly and satisfy approximation guarantees. Inspired by a recent indexing approach, we provide means of representing a discrete point data set by continuous functions that can then serve as compact index structures. More specifically, we develop a polynomial-based indexing approach, called PolyFit, for processing approximate range aggregate queries. PolyFit is capable of supporting multiple types of range aggregate queries, including COUNT, SUM, MIN and MAX aggregates, with guaranteed absolute and relative error bounds. Experimental results show that PolyFit is faster, more accurate, and more compact than existing learned index structures.
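
    The core idea behind answering approximate COUNT range aggregates with a fitted polynomial can be sketched as follows: fit a polynomial to the cumulative count function of the sorted keys and evaluate it at the two range endpoints. This is a simplified illustration under assumptions (a single global cubic polynomial, synthetic data, no enforced error bounds), whereas PolyFit itself partitions the key domain and guarantees absolute and relative error bounds.

```python
# Minimal sketch of polynomial-based range COUNT estimation: fit a polynomial
# to the cumulative count function of the sorted keys and answer COUNT(lo, hi)
# by evaluating it at the two endpoints. Assumptions: one global degree-3
# polynomial and synthetic data; no guaranteed error bounds are enforced here.
import numpy as np

rng = np.random.default_rng(0)
keys = np.sort(rng.normal(loc=100.0, scale=15.0, size=10_000))
cum_count = np.arange(1, keys.size + 1, dtype=float)

coeffs = np.polyfit(keys, cum_count, deg=3)   # fit the CDF-like curve
poly = np.poly1d(coeffs)

def approx_count(lo, hi):
    """Approximate number of keys in [lo, hi] via the fitted polynomial."""
    return max(0.0, poly(hi) - poly(lo))

exact = np.sum((keys >= 90) & (keys <= 110))
print(f"approx={approx_count(90, 110):.0f}  exact={exact}")
```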

    Topics in Massive Data Summarization.

    Full text link
    We consider three problems in this thesis. First, we want to construct a nearly workload-optimal histogram. Given B, we want to find a near-optimal B-bucket histogram under an associated workload w within a 1 + epsilon error tolerance. In the cash register model, where data is streamed as a series of updates, we can build a histogram using polylogarithmic space, polylogarithmic time to process each item, and polylogarithmic post-processing time to build the histogram. All of these results require the workload to be stored explicitly, since we show that if the workload is summarized lossily in small space, algorithmic results such as the above do not exist. Second, we consider the problem of private computation of approximate heavy hitters. Alice and Bob each hold a vector and, in the vector sum, they want to find the B largest values along with their indices. We show how to solve the problem privately with polylogarithmic communication, polynomial work, and a constant number of rounds, in the sense that Alice and Bob learn nothing beyond what is implied by their inputs, the ideal top-B output, and the goodness of approximation (equivalently, the Euclidean norm of the vector sum). We give lower bounds showing that the Euclidean norm must be leaked by any efficient algorithm. In the third problem, we want to build a near-optimal histogram on probabilistic data streams. Given B, we want to find the near-optimal B-bucket histogram on probabilistic data streams under both the L1 and L2 measures. We give deterministic algorithms without sampling that build histograms using polylogarithmic space, polylogarithmic time to process each item, and polylogarithmic post-processing time. The result under the L2 measure is within a 1 + epsilon error tolerance, while the result under the L1 measure is a heuristic; we also outline a direction for giving guarantees to the heuristic.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/60841/1/xuanzh_1.pd
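
    As a reference point for the histogram problems above, the classic offline V-optimal construction finds the exact error-optimal B-bucket histogram by dynamic programming over prefixes. The sketch below shows that offline baseline under the L2 (sum-of-squared-error) measure; the streaming algorithms of the thesis approximate this optimum within a 1 + epsilon factor in polylogarithmic space, which the sketch does not attempt.

```python
# Minimal sketch of the classic offline V-optimal histogram: dynamic
# programming over prefixes finds the B-bucket partition that minimizes the
# total squared error (L2) when each bucket is represented by its mean.
# This offline baseline uses O(n*B) space and O(n^2*B) time and serves only
# as a reference point for the streaming results described above.

def v_optimal_error(data, B):
    n = len(data)
    # prefix sums of values and squared values for O(1) bucket-error queries
    p, pp = [0.0] * (n + 1), [0.0] * (n + 1)
    for i, x in enumerate(data):
        p[i + 1] = p[i] + x
        pp[i + 1] = pp[i] + x * x

    def sse(i, j):
        """Squared error of a single bucket covering data[i:j]."""
        s, ss, m = p[j] - p[i], pp[j] - pp[i], j - i
        return ss - s * s / m

    INF = float("inf")
    # dp[k][j] = minimal total error for the first j items using k buckets
    dp = [[INF] * (n + 1) for _ in range(B + 1)]
    dp[0][0] = 0.0
    for k in range(1, B + 1):
        for j in range(k, n + 1):  # need at least k items for k buckets
            dp[k][j] = min(dp[k - 1][i] + sse(i, j) for i in range(k - 1, j))
    return dp[B][n]

print(v_optimal_error([1, 1, 2, 9, 9, 10, 3, 3], B=3))  # ~1.33
```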

    Accurate Data Approximation in Constrained Environments

    Get PDF
    Several data reduction techniques have been proposed recently as methods for providing fast and fairly accurate answers to complex queries over large quantities of data. Their use has been widespread, due to the multiple benefits that they may offer in several constrained environments and applications. Compressed data representations require less space to store, less bandwidth to communicate, and can provide, due to their size, very fast response times to queries. Sensor networks represent a typical constrained environment, due to the limited processing, storage, and battery capabilities of the sensor nodes. Large-scale sensor networks require tight data handling and data dissemination techniques. Transmitting a full-resolution data feed from each sensor back to the base station is often prohibitive due to (i) limited bandwidth that may not be sufficient to sustain a continuous feed from all sensors and (ii) increased power consumption due to the wireless multi-hop communication. In order to minimize the volume of the transmitted data, we can apply two well-known data reduction techniques: aggregation and approximation. In this dissertation we propose novel data reduction techniques for the transmission of measurements collected in sensor network environments. We first study the problem of summarizing multi-valued data feeds generated at a single sensor node, a step necessary for the transmission of large amounts of historical information collected at the node. The transmission of these measurements may either be periodic (i.e., when a certain amount of measurements has been collected) or in response to a query from the base station. We then also consider the approximate evaluation of aggregate continuous queries. A continuous query is a query that runs continuously until explicitly terminated by the user. These queries can be used to obtain a live estimate of some (aggregated) quantity, such as the total number of moving objects detected by the sensors.
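
    A minimal illustration of the approximation side of this approach is a piecewise-constant summary of a single sensor's measurement feed under a maximum-error bound, so that only one (start index, value) pair per segment needs to be transmitted. The sketch below shows the general idea only; the segmentation rule and the error threshold eps are assumptions, not the specific techniques of the dissertation.

```python
# Minimal sketch: approximate a sensor's measurement feed with piecewise-
# constant segments under a maximum (L-infinity) error bound eps, so that
# only one (start_index, value) pair per segment has to be transmitted.
# Generic illustration of in-node approximation; eps is an assumption.

def piecewise_constant_summary(readings, eps):
    segments = []                    # (start_index, representative_value) pairs
    start, lo, hi = 0, readings[0], readings[0]
    for i, x in enumerate(readings[1:], start=1):
        new_lo, new_hi = min(lo, x), max(hi, x)
        if new_hi - new_lo > 2 * eps:
            # close the current segment; its midpoint is within eps of every
            # reading it covers
            segments.append((start, (lo + hi) / 2))
            start, lo, hi = i, x, x
        else:
            lo, hi = new_lo, new_hi
    segments.append((start, (lo + hi) / 2))
    return segments

feed = [20.1, 20.2, 20.0, 20.3, 25.0, 25.1, 24.9, 20.2]
print(piecewise_constant_summary(feed, eps=0.5))
# [(0, 20.15), (4, 25.0), (7, 20.2)] -- three pairs instead of eight readings
```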

    Efficiently Processing Complex Queries in Sensor Networks

    Get PDF

    An experimental study of learned cardinality estimation

    Get PDF
    Cardinality estimation is a fundamental but long-unresolved problem in query optimization. Recently, multiple papers from different research groups consistently report that learned models have the potential to replace existing cardinality estimators. In this thesis, we ask a forward-thinking question: Are we ready to deploy these learned cardinality models in production? Our study consists of three main parts. First, we focus on the static environment (i.e., no data updates) and compare five new learned methods with eight traditional methods on four real-world datasets under a unified workload setting. The results show that learned models are indeed more accurate than traditional methods, but they often suffer from high training and inference costs. Second, we explore whether these learned models are ready for dynamic environments (i.e., frequent data updates). We find that they cannot catch up with fast data updates and return large errors, for different reasons. For less frequent updates, they can perform better, but there is no clear winner among them. Third, we take a deeper look into learned models and explore when they may go wrong. Our results show that the performance of learned methods can be greatly affected by changes in correlation, skewness, or domain size. More importantly, their behaviors are much harder to interpret and often unpredictable. Based on these findings, we identify two promising research directions (controlling the cost of learned models and making learned models trustworthy) and suggest a number of research opportunities. We hope that our study can guide researchers and practitioners to work together to eventually push learned cardinality estimators into real database systems.
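
    For concreteness, the sketch below shows what a minimal query-driven learned cardinality estimator looks like: a regressor is trained to map range-predicate features to log-cardinality and evaluated with the usual Q-error metric. The synthetic table, feature encoding, and choice of model are assumptions for the example and do not correspond to any particular method compared in the study.

```python
# Minimal sketch of a query-driven learned cardinality estimator: train a
# regressor to map range-predicate features to log-cardinality and evaluate
# with the Q-error metric. The toy table, feature encoding, and model choice
# are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
table = rng.normal(size=(100_000, 2))            # toy 2-column table

def sample_query():
    """Draw a random 2-D range predicate and compute its true cardinality."""
    lo = rng.uniform(-2, 1, size=2)
    hi = lo + rng.uniform(0.1, 2, size=2)
    card = np.sum(np.all((table >= lo) & (table <= hi), axis=1))
    return np.concatenate([lo, hi]), max(card, 1)

X, y = zip(*(sample_query() for _ in range(2000)))
X, y = np.array(X), np.log(np.array(y))          # predict log-cardinality

model = GradientBoostingRegressor().fit(X[:1500], y[:1500])
est, true = np.exp(model.predict(X[1500:])), np.exp(y[1500:])
q_error = np.maximum(est / true, true / est)     # standard Q-error metric
print(f"median q-error: {np.median(q_error):.2f}")
```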
    • 

    corecore