Statistical structures for internet-scale data management
Efficient query processing in traditional database management systems relies on statistics on base data. For centralized systems, there is a rich body of research results on such statistics, from simple aggregates to more elaborate synopses such as sketches and histograms. For Internet-scale distributed systems, on the other hand, statistics management still poses major challenges. With the work in this paper we aim to endow peer-to-peer data management over structured overlays with the power associated with such statistical information, with emphasis on meeting the scalability challenge. To this end, we first contribute efficient, accurate, and decentralized algorithms that can compute key aggregates such as Count, CountDistinct, Sum, and Average. We show how to construct several types of histograms, such as simple Equi-Width, Average-Shifted Equi-Width, and Equi-Depth histograms. We present a full-fledged open-source implementation of these tools for distributed statistical synopses, and report on a comprehensive experimental evaluation of our contributions in terms of efficiency, accuracy, and scalability.
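The decentralized histogram construction described above can be illustrated with a minimal sketch: each peer builds an equi-width histogram over its local data, and the global synopsis is the element-wise sum of the bucket counts. This assumes all peers agree on a common value range and bucket count; the function names are hypothetical, not from the paper's implementation.

```python
# Hypothetical sketch: merging per-peer equi-width histograms into a
# global synopsis, assuming all peers share the same range [lo, hi)
# and the same number of buckets.
def local_histogram(values, lo, hi, n_buckets):
    """Build an equi-width histogram (list of bucket counts) over [lo, hi)."""
    width = (hi - lo) / n_buckets
    counts = [0] * n_buckets
    for v in values:
        # Clamp the last bucket so v == hi - epsilon still lands inside.
        idx = min(int((v - lo) / width), n_buckets - 1)
        counts[idx] += 1
    return counts

def merge_histograms(histograms):
    """Aggregate bucket counts from many peers (an order-independent sum,
    so partial merges along an overlay tree yield the same result)."""
    return [sum(col) for col in zip(*histograms)]

# Two peers holding disjoint data partitions
h1 = local_histogram([1, 2, 3, 8], lo=0, hi=10, n_buckets=5)
h2 = local_histogram([4, 5, 9], lo=0, hi=10, n_buckets=5)
global_hist = merge_histograms([h1, h2])  # [1, 2, 2, 0, 2]
```

Because the merge is a commutative, associative sum, it can be evaluated hierarchically over a structured overlay, which is what makes this kind of synopsis scalable; the global Count also falls out for free as the sum of all buckets.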
Distributed top-k aggregation queries at large
Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially in distributed settings, where the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments, with three different real-life datasets and using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
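The TPUT framework this paper builds on can be sketched in its first phase: each node ships its local top-k list, the coordinator lower-bounds item sums from these partial views, and the k-th best partial sum yields a uniform threshold sent back to the nodes. The sketch below is a simplified illustration of that pruning idea under made-up scores, not the full three-phase protocol.

```python
# Simplified sketch of TPUT's phase-1 pruning idea: with m nodes,
# any item whose local score is below tau/m at every node cannot
# reach the k-th best total score tau.
from collections import defaultdict

def tput_phase1_threshold(node_lists, k):
    """node_lists: one dict per node mapping item -> local score.
    Returns the uniform threshold T = tau / m sent back to all nodes."""
    m = len(node_lists)
    partial = defaultdict(float)
    for scores in node_lists:
        top = sorted(scores.items(), key=lambda kv: -kv[1])[:k]
        for item, s in top:          # lower-bound each item's total score
            partial[item] += s
    # tau = k-th highest partial sum, a lower bound on the true k-th score
    tau = sorted(partial.values(), reverse=True)[k - 1]
    return tau / m

nodes = [
    {"a": 9, "b": 7, "c": 1},
    {"a": 8, "b": 2, "c": 6},
]
T = tput_phase1_threshold(nodes, k=2)  # tau = 7, m = 2, so T = 3.5
```

The paper's optimizations replace this uniform, worst-case threshold with data-adaptive scan depths derived from histogram-based cost estimates, which is where the per-source synopses come in.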
ALECE: An Attention-based Learned Cardinality Estimator for SPJ Queries on Dynamic Workloads (Extended)
For efficient query processing, DBMS query optimizers have for decades relied on delicate cardinality estimation methods. In this work, we propose an Attention-based LEarned Cardinality Estimator (ALECE for short) for SPJ queries. The core idea is to discover the implicit relationships between queries and underlying dynamic data using attention mechanisms in ALECE's two modules, which are built on top of carefully designed featurizations for data and queries. In particular, from all attributes in the database, the data-encoder module obtains organic and learnable aggregations which implicitly represent correlations among the attributes, whereas the query-analyzer module builds a bridge between the query featurizations and the data aggregations to predict the query's cardinality. We experimentally evaluate ALECE on multiple dynamic workloads. The results show that ALECE enables PostgreSQL's optimizer to achieve nearly optimal performance, clearly outperforming its built-in cardinality estimator and other alternatives.
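The "bridge" between query featurizations and data aggregations rests on standard scaled dot-product attention. The sketch below illustrates only that generic mechanism with random vectors; it is not ALECE's actual architecture, and all shapes and names here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of scaled dot-product attention, the mechanism
# such estimators use: a featurized query attends over per-attribute
# data aggregations to form a context vector fed to the predictor.
def scaled_dot_attention(q, K, V):
    """q: (d,), K and V: (n, d). Returns a weighted combination of V's rows."""
    scores = K @ q / np.sqrt(q.shape[0])      # similarity of query to each key
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ V                        # convex combination, shape (d,)

rng = np.random.default_rng(0)
query_feat = rng.normal(size=4)        # featurized SPJ query (hypothetical)
data_aggs = rng.normal(size=(3, 4))    # learned per-attribute aggregations
ctx = scaled_dot_attention(query_feat, data_aggs, data_aggs)
```

In a trained model, `K`, `V`, and the featurizations would be learned jointly so that the context vector carries the attribute correlations relevant to the query's predicates.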