102 research outputs found
The RDF-3X Engine for Scalable Management of RDF Data
RDF is a data model for schema-free structured information that is gaining momentum in the context of Semantic-Web data, life sciences, and Web 2.0 platforms. The "pay-as-you-go" nature of RDF and the flexible pattern-matching capabilities of its query language, SPARQL, entail efficiency and scalability challenges for complex queries with long join paths. This paper presents the RDF-3X engine, an implementation of SPARQL that achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query processing. The physical design is identical for all RDF-3X databases regardless of their workloads, and completely eliminates the need for index tuning by maintaining exhaustive indexes for all permutations of subject-property-object triples and their binary and unary projections. These indexes are highly compressed, and the query processor can aggressively leverage fast merge joins with excellent processor-cache performance. The query optimizer can choose optimal join orders even for complex queries, using a cost model that includes statistical synopses for entire join paths. Although RDF-3X is optimized for queries, it also provides good support for efficient online updates by means of a staging architecture: direct updates to the main database indexes are deferred and instead applied to compact differential indexes, which are later merged into the main indexes in batches. Experimental studies with several large-scale datasets of more than 50 million RDF triples, using benchmark queries that include pattern matching, many-way star joins, and long path joins, demonstrate that RDF-3X can outperform the previously best alternatives by one to two orders of magnitude.
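The exhaustive-indexing idea in the abstract can be sketched in a few lines: keep the triple set sorted under all six (subject, property, object) orderings so that any triple pattern with a bound prefix becomes a range scan on one of the indexes. This is a minimal illustration of the concept, not RDF-3X's actual compressed B+-tree implementation; the example triples and the `"\xff"` upper-bound sentinel (which assumes plain ASCII terms) are made up here.

```python
from bisect import bisect_left, bisect_right
from itertools import permutations

# Toy triple store: each triple is (subject, property, object).
triples = [
    ("alice", "knows", "bob"),
    ("alice", "likes", "rdf"),
    ("bob",   "knows", "carol"),
]

# One sorted index per permutation of the three positions,
# mirroring RDF-3X's six SPO/SOP/PSO/POS/OSP/OPS orders.
ORDERS = {perm: sorted(tuple(t[i] for i in perm) for t in triples)
          for perm in permutations(range(3))}

def scan(order, prefix):
    """Range-scan the index for `order` over a bound prefix of a pattern."""
    idx = ORDERS[order]
    lo = bisect_left(idx, prefix)
    # "\xff" is a crude upper-bound sentinel, assuming ASCII-only terms.
    hi = bisect_right(idx, prefix + ("\xff",) * (3 - len(prefix)))
    return idx[lo:hi]

# Pattern (alice, knows, ?o): bound subject+property -> scan the SPO order.
print(scan((0, 1, 2), ("alice", "knows")))
# Pattern (?s, knows, ?o): bound property only -> scan a P-first order.
print(scan((1, 0, 2), ("knows",)))
```

Because every index is kept sorted, two such scans can feed a merge join directly, which is the access pattern the abstract credits for the cache-friendly performance.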
Optimal column layout for hybrid workloads
Data-intensive analytical applications need to support both efficient reads and writes. However, a data layout that is good for an update-heavy workload is usually not well suited to a read-mostly one, and vice versa. Modern analytical data systems rely on columnar layouts and employ delta stores to inject new data and updates. We show that for hybrid workloads we can achieve close to one order of magnitude better performance by tailoring the column layout design to the data and query workload. Our approach navigates the possible design space of the physical layout: it organizes each column's data by determining the number of partitions, their corresponding sizes and ranges, and the amount of buffer space and how it is allocated. We frame these design decisions as an optimization problem that, given workload knowledge and performance requirements, provides an optimal physical layout for the workload at hand. To evaluate this work, we build an in-memory storage engine, Casper, and show that it outperforms state-of-the-art data layouts of analytical systems for hybrid workloads. Casper delivers up to 2.32x higher throughput for update-intensive workloads and up to 2.14x higher throughput for hybrid workloads. We further show how to make data layout decisions robust to workload variation by carefully selecting the input of the optimization.
http://www.vldb.org/pvldb/vol12/p2393-athanassoulis.pdf
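The "design decisions as an optimization problem" framing can be illustrated with a deliberately tiny cost model: given a read/write mix, pick the partition count that minimizes workload-weighted cost. The cost formulas and constants below are invented for illustration and are not the paper's actual model.

```python
# Toy version of workload-driven layout tuning: choose the number of
# range partitions k for a column of N rows by minimizing a
# workload-weighted cost. All cost terms are illustrative assumptions.
N = 1_000_000              # rows in the column
reads, writes = 0.7, 0.3   # workload mix (fractions of operations)

def cost(k):
    part = N / k
    read_cost = part            # a range query scans roughly one partition
    write_cost = part / 2 + k   # shift within a partition + per-partition upkeep
    return reads * read_cost + writes * write_cost

best_k = min(range(1, 4096), key=cost)
print(best_k)
```

Even this toy model reproduces the qualitative trade-off in the abstract: more partitions make scans cheaper but add update and bookkeeping overhead, so the optimum moves with the read/write mix.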
Clustering-Initialized Adaptive Histograms and Probabilistic Cost Estimation for Query Optimization
An assumption behind self-tuning histograms has been that they can "learn" the dataset if given enough training queries. We show that this is not the case with current approaches. The quality of the histogram depends on its initial configuration: starting with a few good buckets can improve the efficiency of learning. Without this, the histogram is likely to stagnate, i.e., converge to a bad configuration and stop learning. We also present a probabilistic cost estimation model.
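The "few good initial buckets" idea can be sketched by seeding bucket boundaries from a cheap one-dimensional clustering of a data sample, instead of starting from uniform buckets. This is a minimal sketch under assumed details (k-means on a sample, midpoints between centers as boundaries), not the paper's actual initialization algorithm.

```python
import random

random.seed(0)
# Synthetic bimodal sample standing in for a column's value distribution.
sample = [random.gauss(10, 1) for _ in range(500)] + \
         [random.gauss(50, 5) for _ in range(500)]

def kmeans_1d(xs, k, iters=20):
    """Plain 1-D k-means: assign each value to its nearest center."""
    centers = sorted(random.sample(xs, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

centers = kmeans_1d(sample, 4)
# Initial bucket boundaries: midpoints between adjacent cluster centers,
# so bucket edges land in the sparse regions between value clusters.
bounds = [(a + b) / 2 for a, b in zip(centers, centers[1:])]
print(bounds)
```

A self-tuning histogram initialized this way starts with buckets already aligned to the data's modes, which is exactly the head start the abstract argues is needed to avoid stagnation.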
Analytical Query Execution Optimized for all Layers of Modern Hardware
Analytical database queries are at the core of business intelligence and decision support. To analyze the vast amounts of data available today, query execution needs to be orders of magnitude faster. Hardware advances have made a profound impact on database design and implementation. The large main memory capacity allows queries to execute exclusively in memory and shifts the bottleneck from disk access to memory bandwidth. In the new setting, to optimize query performance, databases must be aware of an unprecedented multitude of complicated hardware features. This thesis focuses on the design and implementation of highly efficient database systems by optimizing analytical query execution for all layers of modern hardware. The hardware layers include the network across multiple machines, main memory and the NUMA interconnection across multiple processors, the multiple levels of caches across multiple processor cores, and the execution pipeline within each core. For the network layer, we introduce a distributed join algorithm that minimizes network traffic. For the memory hierarchy, we describe partitioning variants aware of the dynamics of the CPU caches and the NUMA interconnection. To improve the memory access rate of linear scans, we optimize lightweight compression variants and evaluate their trade-offs. To accelerate query execution within the core pipeline, we introduce advanced SIMD vectorization techniques generalizable across multiple operators. We evaluate our algorithms and techniques on both mainstream hardware and on many-integrated-core platforms, and combine our techniques in a new query engine design that can better utilize the features of many-core CPUs. In the era of hardware becoming increasingly parallel and datasets consistently growing in size, this thesis can serve as a compass for developing hardware-conscious databases with truly high-performance analytical query execution.
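One cache-conscious building block of the kind this abstract refers to is radix partitioning: fanning keys out by a few low-order bits per pass so that each partition's working set fits in cache before it is joined or aggregated. The sketch below is illustrative only and is not taken from the thesis.

```python
# Illustrative radix partitioning: distribute keys into 2**bits
# partitions by their low-order bits, so later per-partition work
# touches a cache-sized subset of the data.
def radix_partition(keys, bits):
    fanout = 1 << bits
    parts = [[] for _ in range(fanout)]
    for k in keys:
        parts[k & (fanout - 1)].append(k)  # low `bits` bits pick the partition
    return parts

parts = radix_partition([17, 4, 33, 8, 21], bits=2)
print(parts)  # keys grouped by their low 2 bits
```

Real implementations do this in multiple passes with a small fanout per pass, precisely to keep each pass's write targets within the cache and TLB limits the abstract alludes to.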
FactorJoin: A New Cardinality Estimation Framework for Join Queries
Cardinality estimation is one of the most fundamental and challenging problems in query optimization. Neither classical nor learning-based methods yield satisfactory performance when estimating the cardinality of join queries. They either rely on simplified assumptions, leading to ineffective cardinality estimates, or build large models to understand the data distributions, leading to long planning times and a lack of generalizability across queries.

In this paper, we propose FactorJoin, a new framework for estimating join queries. FactorJoin combines the classical join-histogram idea, which handles joins efficiently, with learning-based methods, which capture attribute correlations accurately. Specifically, FactorJoin scans every table in a database and builds single-table conditional distributions during an offline preparation phase. When a join query arrives, FactorJoin translates it into a factor graph model over the learned distributions to estimate its cardinality effectively and efficiently.

Unlike existing learning-based methods, FactorJoin does not need to de-normalize joins upfront or require executed query workloads to train the model. Since it relies only on single-table statistics, FactorJoin has small space overhead and is extremely easy to train and maintain. In our evaluation, FactorJoin produces more effective estimates than the previous state-of-the-art learning-based methods, with 40x lower estimation latency, 100x smaller model size, and 100x faster training, at comparable or better accuracy. In addition, FactorJoin can estimate 10,000 sub-plan queries within one second to optimize a query plan, which is very close to the traditional cardinality estimators in commercial DBMSs.

Comment: Paper accepted by SIGMOD 202
Optimal column layout for hybrid workloads (VLDB 2020 talk)
Ore estimation and selection of underground mining methods for some copper deposits
Compiling Communication-Minimizing Query Plans
Because of the low arithmetic intensity of relational database operators, the performance of in-memory column stores ought to be bound by main-memory bandwidth, and in practice, highly optimized operator implementations already achieve close to their peak theoretical performance. By itself, this would imply that hardware acceleration for analytics is of limited utility, but I show that the emergence of full-query compilation presents new opportunities to reduce memory traffic and trade computation for communication, meaning that database-oriented processors may yet be worth designing. Moreover, the communication costs of queries on a given processor and memory hierarchy are determined by factors below the level of abstraction expressed in traditional query plans, such as how operators are (or are not) fused together, how execution is parallelized and cache-blocked, and how intermediate results are arranged in memory. I present a Scala-embedded programming language called Ressort that exposes these machine-level aspects of query compilation and emits parallel C++/OpenMP code as its target, expressing a greater range of algorithmic variants for each query than would be easy to study by hand.
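The operator-fusion point in this abstract can be illustrated with a trivial query pipeline: an unfused plan materializes each intermediate result in memory (paying memory traffic for it), while a fused plan streams each element through all operators in one pass, trading extra per-element computation for less communication. This toy example is mine, not Ressort code.

```python
# map -> filter -> aggregate over a small input.
data = list(range(1_000))

# Unfused: two full passes; the mapped column is materialized.
tmp = [x * 2 for x in data]                    # pass 1: map, written to memory
unfused = sum(x for x in tmp if x % 3 == 0)    # pass 2: filter + aggregate

# Fused: one pass, no intermediate array; x * 2 is recomputed in the
# filter, i.e., computation is traded for communication.
fused = sum(x * 2 for x in data if (x * 2) % 3 == 0)

assert fused == unfused
print(fused)
```

On real hardware the fused form reads `data` once instead of writing and re-reading `tmp`, which is exactly the below-the-query-plan decision (fuse or materialize) the abstract argues a compiler should expose.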