Hardware-conscious Query Processing in GPU-accelerated Analytical Engines
In order to improve their power efficiency and computational capacity, modern servers are adopting hardware accelerators, especially GPUs. Modern analytical DBMS engines have been highly optimized for multi-core multi-CPU query execution, but lack the necessary abstractions to support concurrent hardware-conscious query execution over multiple heterogeneous devices and, thus, are unable to take full advantage of the available accelerators. In this work, we present a Heterogeneity-conscious Analytical query Processing Engine (HAPE), a hardware-conscious analytical engine that targets efficient concurrent multi-CPU multi-GPU query execution. HAPE decomposes heterogeneous query execution into i) efficient single-device and ii) concurrent multi-device query execution. It uses hardware-conscious algorithms designed for single-device execution and combines them into efficient intra-device hardware-conscious execution modules, via code generation. HAPE combines these modules to achieve concurrent multi-device execution by handling data and control transfers. We validate our design by building a prototype and evaluate its performance on a co-processing radix join and on TPC-H queries. We show that it achieves up to 10x and 3.5x speed-up on the join against CPU and GPU alternatives, and 1.6x-8x against state-of-the-art CPU- and GPU-based commercial DBMS on the queries.
A database accelerator for energy-efficient query processing and optimization
Data processing on a continuously growing amount of information and the increasing power restrictions have become a ubiquitous challenge in our world today. Besides parallel computing, a promising approach to improve the energy efficiency of current systems is to integrate specialized hardware. This paper presents a Tensilica RISC processor extended with an instruction set to accelerate basic database operators frequently used in modern database systems. The core was taped out in a 28 nm SLP CMOS technology and allows energy-efficient query processing as well as query optimization by applying selectivity estimation techniques. Our chip measurements show a 1000x energy improvement on selected database operators compared to state-of-the-art systems.
Make the most out of your SIMD investments: counter control flow divergence in compiled query pipelines
Increasing single instruction multiple data (SIMD) capabilities in modern hardware allow for the compilation of data-parallel query pipelines. This means GPU-like challenges arise: control flow divergence causes the underutilization of vector-processing units. In this paper, we present efficient algorithms for the AVX-512 architecture to address this issue. These algorithms allow for the fine-grained assignment of new tuples to idle SIMD lanes. Furthermore, we present strategies for their integration with compiled query pipelines so that tuples are never evicted from registers. We evaluate our approach with three query types: (i) a table scan query based on TPC-H Query 1, which performs up to 34% faster when addressing underutilization, (ii) a hash-join query, where we observe up to 25% higher performance, and (iii) an approximate geospatial join query, which shows performance improvements of up to 30%.
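The lane-refill idea in this abstract can be illustrated with a small scalar simulation. This is a hypothetical sketch, not the paper's AVX-512 code: each tuple needs a variable number of iterations (think divergent probe chains), and the refill strategy assigns fresh tuples to lanes that finish early, so every vector step runs at full width. Names such as `run_with_refill` are invented for the example.

```python
LANES = 8  # simulated SIMD width

def run_with_refill(work_items):
    """Each item needs `work` iterations; idle lanes are refilled with
    fresh items, so divergence does not leave lanes unused."""
    lanes = [0] * LANES            # remaining work per lane (0 = idle)
    it = iter(work_items)
    steps = 0
    while True:
        for i in range(LANES):     # assign new tuples to idle lanes
            if lanes[i] == 0:
                lanes[i] = next(it, 0)
        if all(w == 0 for w in lanes):
            break                  # input exhausted and all lanes drained
        steps += 1                 # one full-width SIMD iteration
        lanes = [max(w - 1, 0) for w in lanes]
    return steps

def run_without_refill(work_items):
    """Baseline: a batch of LANES items is loaded together, and the whole
    batch waits for its slowest (most divergent) lane before reloading."""
    steps = 0
    for start in range(0, len(work_items), LANES):
        steps += max(work_items[start:start + LANES])
    return steps
```

With a workload that alternates short and long probe chains, e.g. `[1, 8] * 32`, the baseline pays the worst lane in every batch (64 steps), while the refill variant stays close to the lower bound of total-work divided by lane count.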
Case for holistic query evaluation
In this thesis we present the holistic query evaluation model. We propose a novel query engine design that exploits the characteristics of modern processors when queries execute inside main memory. The holistic model (a) is based on template-based code generation for each executed query, (b) uses multithreading to adapt to multicore processor architectures, and (c) addresses the optimization problem of scheduling multiple threads for intra-query parallelism.
Main-memory query execution is a common operation in modern database servers equipped with tens or hundreds of gigabytes of RAM. In such an execution environment, the query engine needs to adapt to the CPU characteristics to boost performance. For this purpose, holistic query evaluation applies customized code generation to database query evaluation. The idea is to use a collection of highly efficient code templates and dynamically instantiate them to create query- and hardware-specific source code. The source code is compiled and dynamically linked to the database server for processing. Code generation diminishes the bloat of higher-level programming abstractions necessary for implementing generic, interpreted, SQL query engines. At the same time, the generated code is customized for the hardware it will run on. The holistic model supports the most frequently used query processing algorithms, namely sorting, partitioning, join evaluation, and aggregation, thus allowing the efficient evaluation of complex DSS or OLAP queries.
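The template-instantiation step described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the thesis generates compiled native code, not Python): a generic scan-filter template is filled in with query-specific predicate and projection expressions, compiled once, and then executed without per-tuple interpretation overhead.

```python
# Hypothetical scan-filter template; the placeholder names are invented.
SCAN_FILTER_TEMPLATE = """
def scan_filter(table):
    out = []
    append = out.append            # hoisted attribute lookup
    for row in table:
        if {predicate}:            # query-specific code, inlined
            append(({projection}))
    return out
"""

def generate_scan_filter(predicate, projection):
    """Instantiate the template with query-specific expressions and
    compile the result into a callable operator."""
    source = SCAN_FILTER_TEMPLATE.format(predicate=predicate,
                                         projection=projection)
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    return namespace["scan_filter"]

# Instantiate for one specific query:
#   SELECT a, a * b FROM t WHERE a > 10 AND b < 5
query_fn = generate_scan_filter("row[0] > 10 and row[1] < 5",
                                "row[0], row[0] * row[1]")

table = [(12, 3), (5, 1), (20, 4), (15, 9)]
print(query_fn(table))             # [(12, 36), (20, 80)]
```

The generated function contains only the code this query needs; the generic dispatch that an interpreted engine would perform per tuple has been specialized away at generation time.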
Modern CPUs follow multicore designs with multiple threads running in parallel. The dataflow of query engine algorithms needs to be adapted to exploit such designs. We identify memory accesses and thread synchronization as the main bottlenecks in a multicore execution environment. We extend the holistic query evaluation model and propose techniques to mitigate the impact of these bottlenecks on multithreaded query evaluation. We analytically model the expected performance and scalability of the proposed algorithms according to the hardware specifications. The analytical performance expressions can be used by the optimizer to statically estimate the speedup of multithreaded query execution.
Finally, we examine the problem of thread scheduling in the context of multithreaded query evaluation on multicore CPUs. The search space of possible operator execution schedules grows quickly, thus forbidding the use of exhaustive techniques. We model intra-query parallelism on multicore systems and present scheduling heuristics that result in different degrees of schedule quality and optimization cost. We identify cases where each of our proposed algorithms, or combinations of them, is expected to generate schedules of high quality at an acceptable running cost.
Physical Representation-based Predicate Optimization for a Visual Analytics Database
Querying the content of images, video, and other non-textual data sources requires expensive content extraction methods. Modern extraction techniques are based on deep convolutional neural networks (CNNs) and can classify objects within images with astounding accuracy. Unfortunately, these methods are slow: processing a single image can take about 10 milliseconds on modern GPU-based hardware. As massive video libraries become ubiquitous, running a content-based query over millions of video frames is prohibitive.
One promising approach to reducing the runtime cost of queries over visual content is to use a hierarchical model, such as a cascade, where simple cases are handled by an inexpensive classifier. Prior work has sought to design cascades that optimize the computational cost of inference by, for example, using smaller CNNs. However, we observe that there are critical factors besides inference time that dramatically impact the overall query time. Notably, by treating the physical representation of the input image as part of our query optimization---that is, by including image transforms, such as resolution scaling or color-depth reduction, within the cascade---we can optimize data handling costs and enable drastically more efficient classifier cascades.
In this paper, we propose Tahoma, which generates and evaluates many potential classifier cascades that jointly optimize the CNN architecture and input data representation. Our experiments on a subset of ImageNet show that Tahoma's input transformations speed up cascades by up to 35 times. We also find up to a 98x speedup over the ResNet50 classifier with no loss in accuracy, and a 280x speedup if some accuracy is sacrificed.
Comment: Camera-ready version of the paper submitted to ICDE 2019, in Proceedings of the 35th IEEE International Conference on Data Engineering (ICDE 2019).
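The cascade-with-transforms idea can be sketched with stub models. This is a hypothetical illustration, not Tahoma's implementation: the cheap classifier sees a reduced representation of the image (standing in for resolution scaling), and only inputs it is unsure about fall through to the expensive classifier. All model and function names here are invented for the example.

```python
def downscale(image):
    return image[::4]              # stand-in for resolution scaling

def cheap_model(pixels):
    """Stub for a small CNN: label plus a confidence score."""
    mean = sum(pixels) / len(pixels)
    label = "bright" if mean > 0.5 else "dark"
    return label, abs(mean - 0.5) * 2

calls = {"expensive": 0}           # count how often the slow path runs

def expensive_model(pixels):
    """Stub for a large, accurate CNN on the full-resolution input."""
    calls["expensive"] += 1
    mean = sum(pixels) / len(pixels)
    return ("bright" if mean > 0.5 else "dark"), 1.0

def make_cascade(cheap, expensive, threshold):
    def classify(image):
        label, confidence = cheap(downscale(image))
        if confidence >= threshold:
            return label           # resolved on the cheap path
        return expensive(image)[0] # fall through for hard cases
    return classify

classify = make_cascade(cheap_model, expensive_model, threshold=0.6)
images = [[0.9] * 16, [0.1] * 16, [0.55] * 16]
labels = [classify(img) for img in images]
```

Here the two clear-cut images never touch the expensive model; only the ambiguous one does, which is the source of the end-to-end savings the abstract reports.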
Forecasting the cost of processing multi-join queries via hashing for main-memory databases (Extended version)
Database management systems (DBMSs) carefully optimize complex multi-join queries to avoid expensive disk I/O. As servers today feature tens or hundreds of gigabytes of RAM, a significant fraction of many analytic databases becomes memory-resident. Even after careful tuning for an in-memory environment, a linear disk I/O model such as the one implemented in PostgreSQL may lead to query plans whose response time is up to 2X slower than the optimal multi-join query plan over memory-resident data. This paper introduces a memory I/O cost model to identify good evaluation strategies for complex query plans with multiple hash-based equi-joins over memory-resident data. The proposed cost model is carefully validated for accuracy using three different systems, including an Amazon EC2 instance, to control for hardware-specific differences. Prior work in parallel query evaluation has advocated right-deep and bushy trees for multi-join queries due to their greater parallelization and pipelining potential. A surprising finding is that the conventional wisdom from shared-nothing disk-based systems does not directly apply to the modern shared-everything memory hierarchy. As corroborated by our model, the performance gap between the optimal left-deep and right-deep query plan can grow to about 10X as the number of joins in the query increases.
Comment: 15 pages, 8 figures, extended version of the paper to appear in SoCC'1
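The intuition behind the left-deep vs. right-deep gap can be shown with a toy memory-I/O cost model. This is a hypothetical sketch with invented constants, not the paper's calibrated model: probing a hash table that fits in cache is cheap, while probing one that spills to DRAM is not, and a right-deep pipeline keeps all build tables live simultaneously, so their combined footprint spills sooner.

```python
CACHE_BYTES = 1 << 20              # assumed 1 MiB cache (toy value)
HIT_COST, MISS_COST = 1, 10        # assumed cost units per probe

def probe_cost(live_footprint, probes):
    """Per-probe cost depends on whether the live hash tables fit in cache."""
    per_probe = HIT_COST if live_footprint <= CACHE_BYTES else MISS_COST
    return probes * per_probe

def right_deep_cost(build_sizes, probe_count):
    """All joins pipelined: every hash table is resident at once, so
    each probe pays for the combined footprint."""
    footprint = sum(build_sizes)
    return sum(probe_cost(footprint, probe_count) for _ in build_sizes)

def left_deep_cost(build_sizes, probe_count):
    """One join at a time: only the current hash table must be resident.
    (The toy model ignores the cost of materializing intermediates.)"""
    return sum(probe_cost(size, probe_count) for size in build_sizes)
```

With four joins whose 512 KiB build tables each fit in the toy cache individually but not together, the model reproduces an order-of-magnitude gap in favor of the left-deep plan, in the spirit of the abstract's roughly 10X observation.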