Code Generation for Efficient Query Processing in Managed Runtimes
In this paper we examine opportunities arising from the convergence of two trends in data management: in-memory database systems (IMDBs), which have received renewed attention following the availability of affordable, very large main memory systems; and language-integrated query, which transparently integrates database queries with programming languages (thus addressing the famous "impedance mismatch" problem). Language-integrated query not only gives application developers a more convenient way to query external data sources like IMDBs, but also lets them use the same query language over an application's in-memory collections. The latter offers further transparency to developers, as the query language and all data are represented in the data model of the host programming language. However, compared to IMDBs, this additional freedom comes at a higher cost for query evaluation. Our vision is to improve in-memory query processing of application objects by introducing database technologies to managed runtimes. We focus on querying and leverage query compilation to improve query processing on application objects. We explore different query compilation strategies and study how they improve the performance of query processing over application data. We take C# as the host programming language as it supports language-integrated query through the LINQ framework. Our techniques deliver significant performance improvements over the default LINQ implementation. Our work makes important first steps towards a future where data processing applications will commonly run on machines that can store their entire datasets in memory, and will be written in a single programming language employing language-integrated query and IMDB-inspired runtimes to provide transparent and highly efficient querying.
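The core idea, generating a specialized function for a query instead of interpreting it per element, can be sketched in a few lines. This is an illustrative Python analogue, not the authors' C#/LINQ implementation; all names here are hypothetical:

```python
# Sketch of query compilation over in-memory collections: a generic
# interpreter pays per-row dispatch costs, while a "compiled" query is
# generated as source once and then reused as an ordinary function.

def interpret(rows, pred):
    # Generic evaluation: one dynamic predicate call per row.
    return [r for r in rows if pred(r)]

def compile_query(field, op, value):
    # Generate a specialized function for "row[field] <op> value" once.
    src = f"def q(rows):\n    return [r for r in rows if r[{field!r}] {op} {value!r}]\n"
    ns = {}
    exec(src, ns)
    return ns["q"]

rows = [{"name": "a", "age": 31}, {"name": "b", "age": 19}]
q = compile_query("age", ">", 25)
assert q(rows) == interpret(rows, lambda r: r["age"] > 25)
```

In the paper's setting the generated code is far more aggressive (specialized to the data layout and free of per-element virtual calls), but the shape of the idea is the same.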
Complex Query Operators on Modern Parallel Architectures
Identifying interesting objects from a large data collection is a fundamental problem for multi-criteria decision making applications. In Relational Database Management Systems (RDBMS), the most popular complex query operators used to solve this type of problem are the Top-K selection operator and the Skyline operator. Top-K selection is tasked with retrieving the k highest-ranking tuples from a given relation, as determined by a user-defined aggregation function. Skyline selection retrieves those tuples with attributes offering (Pareto-)optimal trade-offs in a given relation. Efficient Top-K query processing entails minimizing tuple evaluations by utilizing elaborate processing schemes combined with sophisticated data structures that enable early termination. Skyline query evaluation involves supporting processing strategies geared towards early termination and incomparable tuple pruning. The rapid increase in memory capacity and decreasing costs have been the main drivers behind the development of main-memory database systems. Although migrating query processing in-memory has created many opportunities to improve query latency, attaining such improvements has been very challenging due to the growing gap between processor and main memory speeds. Addressing this limitation has been made easier by the rapid proliferation of multi-core and many-core architectures. However, their utilization in real systems has been hindered by the lack of suitable parallel algorithms that focus on algorithmic efficiency. In this thesis, we study in depth the Top-K and Skyline selection operators in the context of emerging parallel architectures. Our ultimate goal is to provide practical guidelines for developing work-efficient algorithms suitable for parallel main-memory processing. We concentrate on multi-core (CPU), many-core (GPU), and processing-in-memory (PIM) architectures, developing solutions optimized for high throughput and low latency. The first part of this thesis focuses on Top-K selection, presenting the specific details of early termination algorithms that we developed specifically for parallel architectures and various types of accelerators (i.e., GPU, PIM). The second part of this thesis concentrates on Skyline selection and the development of a massively parallel load-balanced algorithm for PIM architectures. Our work consolidates performance results across different parallel architectures using synthetic and real data on variable query parameters and distributions for both of the aforementioned problems. The experimental results demonstrate several orders of magnitude better throughput and query latency, thus validating the effectiveness of our proposed solutions for the Top-K and Skyline selection operators.
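For readers unfamiliar with the two operators, minimal single-threaded sketches convey their semantics; these are naive illustrations, not the parallel algorithms developed in the thesis:

```python
import heapq

def topk_sorted_stream(scores, k):
    # Top-K with early termination over a stream that arrives in
    # non-increasing order of a score upper bound (here the exact score):
    # once the k-th best score so far beats the next bound, we can stop.
    heap = []  # min-heap of the k best scores seen so far
    for s in scores:
        if len(heap) == k and heap[0] >= s:
            break  # early termination
        if len(heap) < k:
            heapq.heappush(heap, s)
        else:
            heapq.heapreplace(heap, s)
    return sorted(heap, reverse=True)

def skyline(points):
    # Naive Skyline (Pareto-optimal) selection; assumes higher is better.
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and \
               any(x > y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

assert topk_sorted_stream([9, 7, 5, 4, 2], 3) == [9, 7, 5]
assert skyline([(1, 9), (5, 5), (9, 1), (4, 4)]) == [(1, 9), (5, 5), (9, 1)]
```

The thesis's contribution lies in making such operators work-efficient on CPUs, GPUs, and PIM hardware; the sketches above only pin down what the operators compute.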
Efficient query processing in managed runtimes
This thesis presents strategies to improve the query evaluation performance over
huge volumes of relational-like data that is stored in the memory space of managed
applications. Storing and processing application data in the memory space of managed
applications is motivated by the convergence of two recent trends in data management.
First, dropping DRAM prices have led to memory capacities that allow the entire working
set of an application to fit into main memory and to the emergence of in-memory
database systems (IMDBs). Second, language-integrated query transparently integrates
query processing syntax into programming languages and, therefore, allows complex
queries to be composed in the application. IMDBs typically serve as data stores to applications
written in an object-oriented language running on a managed runtime. In
this thesis, we propose a deeper integration of the two by storing all application data in
the memory space of the application and using language-integrated query, combined
with query compilation techniques, to provide fast query processing.
As a starting point, we look into storing data as runtime-managed objects in collection
types provided by the programming language. Queries are formulated using
language-integrated query and dynamically compiled to specialized functions that produce
the result of the query in a more efficient way by leveraging query compilation
techniques similar to those used in modern database systems. We show that the generated
query functions significantly improve query processing performance compared to
the default execution model for language-integrated query. However, we also identify
additional inefficiencies that can only be addressed by processing queries using low-level
techniques which cannot be applied to runtime-managed objects. To address this,
we introduce a staging phase in the generated code that makes query-relevant managed
data accessible to low-level query code. Our experiments in .NET show an improvement
in query evaluation performance of up to an order of magnitude over the default
language-integrated query implementation.
Motivated by additional inefficiencies caused by automatic garbage collection, we
introduce a new collection type, the black-box collection. Black-box collections integrate
the in-memory storage layer of a relational database system to store data and hide
the internal storage layout from the application by employing existing object-relational
mapping techniques (hence, the name black-box). Our experiments show that black-box
collections provide better query performance than runtime-managed collections
by allowing the generated query code to directly access the underlying relational in-memory
data store using low-level techniques. Black-box collections also outperform
a modern commercial database system. By removing huge volumes of collection data
from the managed heap, black-box collections further improve the overall performance
and response time of the application and improve the application's scalability when
facing huge volumes of collection data.
To enable a deeper integration of the data store with the application, we introduce
self-managed collections. Self-managed collections are a new type of collection for
managed applications that, in contrast to black-box collections, store objects. As the
data elements stored in the collection are objects, they are directly accessible from the
application using references which allows for better integration of the data store with
the application. Self-managed collections manually manage the memory of objects
stored within them in a private heap that is excluded from garbage collection. We introduce
a special collection syntax and a novel type-safe manual memory management
system for this purpose. As was the case for black-box collections, self-managed collections
improve query performance by utilizing a database-inspired data layout and
allowing the use of low-level techniques. By also supporting references between collection
objects, they outperform black-box collections.
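The storage idea behind black-box and self-managed collections, keeping attribute data in a database-inspired contiguous layout rather than as individually managed heap objects, can be approximated in a short sketch. This toy version uses Python's array module as a stand-in for the thesis's .NET machinery; the class and method names are hypothetical:

```python
from array import array

# Columnar (struct-of-arrays) storage: each attribute lives in a
# contiguous primitive array, so a scan touches dense memory with no
# per-row object access and no per-row garbage-collected allocation.

class OrderCollection:
    def __init__(self):
        self.qty = array("i")    # column: quantity (32-bit ints)
        self.price = array("d")  # column: price (doubles)

    def append(self, qty, price):
        self.qty.append(qty)
        self.price.append(price)

    def total_revenue(self, min_qty):
        # Tight scan over primitive columns.
        return sum(p * q for q, p in zip(self.qty, self.price)
                   if q >= min_qty)

orders = OrderCollection()
orders.append(3, 10.0)
orders.append(1, 99.0)
assert orders.total_revenue(2) == 30.0
```

Self-managed collections go further by still exposing object references into such a privately managed heap, which this sketch does not attempt to model.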
KISS-Tree: Smart Latch-Free In-Memory Indexing on Modern Architectures
Growing main memory capacities and an increasing number of hardware threads in modern server systems have led to fundamental changes in database architectures. Most importantly, query processing is nowadays performed on data that is often completely stored in main memory. Despite high main-memory scan performance, index structures are still important components, but they have to be designed from scratch to cope with the specific characteristics of main memory and to exploit the high degree of parallelism. Current research has mainly focused on adapting block-optimized B+-Trees, but these data structures were designed for secondary memory and involve comprehensive structural maintenance for updates.
In this paper, we present the KISS-Tree, a latch-free in-memory index that is optimized for a minimum number of memory accesses and a high number of concurrent updates. More specifically, we aim for the same performance as modern hash-based algorithms while keeping the order-preserving nature of trees. We achieve this by using a prefix tree that incorporates virtual memory management functionality and compression schemes. In our experiments, we evaluate the KISS-Tree on different workloads and hardware platforms and compare the results to existing in-memory indexes. The KISS-Tree offers the highest reported read performance on current architectures, a balanced read/write performance, and has a low memory footprint.
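A byte-wise prefix tree, the structure at the heart of the KISS-Tree, can be sketched as follows. This toy version shows only the order-preserving radix lookup over 32-bit keys; it omits the latch-free updates, virtual-memory tricks, and compression schemes that the paper actually contributes:

```python
# Each tree level consumes one byte of the key and indexes into a
# 256-entry node, so lookups cost a fixed, small number of memory
# accesses and keys remain in sorted order across the leaves.

FANOUT = 256

def insert(root, key, value, depth=3):
    node = root
    for shift in range(depth * 8, 0, -8):
        idx = (key >> shift) & 0xFF
        if node[idx] is None:
            node[idx] = [None] * FANOUT  # allocate child lazily
        node = node[idx]
    node[key & 0xFF] = value  # leaf level

def lookup(root, key, depth=3):
    node = root
    for shift in range(depth * 8, 0, -8):
        node = node[(key >> shift) & 0xFF]
        if node is None:
            return None
    return node[key & 0xFF]

root = [None] * FANOUT
insert(root, 0xDEADBEEF, "v1")
assert lookup(root, 0xDEADBEEF) == "v1"
assert lookup(root, 0xDEADBEE0) is None
```

The real structure replaces these Python lists with compact, compressed nodes and uses atomic operations instead of locks for concurrent updates.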
Resiliency Mechanisms for In-Memory Column Stores
The key objective of database systems is to reliably manage data, while high query throughput and low query latency are core requirements. To date, database research activities have mostly concentrated on the second part. However, due to the constant shrinking of transistor feature sizes, integrated circuits become more and more unreliable, and transient hardware errors in the form of multi-bit flips become more and more prominent. A recent study (2013) of a large high-performance cluster with around 8,500 nodes measured a failure rate of 40 FIT per DRAM device. For that system, this means that a single- or multi-bit flip occurs every 10 hours, which is unacceptably high for enterprise and HPC scenarios. Causes can be cosmic rays, heat, or electrical crosstalk, with the latter being exploited actively through the RowHammer attack. It has been shown that memory cells are more prone to bit flips than logic gates, and several surveys have found multi-bit flip events in main memory modules of today's data centers. Due to the shift towards in-memory data management systems, where all business-related data and query intermediate results are kept solely in fast main memory, such systems are in great danger of delivering corrupt results to their users. Hardware techniques cannot be scaled to compensate for the exponentially increasing error rates. In other domains, there is an increasing interest in software-based solutions to this problem, but the proposed methods come with huge runtime and/or storage overheads, which are unacceptable for in-memory data management systems.
In this thesis, we investigate how to integrate bit flip detection mechanisms into in-memory data management systems. To achieve this goal, we first build an understanding of bit flip detection techniques and select two error codes, AN codes and XOR checksums, suitable to the requirements of in-memory data management systems. The most important requirement is the effectiveness of the codes at detecting bit flips. We meet this goal through AN codes, which exhibit better and adaptable error detection capabilities compared to those found in today's hardware. The second most important goal is efficiency in terms of coding latency. We meet this by introducing fundamental performance improvements to AN codes, and by vectorizing both chosen codes' operations. We integrate bit flip detection mechanisms into the lowest storage layer and the query processing layer in such a way that the rest of the data management system and the user can stay oblivious of any error detection. This includes both base columns and pointer-heavy index structures such as the ubiquitous B-Tree. Additionally, our approach allows adaptable, on-the-fly bit flip detection during query processing, with only very little impact on query latency. AN coding allows recoding intermediate results with virtually no performance penalty. We support our claims by providing exhaustive runtime and throughput measurements throughout the whole thesis and with an end-to-end evaluation using the Star Schema Benchmark. To the best of our knowledge, we are the first to present such holistic and fast bit flip detection in a large software infrastructure such as in-memory data management systems. Finally, most of the source code fragments used to obtain the results in this thesis are open source and freely available.
1 INTRODUCTION
1.1 Contributions of this Thesis
1.2 Outline
2 PROBLEM DESCRIPTION AND RELATED WORK
2.1 Reliable Data Management on Reliable Hardware
2.2 The Shift Towards Unreliable Hardware
2.3 Hardware-Based Mitigation of Bit Flips
2.4 Data Management System Requirements
2.5 Software-Based Techniques For Handling Bit Flips
2.5.1 Operating System-Level Techniques
2.5.2 Compiler-Level Techniques
2.5.3 Application-Level Techniques
2.6 Summary and Conclusions
3 ANALYSIS OF CODING TECHNIQUES
3.1 Selection of Error Codes
3.1.1 Hamming Coding
3.1.2 XOR Checksums
3.1.3 AN Coding
3.1.4 Summary and Conclusions
3.2 Probabilities of Silent Data Corruption
3.2.1 Probabilities of Hamming Codes
3.2.2 Probabilities of XOR Checksums
3.2.3 Probabilities of AN Codes
3.2.4 Concrete Error Models
3.2.5 Summary and Conclusions
3.3 Throughput Considerations
3.3.1 Test Systems Descriptions
3.3.2 Vectorizing Hamming Coding
3.3.3 Vectorizing XOR Checksums
3.3.4 Vectorizing AN Coding
3.3.5 Summary and Conclusions
3.4 Comparison of Error Codes
3.4.1 Effectiveness
3.4.2 Efficiency
3.4.3 Runtime Adaptability
3.5 Performance Optimizations for AN Coding
3.5.1 The Modular Multiplicative Inverse
3.5.2 Faster Softening
3.5.3 Faster Error Detection
3.5.4 Comparison to Original AN Coding
3.5.5 The Multiplicative Inverse Anomaly
3.6 Summary
4 BIT FLIP DETECTING STORAGE
4.1 Column Store Architecture
4.1.1 Logical Data Types
4.1.2 Storage Model
4.1.3 Data Representation
4.1.4 Data Layout
4.1.5 Tree Index Structures
4.1.6 Summary
4.2 Hardened Data Storage
4.2.1 Hardened Physical Data Types
4.2.2 Hardened Lightweight Compression
4.2.3 Hardened Data Layout
4.2.4 UDI Operations
4.2.5 Summary and Conclusions
4.3 Hardened Tree Index Structures
4.3.1 B-Tree Verification Techniques
4.3.2 Justification For Further Techniques
4.3.3 The Error Detecting B-Tree
4.4 Summary
5 BIT FLIP DETECTING QUERY PROCESSING
5.1 Column Store Query Processing
5.2 Bit Flip Detection Opportunities
5.2.1 Early Onetime Detection
5.2.2 Late Onetime Detection
5.2.3 Continuous Detection
5.2.4 Miscellaneous Processing Aspects
5.2.5 Summary and Conclusions
5.3 Hardened Intermediate Results
5.3.1 Materialization of Hardened Intermediates
5.3.2 Hardened Bitmaps
5.4 Summary
6 END-TO-END EVALUATION
6.1 Prototype Implementation
6.1.1 AHEAD Architecture
6.1.2 Diversity of Physical Operators
6.1.3 One Concrete Operator Realization
6.1.4 Summary and Conclusions
6.2 Performance of Individual Operators
6.2.1 Selection on One Predicate
6.2.2 Selection on Two Predicates
6.2.3 Join Operators
6.2.4 Grouping and Aggregation
6.2.5 Delta Operator
6.2.6 Summary and Conclusions
6.3 Star Schema Benchmark Queries
6.3.1 Query Runtimes
6.3.2 Improvements Through Vectorization
6.3.3 Storage Overhead
6.3.4 Summary and Conclusions
6.4 Error Detecting B-Tree
6.4.1 Single Key Lookup
6.4.2 Key Value-Pair Insertion
6.5 Summary
7 SUMMARY AND CONCLUSIONS
7.1 Future Work
A APPENDIX
A.1 List of Golden As
A.2 More on Hamming Coding
A.2.1 Code examples
A.2.2 Vectorization
BIBLIOGRAPHY
LIST OF FIGURES
LIST OF TABLES
LIST OF LISTINGS
LIST OF ACRONYMS
LIST OF SYMBOLS
LIST OF DEFINITIONS
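The AN coding scheme chosen in the thesis stores a data word n as the product A·n for a fixed constant A, so a corruption is detected whenever the stored code word is no longer divisible by A. A minimal sketch (A = 641 is an arbitrary illustrative odd constant, not one of the thesis's golden As):

```python
# AN coding in miniature: encode multiplies by A, decode checks
# divisibility. Any single-bit flip changes the code word by ±2^k,
# which an odd A never divides, so single flips are always detected.

A = 641

def encode(n):
    return A * n

def decode(c):
    if c % A != 0:
        raise ValueError("bit flip detected")
    return c // A

code = encode(1234)
assert decode(code) == 1234

flipped = code ^ (1 << 7)  # simulate a single-bit flip in memory
try:
    decode(flipped)
    detected = False
except ValueError:
    detected = True
assert detected
```

The thesis's work is in choosing As with strong multi-bit detection guarantees and making encode/decode/recode fast enough (via vectorization and the modular multiplicative inverse) to run inside query operators.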
A storage and access architecture for efficient query processing in spatial database systems
Due to the high complexity of objects and queries and also due to extremely
large data volumes, geographic database systems impose stringent requirements on their
storage and access architecture with respect to efficient query processing. Performance
improving concepts such as spatial storage and access structures, approximations, object
decompositions and multi-phase query processing have been suggested and analyzed as
single building blocks. In this paper, we describe a storage and access architecture which
is composed of the above building blocks in a modular fashion. Additionally, we incorporate
into our architecture a new ingredient, the scene organization, for efficiently
supporting set-oriented access of large-area region queries. An experimental performance
comparison demonstrates that the concept of scene organization leads to considerable
performance improvements for large-area region queries by a factor of up to 150.
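The multi-phase query processing mentioned above, filtering on cheap approximations before refining with exact geometry, can be sketched as a filter-and-refine pipeline. All names here are hypothetical, and a point-in-region query stands in for the paper's large-area region queries:

```python
# Phase 1 filters candidates with a cheap bounding-box test; phase 2
# refines the survivors with an exact (expensive) geometry test.

def bbox(poly):
    xs = [x for x, _ in poly]; ys = [y for _, y in poly]
    return min(xs), min(ys), max(xs), max(ys)

def bbox_intersects(a, b):
    ax1, ay1, ax2, ay2 = a; bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def point_in_poly(pt, poly):
    # Exact refinement step: ray casting.
    x, y = pt; inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def region_query(point, polygons):
    pt_box = (point[0], point[1], point[0], point[1])
    candidates = [p for p in polygons if bbox_intersects(bbox(p), pt_box)]
    return [p for p in candidates if point_in_poly(point, p)]

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
far = [(10, 10), (12, 10), (12, 12), (10, 12)]
assert region_query((2, 2), [square, far]) == [square]
```

The paper's architecture layers spatial access structures, object decomposition, and the scene organization on top of exactly this kind of approximation-first pipeline.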
Yellow Tree: A Distributed Main-memory Spatial Index Structure for Moving Objects
Mobile devices equipped with wireless technologies to communicate and positioning systems to locate objects of interest are commonplace today, providing the impetus to develop location-aware applications. At the heart of location-aware applications are moving objects, or objects that continuously change location over time, such as cars in transportation networks, pedestrians, or postal packages. Location-aware applications tend to support the tracking of very large numbers of such moving objects as well as many users that are interested in finding out about the locations of other moving objects. Such location-aware applications rely on support from database management systems to model, store, and query moving object data. The management of moving object data exposes the limitations of traditional (spatial) database management systems as well as their index structures designed to keep track of objects' locations. Spatial index structures that have been designed for geographic objects in the past primarily assume data of a static nature (e.g., land parcels, road networks, or airport locations), thus requiring a limited amount of index structure updates and reorganization over a period of time. While handling moving objects, however, there is an incumbent need for continuous reorganization of spatial index structures to remain up to date with constantly and rapidly changing object locations. This research addresses some of the key issues surrounding the efficient database management of moving objects whose location update rate to the database system varies from 1 to 30 minutes. Furthermore, we address the design of a highly scalable and efficient spatial index structure to support location tracking and querying of large amounts of moving objects. We explore the possible architectural and data-structure-level changes that are required to handle large numbers of moving objects.
We focus specifically on the index structures that are needed to process spatial range queries and object-based queries on constantly changing moving object data. We argue for the case of main-memory spatial index structures that dynamically adapt to continuously changing moving object data and concurrently answer spatial range queries efficiently. A proof-of-concept implementation called the yellow tree, which is a distributed main-memory index structure, and a simulated environment to generate moving objects are demonstrated. Using experiments conducted on simulated moving object data, we conclude that a distributed main-memory-based spatial index structure is required to handle dynamic location updates and efficiently answer spatial range queries on moving objects. Future work on enhancing the query processing performance of the yellow tree is also discussed.
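As a rough illustration of why a main-memory structure with cheap reorganization helps with frequent location updates, here is a toy grid index; the yellow tree itself is a distributed structure whose design goes well beyond this sketch, and all names below are hypothetical:

```python
from collections import defaultdict

# Objects hash to fixed-size grid cells. A location update moves an
# object between cells in O(1), and a range query inspects only the
# cells overlapping the query rectangle.

CELL = 10.0

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

class GridIndex:
    def __init__(self):
        self.cells = defaultdict(dict)  # cell -> {obj_id: (x, y)}
        self.where = {}                 # obj_id -> current cell

    def update(self, oid, x, y):
        old, new = self.where.get(oid), cell_of(x, y)
        if old is not None and old != new:
            del self.cells[old][oid]
        self.cells[new][oid] = (x, y)
        self.where[oid] = new

    def range_query(self, x1, y1, x2, y2):
        cx1, cy1 = cell_of(x1, y1); cx2, cy2 = cell_of(x2, y2)
        out = []
        for cx in range(cx1, cx2 + 1):
            for cy in range(cy1, cy2 + 1):
                for oid, (x, y) in self.cells[(cx, cy)].items():
                    if x1 <= x <= x2 and y1 <= y <= y2:
                        out.append(oid)
        return out

g = GridIndex()
g.update("car1", 5.0, 5.0)
g.update("car1", 35.0, 5.0)  # moving object: cheap location update
assert g.range_query(30.0, 0.0, 40.0, 10.0) == ["car1"]
```

Contrast this with a disk-oriented spatial tree, where each such update may trigger node splits and page writes, the bottleneck that motivates the thesis's main-memory, distributed design.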