10 research outputs found

    A Memory Bandwidth-Efficient Hybrid Radix Sort on GPUs

    Sorting is at the core of many database operations, such as index creation, sort-merge joins, and user-requested output sorting. As GPUs emerge as a promising platform for accelerating various operations, sorting on GPUs becomes a viable endeavour. Over the past few years, several improvements have been proposed for sorting on GPUs, leading to the first radix sort implementations that achieve a sorting rate of over one billion 32-bit keys per second. Yet, state-of-the-art approaches are heavily memory bandwidth-bound, as they require substantially more memory transfers than their CPU-based counterparts. Our work proposes a novel approach that almost halves the number of memory transfers and therefore considerably lifts the memory bandwidth limitation. Able to sort two gigabytes of eight-byte records in as little as 50 milliseconds, our approach achieves a 2.32-fold improvement over the state-of-the-art GPU-based radix sort for uniform distributions and sustains a speed-up of at least 1.66x for skewed distributions. To address inputs that either do not reside on the GPU or exceed the available device memory, we build on our efficient GPU sorting approach with a pipelined heterogeneous sorting algorithm that mitigates the overhead associated with PCIe data transfers. Comparing end-to-end sorting performance to the state-of-the-art CPU-based radix sort running 16 threads, our heterogeneous approach achieves a 2.06-fold and a 1.53-fold improvement for sorting 64 GB of key-value pairs with a skewed and a uniform distribution, respectively. Comment: 16 pages, accepted at SIGMOD 201
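
    A rough back-of-the-envelope model (not taken from the paper) helps explain the bandwidth bound: a classic least-significant-digit radix sort re-reads and re-writes the entire input once per digit pass, so the memory traffic quickly dwarfs the input size. The Python sketch below uses assumed digit width, record layout, and device bandwidth purely for illustration.

        # Back-of-the-envelope memory-traffic model for an LSD radix sort on a GPU.
        # Illustrative only: digit width, record layout, and bandwidth are assumptions,
        # not figures from the paper.

        def radix_sort_traffic_gb(n_records, record_bytes, key_bits, digit_bits):
            """Bytes moved by a classic LSD radix sort that re-reads and re-writes
            the whole input once per digit pass (histogram reads ignored)."""
            passes = (key_bits + digit_bits - 1) // digit_bits
            bytes_per_pass = 2 * n_records * record_bytes   # one read + one write
            return passes * bytes_per_pass / 1e9

        # 2 GB of 8-byte records (assume 4-byte keys), sorted with 8-bit digits -> 4 passes.
        n = (2 * 1024**3) // 8
        traffic = radix_sort_traffic_gb(n, record_bytes=8, key_bits=32, digit_bits=8)
        print(f"~{traffic:.0f} GB of traffic")              # ~17 GB for a 2 GB input

        # At an assumed ~320 GB/s of effective device bandwidth, that traffic alone
        # costs ~54 ms, which is why halving the transfers matters at this scale.
        print(f"~{traffic / 320 * 1e3:.0f} ms at 320 GB/s")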

    Massively Parallel Algorithms for Loading Data on Modern Hardware (Massiv-Parallele Algorithmen zum Laden von Daten auf Moderner Hardware)

    While systems face an ever-growing amount of data that needs to be ingested, queried, and analysed, processors are seeing only moderate improvements in sequential processing performance. This thesis addresses the fundamental shift towards increasingly parallel processors and contributes multiple massively parallel algorithms to accelerate different stages of the ingestion pipeline, such as data parsing and sorting.

    Hardware-conscious Query Processing in GPU-accelerated Analytical Engines

    In order to improve their power efficiency and computational capacity, modern servers are adopting hardware accelerators, especially GPUs. Modern analytical DBMS engines have been highly optimized for multi-core multi-CPU query execution, but lack the necessary abstractions to support concurrent hardware-conscious query execution over multiple heterogeneous devices and are thus unable to take full advantage of the available accelerators. In this work, we present a Heterogeneity-conscious Analytical query Processing Engine (HAPE), a hardware-conscious analytical engine that targets efficient concurrent multi-CPU multi-GPU query execution. HAPE decomposes heterogeneous query execution into i) efficient single-device and ii) concurrent multi-device query execution. It uses hardware-conscious algorithms designed for single-device execution and combines them, via code generation, into efficient intra-device hardware-conscious execution modules. HAPE combines these modules to achieve concurrent multi-device execution by handling data and control transfers. We validate our design by building a prototype and evaluating its performance on a co-processed radix join and on TPC-H queries. We show that it achieves up to 10x and 3.5x speed-ups on the join against CPU and GPU alternatives, respectively, and 1.6x-8x speed-ups against state-of-the-art CPU- and GPU-based commercial DBMSs on the queries.
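
    To make the two-level decomposition concrete, the Python sketch below shows one way single-device operators could be fused into a per-device execution module (a stand-in for code generation) and combined by a thin coordinator. All names and the trivial operators are illustrative assumptions, not HAPE's actual design or API.

        # Minimal sketch of the decomposition described above: single-device operators
        # are fused into one per-device execution module (a stand-in for code
        # generation), and a coordinator runs one module per device.

        def make_module(operators):
            """Fuse a list of per-tuple operators into one device execution module."""
            def module(rows):
                out = []
                for row in rows:
                    for op in operators:
                        row = op(row)
                        if row is None:          # tuple filtered out
                            break
                    else:
                        out.append(row)
                return out
            return module

        # Hardware-conscious single-device operators (trivial stand-ins).
        select_op = lambda r: r if r["price"] > 10 else None
        project_op = lambda r: {"price": r["price"] * 0.9}

        cpu_module = make_module([select_op, project_op])   # imagine: compiled for the CPU
        gpu_module = make_module([select_op, project_op])   # imagine: compiled for the GPU

        rows = [{"price": p} for p in range(20)]
        half = len(rows) // 2
        result = cpu_module(rows[:half]) + gpu_module(rows[half:])   # coordinator step
        print(result)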

    A Study of the Fundamental Performance Characteristics of GPUs and CPUs for Database Analytics (Extended Version)

    There has been a significant amount of excitement and recent work on GPU-based database systems. Previous work has claimed that these systems can perform orders of magnitude better than CPU-based database systems on analytical workloads such as those found in decision support and business intelligence applications. A hardware expert would view these claims with suspicion. Given the general notion that database operators are memory-bandwidth bound, one would expect the maximum gain to be roughly equal to the ratio of the memory bandwidth of the GPU to that of the CPU. In this paper, we adopt a model-based approach to understand when and why the performance gains of running queries on GPUs vs. on CPUs vary from the bandwidth ratio (which is roughly 16x on modern hardware). We propose Crystal, a library of parallel routines that can be combined to run full SQL queries on a GPU with minimal materialization overhead. We implement individual query operators to show that while the speedups for selection, projection, and sort are near the bandwidth ratio, joins achieve less speedup due to differences in hardware capabilities. Interestingly, we show on a popular analytical workload that the full-query performance gain from running on the GPU exceeds the bandwidth ratio even though individual operators have speedups below it, as a result of the limitations of vectorizing chained operators on CPUs; this results in a 25x speedup for GPUs over CPUs on the benchmark.
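
    The bandwidth-ratio argument can be made concrete with a small model: a purely memory-bound operator's runtime is roughly the bytes streamed divided by memory bandwidth, so its GPU-over-CPU speedup tracks the bandwidth ratio. The Python sketch below uses assumed round bandwidth figures, not the paper's measurements.

        # Simple bandwidth-ratio model for a memory-bound operator (e.g., a selection
        # scan). Bandwidth figures are assumed round numbers, not the paper's.

        def scan_time_ms(n_rows, bytes_per_row, bandwidth_gb_s):
            """Time to stream the scanned column(s) once at the given bandwidth."""
            return n_rows * bytes_per_row / (bandwidth_gb_s * 1e9) * 1e3

        rows = 1_000_000_000                 # 1B rows of a 4-byte column
        cpu_ms = scan_time_ms(rows, 4, 56)   # assumed ~56 GB/s CPU memory bandwidth
        gpu_ms = scan_time_ms(rows, 4, 900)  # assumed ~900 GB/s HBM-class GPU bandwidth

        print(f"CPU: {cpu_ms:.1f} ms, GPU: {gpu_ms:.1f} ms, ratio: {cpu_ms / gpu_ms:.1f}x")
        # For a purely bandwidth-bound operator the speedup tracks the ~16x ratio;
        # the paper's point is that full queries can deviate from this bound.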

    HetExchange: Encapsulating heterogeneous CPU-GPU parallelism in JIT compiled engines

    Modern server hardware is increasingly heterogeneous as hardware accelerators, such as GPUs, are used together with multicore CPUs to meet the computational demands of modern data analytics workloads. Unfortunately, query parallelization techniques used by analytical database engines are designed for homogeneous multicore servers, where query plans are parallelized across CPUs to process data stored in cache-coherent shared memory. Thus, these techniques are unable to fully exploit available heterogeneous hardware, where one needs to exploit the task-parallelism of CPUs and the data-parallelism of GPUs to process data stored in a deep, non-cache-coherent memory hierarchy with widely varying access latencies and bandwidths. In this paper, we introduce HetExchange, a parallel query execution framework that encapsulates the heterogeneous parallelism of modern multi-CPU, multi-GPU servers and enables the parallelization of (pre-)existing sequential relational operators. In contrast to the interpreted nature of traditional Exchange, HetExchange is designed to be used in conjunction with JIT-compiled engines in order to allow a tight integration with the proposed operators and the generation of efficient code for heterogeneous hardware. We validate the applicability and efficiency of our design by building a prototype that can operate over both CPUs and GPUs and enables its operators to be parallelism- and data-location-agnostic. In doing so, we show that efficiently exploiting CPU-GPU parallelism can provide 2.8x and 6.4x performance improvements compared to state-of-the-art CPU-based and GPU-based DBMSs, respectively.
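
    As a toy illustration of the Exchange idea (not HetExchange's implementation), the Python sketch below routes data batches onto per-worker queues so that producers and consumers stay parallelism- and data-location-agnostic; the worker names and the per-batch work are placeholders.

        # Toy exchange-style router: batches are distributed round-robin onto per-worker
        # queues, decoupling the producer from the degree and kind of parallelism.
        import queue
        import threading

        def worker(name, in_q, results):
            total = 0
            while True:
                batch = in_q.get()
                if batch is None:            # end-of-stream marker
                    break
                total += sum(batch)          # stand-in for a compiled operator pipeline
            results[name] = total

        queues = {"cpu0": queue.Queue(), "gpu0": queue.Queue()}
        results = {}
        threads = [threading.Thread(target=worker, args=(n, q, results))
                   for n, q in queues.items()]
        for t in threads:
            t.start()

        # Router: distribute batches round-robin across heterogeneous consumers.
        data = list(range(100_000))
        batches = [data[i:i + 1024] for i in range(0, len(data), 1024)]
        names = list(queues)
        for i, b in enumerate(batches):
            queues[names[i % len(names)]].put(b)
        for q in queues.values():
            q.put(None)                      # signal end of stream
        for t in threads:
            t.join()
        print(results, sum(results.values()))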

    Novel bioinformatics tools for epitope-based peptide vaccine design

    BACKGROUND: T-cells are essential in the mediation of immune responses, helping clear bacteria, viruses and cancerous cells. T-cells recognise anomalies in the cellular proteome associated with infection and neoplasms through the T-cell receptor (TCR). The most common TCRs in humans, αβ TCRs, engage processed peptide epitopes presented on the major histocompatibility complex (pMHC). The TCR-pMHC interaction is critical to vaccination. In this thesis I discuss three pieces of software, and outcomes derived from them, that contribute to epitope-based vaccine design. RESULTS: Three pieces of software were developed to help scientists study and understand T-cell responses. The first, STACEI, allows users to interrogate TCR-pMHC crystal structures. The time-consuming, error-prone analysis that previously had to be run manually is replaced by a single, flexible package. The second development is the application of general-purpose computing on GPUs (GP-GPU) to the prediction of T-cell epitopes by scanning protein datasets using data derived from combinatorial peptide libraries (CPLs). Finally, I introduce RECIPIENT, a reverse vaccinology (RV) tool that combines pangenomic and population-genetics methods to predict good vaccine targets across multiple pathogen samples. CONCLUSION: In this thesis, I introduce three different methods that aid the study of T-cells and will hopefully improve future vaccine design. These methods span data types and methodologies, focusing on a mechanistic understanding of the TCR-pMHC binding event, the application of GP-GPU computing to CPLs, and the use of microbial genomics to aid the study and understanding of antigen-specific T-cell responses. These three methods have significant potential for further integration, especially the structural methods.
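
    As a very rough, hypothetical illustration of the CPL-based scanning mentioned above (not the method actually used in the thesis), the Python sketch below slides a fixed-length window over a protein sequence and scores each peptide with a made-up position-specific matrix; the GP-GPU contribution would parallelise this kind of scan over large protein datasets.

        # Hypothetical CPL-style epitope scan: score every length-9 window of a protein
        # with a per-position amino-acid preference matrix. Matrix values are random
        # placeholders, purely for illustration.
        import random

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
        PEPTIDE_LEN = 9

        random.seed(0)
        cpl_matrix = [{aa: random.random() for aa in AMINO_ACIDS}
                      for _ in range(PEPTIDE_LEN)]

        def score(peptide):
            """Sum per-position preferences; higher means a better-scoring peptide."""
            return sum(cpl_matrix[i][aa] for i, aa in enumerate(peptide))

        def scan(protein):
            """Score every window; a GP-GPU version would parallelise this loop."""
            return [(i, protein[i:i + PEPTIDE_LEN], score(protein[i:i + PEPTIDE_LEN]))
                    for i in range(len(protein) - PEPTIDE_LEN + 1)]

        protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
        print(max(scan(protein), key=lambda t: t[2]))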