220 research outputs found

    Oze: Decentralized Graph-based Concurrency Control for Real-world Long Transactions on BoM Benchmark

    In this paper, we propose Oze, a new concurrency control protocol that handles heterogeneous workloads including long-running update transactions. Oze explores a large scheduling space using a fully precise multi-version serialization graph to reduce false positives. Oze manages the graph in a decentralized manner to exploit the many cores in modern servers. We also propose a new OLTP benchmark, BoMB (Bill of Materials Benchmark), based on a use case in an actual manufacturing company. BoMB consists of one long-running update transaction and five short transactions that conflict with each other. Experiments using BoMB show that Oze keeps the abort rate of the long-running update transaction at zero while reaching up to 1.7 Mtpm for short transactions with near-linear scalability, whereas state-of-the-art protocols either cannot commit the long transaction or suffer degraded short-transaction throughput.
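
    The core mechanism here is scheduling against a serialization graph. The snippet below is a minimal, single-threaded Python sketch of the general serialization-graph test (a transaction may commit only if its conflict edges close no cycle); it omits multi-versioning and Oze's decentralized graph maintenance, and all names are illustrative rather than Oze's actual code.

```python
from collections import defaultdict

# Minimal sketch of serialization-graph testing (SGT): commit a transaction
# only if the conflict edges involving it do not close a cycle. Illustrative
# only; Oze's multi-version, decentralized graph management is not shown.

class SerializationGraph:
    def __init__(self):
        self.successors = defaultdict(set)   # txn -> txns it must precede

    def add_conflict(self, before, after):
        """Record a dependency: `before` must be serialized before `after`."""
        self.successors[before].add(after)

    def _reaches(self, src, dst, seen=None):
        if seen is None:
            seen = set()
        if src == dst:
            return True
        for nxt in self.successors.get(src, ()):
            if nxt not in seen:
                seen.add(nxt)
                if self._reaches(nxt, dst, seen):
                    return True
        return False

    def can_commit(self, txn):
        """A cycle through `txn` would make the schedule non-serializable."""
        return not any(self._reaches(succ, txn)
                       for succ in self.successors.get(txn, ()))

g = SerializationGraph()
g.add_conflict("T_long", "T1")   # e.g. T1 overwrote a version T_long read
g.add_conflict("T1", "T_long")   # and T_long overwrote a version T1 read
print(g.can_commit("T_long"))    # False: committing would close a cycle
```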

    Shirakami: A Hybrid Concurrency Control Protocol for Tsurugi Relational Database System

    Modern real-world transactional workloads, such as bills of materials or telecommunication billing, need to process both short transactions and long transactions. Recent concurrency control protocols do not cope with such workloads because they assume only classical workloads (e.g., YCSB and TPC-C) with relatively short transactions. To this end, we propose a new concurrency control protocol, Shirakami. Shirakami has two sub-protocols: Shirakami-LTX, for long transactions, is based on multi-version concurrency control, and Shirakami-OCC, for short transactions, is based on Silo. Shirakami integrates them naturally through a write preservation method and epoch-based synchronization. Shirakami is a module in the Tsurugi system, a production-purpose relational database system.
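
    The glue between the two sub-protocols is epoch-based synchronization in the style of Silo. The sketch below illustrates that general mechanism only: a background thread advances a global epoch, and a transaction is acknowledged once the epoch it committed in has passed. The 40 ms epoch length and all names are assumptions, not Shirakami's implementation.

```python
import threading
import time

# Minimal sketch of Silo-style epoch-based group commit, the synchronization
# mechanism the abstract refers to. Epoch length and names are assumptions.

EPOCH_MS = 40

class EpochManager:
    def __init__(self):
        self.global_epoch = 0
        threading.Thread(target=self._tick, daemon=True).start()

    def _tick(self):
        while True:
            time.sleep(EPOCH_MS / 1000.0)
            self.global_epoch += 1   # all workers observe the same epoch counter

    def wait_durable(self, commit_epoch):
        # A transaction that committed in epoch e is acknowledged only after
        # the global epoch has advanced past e, so all transactions of epoch e
        # become durable together (group commit).
        while self.global_epoch <= commit_epoch:
            time.sleep(0.001)

# Usage: a worker records global_epoch at commit time, flushes its log, then
# calls wait_durable(commit_epoch) before replying to the client.
```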

    Wire-Speed Implementation of Sliding-Window Aggregate Operator over Out-of-Order Data Streams

    This paper presents the design and evaluation of an FPGA-based accelerator for sliding-window aggregation over data streams with out-of-order data arrival. We propose an order-agnostic hardware implementation technique for windowing operators based on a one-pass query evaluation strategy called Window-ID, which was originally proposed for software implementation. The proposed implementation processes out-of-order data items, or tuples, at wire speed thanks to the simultaneous evaluation of overlapping sliding windows. To verify the effectiveness of the proposed approach, we have also implemented an experimental system as a case study. Our experiments demonstrate that the proposed accelerator with a network interface achieves an effective throughput of around 760 Mbps, or nearly 6 million tuples per second, by fully utilizing the available bandwidth of the network interface.
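
    Window-ID's key property is that a tuple's timestamp alone determines which overlapping windows it belongs to, which is what makes the operator order-agnostic. The Python sketch below illustrates that mapping in software under assumed RANGE and SLIDE parameters; it is not the FPGA design itself.

```python
# Software sketch of the Window-ID mapping the accelerator realizes in
# hardware: a tuple's timestamp determines the overlapping windows it falls
# into, so per-window partial aggregates can be updated in any arrival order.
# RANGE, SLIDE, and all names are assumed values for illustration.

RANGE = 10   # window length (time units)
SLIDE = 2    # gap between consecutive window start times

partial_sums = {}   # window id -> running aggregate (here: a sum)

def window_ids(ts):
    """Window ids whose interval [wid*SLIDE, wid*SLIDE + RANGE) contains ts."""
    last = ts // SLIDE
    first = max(0, (ts - RANGE) // SLIDE + 1)
    return range(first, last + 1)

def process(ts, value):
    # Order-agnostic: a late tuple updates exactly the same windows it would
    # have updated had it arrived on time.
    for wid in window_ids(ts):
        partial_sums[wid] = partial_sums.get(wid, 0) + value

process(12, 5.0)
process(7, 3.0)    # out-of-order arrival, handled identically
```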

    Models and Issues on Probabilistic Data Streams with Bayesian Networks

    This paper proposes the integration of probabilistic data streams and relational databases using Bayesian networks, one of the most widely used techniques for expressing uncertain contexts. A Bayesian network is expressed as a graphical model, while relational data are expressed as relations. To integrate them, we adopt the relational model as the unified model for its simplicity. A Bayesian network is modeled as an abstract data type in an object-relational database, and we define signatures to extract a probabilistic relation from a Bayesian network. We provide a scheme to integrate a probabilistic relation with normal relations. To allow continual queries over streams for a Bayesian network, we introduce a new concept, the lifespan.
    (2008 International Symposium on Applications and the Internet, Turku, Finland, July 28-August 1, 2008)
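
    To make the extraction step concrete, the toy sketch below enumerates the joint distribution of a two-variable Bayesian network and emits it as a relation with a probability attribute. The network, variable names, and probabilities are invented for illustration and are not taken from the paper.

```python
from itertools import product

# Toy sketch of extracting a probabilistic relation from a Bayesian network:
# enumerate joint assignments and emit one tuple per assignment together with
# its probability, so the result can be joined with ordinary relations.
# The two-variable network (Rain -> Wet) is an invented example.

p_rain = {True: 0.2, False: 0.8}                       # P(Rain)
p_wet_given_rain = {True:  {True: 0.9, False: 0.1},    # P(Wet | Rain=True)
                    False: {True: 0.1, False: 0.9}}    # P(Wet | Rain=False)

def probabilistic_relation():
    """Return tuples (rain, wet, prob): a relation with a probability attribute."""
    rel = []
    for rain, wet in product([True, False], repeat=2):
        prob = p_rain[rain] * p_wet_given_rain[rain][wet]
        rel.append((rain, wet, prob))
    return rel

# In the paper's setting, each extracted tuple would additionally carry a
# lifespan bounding how long it remains valid under a continual query.
for row in probabilistic_relation():
    print(row)
```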

    Skew-Aware Collective Communication for MapReduce Shuffling

    This paper proposes and examines three in-memory shuffling methods designed to address problems in MapReduce shuffling caused by skewed data. Coupled Shuffle Architecture (CSA) employs a single pairwise all-to-all exchange to shuffle both blocks, the units of shuffle transfer, and meta-blocks, which contain the metadata of the corresponding blocks. Decoupled Shuffle Architecture (DSA) separates the shuffling of meta-blocks and blocks and applies a different all-to-all exchange algorithm to each shuffling process, attempting to mitigate the impact of stragglers under strongly skewed distributions. Decoupled Shuffle Architecture with Skew-Aware Meta-Shuffle (DSA w/ SMS) autonomously determines the proper placement of blocks based on the memory consumption of each worker process, targeting extremely skewed situations in which some worker processes could exceed their node's memory limit. This study evaluates implementations of the three shuffling methods in our prototype in-memory MapReduce engine, which employs high-performance interconnects such as InfiniBand and Intel Omni-Path. Our results suggest that DSA w/ SMS is the only viable solution for extremely skewed data distributions. We also present a detailed investigation of the performance of CSA and DSA in various skew situations.
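
    As an illustration of the skew-aware placement idea in DSA w/ SMS, the sketch below greedily assigns the largest blocks to the currently least-loaded worker once block sizes are known from the meta-shuffle. The greedy policy and all names are assumptions for illustration, not the paper's exact algorithm.

```python
import heapq

# Simplified sketch of a skew-aware placement step: after a metadata-only
# shuffle, block sizes are known, so destinations can be chosen to balance
# memory across workers, e.g. largest blocks first to the least-loaded worker.

def place_blocks(block_sizes, n_workers):
    """Return {worker_id: [block indices]} balancing total bytes per worker."""
    heap = [(0, w) for w in range(n_workers)]        # (bytes assigned, worker)
    heapq.heapify(heap)
    placement = {w: [] for w in range(n_workers)}
    # Largest blocks first, so one huge partition cannot exhaust a single
    # worker's memory while others stay nearly empty.
    for idx in sorted(range(len(block_sizes)), key=lambda i: -block_sizes[i]):
        load, w = heapq.heappop(heap)
        placement[w].append(idx)
        heapq.heappush(heap, (load + block_sizes[idx], w))
    return placement

print(place_blocks([900, 40, 30, 20, 10], n_workers=2))
```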