
    B+-tree Index Optimization by Exploiting Internal Parallelism of Flash-based Solid State Drives

    Previous research addressed the potential problems that the hard-disk-oriented design of DBMSs poses on flash-based solid state drives (flashSSDs). In this paper, we focus on exploiting the potential benefits of flashSSDs. First, we examine the internal-parallelism issues of flashSSDs by running benchmarks on various flashSSDs. Then, we suggest algorithm-design principles for making the best use of this internal parallelism. We present a new I/O request concept, called psync I/O, that can exploit the internal parallelism of flashSSDs within a single process. Based on these ideas, we introduce B+-tree optimization methods that utilize the internal parallelism. By integrating the results of these methods, we present a B+-tree variant, the PIO B-tree. We confirmed that each optimization method substantially enhances index performance. Consequently, the PIO B-tree improved the B+-tree's insert performance by a factor of up to 16.3, while improving point-search performance by a factor of 1.2. The range search of the PIO B-tree was up to 5 times faster than that of the B+-tree. Moreover, the PIO B-tree outperformed other flash-aware indexes on various synthetic workloads. We also confirmed that the PIO B-tree outperforms the B+-tree on index traces collected inside the PostgreSQL DBMS with the TPC-C benchmark. Comment: VLDB201
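
    The paper's psync I/O primitive itself is not reproduced here; as a rough illustration of the underlying idea (keeping several requests outstanding from a single process so the drive's channels and ways can serve them in parallel), the sketch below issues a batch of positional reads concurrently through a thread pool. The file path, page size, and queue depth are illustrative assumptions, not values from the paper.

        # Rough sketch (not the paper's psync I/O): keep multiple reads in flight
        # so a flashSSD can serve them from different channels/ways in parallel.
        import os
        from concurrent.futures import ThreadPoolExecutor

        PAGE_SIZE = 4096     # assumed page size
        QUEUE_DEPTH = 16     # number of requests kept in flight (assumption)

        def read_pages(path, page_numbers):
            fd = os.open(path, os.O_RDONLY)
            try:
                def read_one(pno):
                    # Positional read; offsets are independent, so the drive
                    # may service the requests concurrently.
                    return os.pread(fd, PAGE_SIZE, pno * PAGE_SIZE)
                with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
                    return list(pool.map(read_one, page_numbers))
            finally:
                os.close(fd)

        # Example: fetch 64 scattered pages with up to 16 requests in flight.
        # pages = read_pages("/path/to/index.file", range(0, 64 * 8, 8))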

    Bridges Between Islands: Cross-Chain Technology for Distributed Ledger Technology

    Since the emergence of blockchain in 2008, we have seen a kaleidoscopic variety of applications built on distributed ledger technology (DLT), including applications for financial services, healthcare, and the Internet of Things. Yet each application comes with specific requirements for DLT characteristics (e.g., high throughput, scalability). However, trade-offs between DLT characteristics restrict the development of a DLT design (e.g., Ethereum, IOTA) that fits all use cases’ requirements simultaneously. Consequently, separate DLT designs have emerged, each specialized to suit dedicated application requirements. To enable the development of more powerful applications on DLT, such DLT islands must be bridged. However, knowledge on cross-chain technology (CCT) is scattered across scientific and practical sources. Therefore, we examine this diverse body of knowledge and provide comprehensive insights into CCT by synthesizing its underlying characteristics, evolving patterns, and use cases. Our findings resolve existing contradictions in the literature and provide avenues for future research in an emerging scientific field.

    Group-based replication of on-line transaction processing servers

    Several techniques for database replication using group communication have recently been proposed, namely the Database State Machine, Postgres-R, and the NODO protocol. Although all rely on a totally ordered multicast for consistency, they differ substantially in how the multicast is used. This results in different performance trade-offs that are hard to compare, as each protocol is presented using a different load scenario and evaluation method. In this paper we evaluate the suitability of such protocols for replicating On-Line Transaction Processing (OLTP) applications in clusters of servers and over wide-area networks. This is achieved by implementing them on a common infrastructure and by using a standard workload. The results allow us to select the best protocol regarding performance and scalability in a demanding but realistic usage scenario. Project STRONGRE (POSI/CHS/41285/2001) funded by the Fundação para a Ciência e a Tecnologia (FCT).
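
    As a toy illustration of the common core these protocols share (every replica deterministically applies the same totally ordered stream of updates), the sketch below stands in a shared list for the totally ordered multicast channel. The class and function names are hypothetical and do not belong to any of the cited protocols, which differ precisely in how they use the multicast.

        # Toy sketch of state-machine-style replication over a total order.
        # A real system would use a group communication toolkit with atomic
        # broadcast; here a shared list stands in for the ordered channel.
        total_order = []     # delivery sequence seen identically by all replicas

        def multicast(update):
            # Append the update; every replica observes the same sequence.
            total_order.append(update)

        class Replica:
            def __init__(self, name):
                self.name = name
                self.db = {}       # replicated key-value state
                self.applied = 0   # position in the total order already applied

            def deliver_pending(self):
                # Apply updates in delivery order; determinism keeps replicas identical.
                while self.applied < len(total_order):
                    key, value = total_order[self.applied]
                    self.db[key] = value
                    self.applied += 1

        # Usage: two replicas converge to the same state.
        # r1, r2 = Replica("r1"), Replica("r2")
        # multicast(("acct:42", 100)); multicast(("acct:42", 90))
        # r1.deliver_pending(); r2.deliver_pending()
        # assert r1.db == r2.db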

    Persistent Buffer Management with Optimistic Consistency

    Finding the best way to leverage non-volatile memory (NVM) in modern database systems is still an open problem. The answer is far from trivial, since the clear boundary between memory and storage present in most systems seems to be incompatible with the intrinsic memory-storage duality of NVM. Rather than treating NVM solely as memory or solely as storage, in this work we propose how NVM can be used as both simultaneously in the context of modern database systems. We design a persistent buffer pool on NVM, enabling pages to be directly read and written by the CPU (like memory) while recovering corrupted pages after a failure (like storage). The main benefits of our approach are easy integration into existing database architectures, reduced cost (by replacing DRAM with NVM), and faster recovery to peak performance.
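
    The abstract does not spell out a mechanism, so the following is only a minimal sketch of the general idea, not the paper's design: buffer frames live in a memory-mapped file standing in for NVM, the CPU reads and writes pages in place, and a per-page checksum lets recovery detect pages corrupted by a crash so the caller can refetch them from the database file. Paths, sizes, and names are illustrative assumptions.

        # Minimal sketch: NVM-resident buffer frames with per-page checksums.
        import mmap, os, zlib

        PAGE_SIZE = 4096
        CKSUM_SIZE = 4     # trailing CRC32 per frame

        class PersistentBufferPool:
            def __init__(self, nvm_path, frames):
                size = frames * (PAGE_SIZE + CKSUM_SIZE)
                fd = os.open(nvm_path, os.O_RDWR | os.O_CREAT, 0o644)
                os.ftruncate(fd, size)
                # Memory-mapped region stands in for byte-addressable NVM.
                self.buf = mmap.mmap(fd, size)
                os.close(fd)

            def _off(self, frame):
                return frame * (PAGE_SIZE + CKSUM_SIZE)

            def write_page(self, frame, data):
                off = self._off(frame)
                self.buf[off:off + PAGE_SIZE] = data.ljust(PAGE_SIZE, b"\0")
                crc = zlib.crc32(self.buf[off:off + PAGE_SIZE])
                self.buf[off + PAGE_SIZE:off + PAGE_SIZE + CKSUM_SIZE] = crc.to_bytes(4, "little")

            def read_page_after_restart(self, frame):
                off = self._off(frame)
                page = bytes(self.buf[off:off + PAGE_SIZE])
                crc = int.from_bytes(self.buf[off + PAGE_SIZE:off + PAGE_SIZE + CKSUM_SIZE], "little")
                if zlib.crc32(page) != crc:
                    return None    # corrupted: caller reloads the page from storage
                return page        # survived the failure: usable immediately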

    A Survey on Transactional Stream Processing

    Transactional stream processing (TSP) strives to create a cohesive model that merges the advantages of both transactional and stream-oriented guarantees. Over the past decade, numerous endeavors have contributed to the evolution of TSP solutions, uncovering similarities and distinctions among them. Despite these advances, a universally accepted standard approach for integrating transactional functionality with stream processing remains to be established. Existing TSP solutions predominantly concentrate on specific application characteristics and involve complex design trade-offs. This survey introduces TSP and presents our perspective on its future progression. Our primary goals are twofold: to provide insights into the diverse TSP requirements and methodologies, and to inspire the design and development of groundbreaking TSP systems.