Persistent Buffer Management with Optimistic Consistency
Finding the best way to leverage non-volatile memory (NVM) in modern database
systems is still an open problem. The answer is far from trivial, since the
clear boundary between memory and storage present in most systems seems to be
incompatible with the intrinsic memory-storage duality of NVM. Rather than
treating NVM solely as memory or solely as storage, in this work we propose
how NVM can be used as both simultaneously in the context of modern database
systems. We design a persistent buffer pool on NVM, enabling pages to be
directly read and written by the CPU (like memory) while recovering corrupted
pages after a failure (like storage). The main benefits of our approach are
easy integration into existing database architectures, reduced cost (by
replacing DRAM with NVM), and faster recovery to peak performance.
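The abstract's core idea, a buffer pool whose pages live on NVM and can be validated and repaired after a crash, can be sketched in a few lines. This is a toy model under stated assumptions: NVM is simulated with plain bytearrays, each page carries a CRC footer checked on read, and all names (`PersistentBufferPool`, `write_page`, `read_page`) are illustrative rather than the paper's actual design.

```python
import zlib

PAGE_SIZE = 4096

class PersistentBufferPool:
    """Toy model of a checksummed buffer pool kept on NVM.

    NVM is simulated with bytearrays; a real implementation would map
    DAX-capable persistent memory and order its stores with flush/fence
    instructions. Layout and names are illustrative assumptions.
    """

    def __init__(self, num_pages):
        # Each slot holds the page payload plus a 4-byte CRC footer.
        self.nvm = [bytearray(PAGE_SIZE + 4) for _ in range(num_pages)]

    def write_page(self, page_id, data):
        assert len(data) == PAGE_SIZE
        slot = self.nvm[page_id]
        slot[:PAGE_SIZE] = data                       # CPU store, like memory
        crc = zlib.crc32(bytes(data))
        slot[PAGE_SIZE:] = crc.to_bytes(4, "little")  # persist checksum last

    def read_page(self, page_id):
        slot = self.nvm[page_id]
        data = bytes(slot[:PAGE_SIZE])
        stored = int.from_bytes(slot[PAGE_SIZE:], "little")
        # After a failure, a mismatch marks the page corrupted, so it can
        # be reloaded from durable storage -- the "like storage" behavior.
        if zlib.crc32(data) != stored:
            raise IOError(f"page {page_id} corrupted; restore from storage")
        return data
```

A torn or partial write surfaces as a checksum mismatch on the next read, which is what lets the pool recover corrupted pages instead of trusting NVM contents blindly.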
From Cooperative Scans to Predictive Buffer Management
In analytical applications, database systems often need to sustain workloads
with multiple concurrent scans hitting the same table. The Cooperative Scans
(CScans) framework, which introduces an Active Buffer Manager (ABM) component
into the database architecture, has been the most effective and elaborate
response to this problem, and was initially developed in the X100 research
prototype. We now report on the experiences of integrating Cooperative
Scans into its industrial-strength successor, the Vectorwise database product.
During this implementation we invented a simpler optimization of concurrent
scan buffer management, called Predictive Buffer Management (PBM). PBM is based
on the observation that in a workload with long-running scans, the buffer
manager has quite a bit of information on the workload in the immediate future,
such that an approximation of the ideal OPT algorithm becomes feasible. In the
evaluation on both synthetic benchmarks and a TPC-H throughput run, we
compare naive buffer management (LRU) against CScans, PBM, and OPT, showing
that PBM achieves benefits close to Cooperative Scans while incurring much
lower architectural impact.
Comment: VLDB201
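The observation driving PBM, that scan positions let the buffer manager estimate when each page will next be needed, can be sketched as a Belady-style eviction rule. This is a minimal sketch of the idea, not the Vectorwise implementation; the function name and the flat page-number interface are assumptions for illustration.

```python
def pbm_evict(buffer_pages, scan_positions):
    """Pick a victim page by approximating Belady's OPT.

    With long-running sequential scans, a page's next access can be
    estimated as its distance ahead of the nearest scan that has not yet
    passed it. Evict the page whose estimated next access is furthest away.

    buffer_pages:   page numbers currently buffered
    scan_positions: page numbers at which active scans currently stand
    """
    def next_access_distance(page):
        # Distances from scans that still have this page ahead of them.
        ahead = [page - pos for pos in scan_positions if pos <= page]
        return min(ahead) if ahead else float("inf")  # never needed again

    return max(buffer_pages, key=next_access_distance)
```

A page every scan has already passed gets infinite distance and is evicted first, which is exactly what OPT would do with perfect future knowledge.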
Polylogarithmic guarantees for generalized reordering buffer management
In the Generalized Reordering Buffer Management Problem (GRBM), a sequence of items located in a metric space arrives online and has to be processed by a set of k servers moving within the space. In a single step, the first b still-unprocessed items from the sequence are accessible, and a scheduling strategy has to select an item and a server; the chosen item is then processed by moving the chosen server to its location. The goal is to process all items while minimizing the total distance travelled by the servers. This problem was introduced in [Chan, Megow, Sitters, van Stee TCS 12] and has subsequently been studied in an online setting by [Azar, Englert, Gamzu, Kidron STACS 14]. It is a natural generalization of two very well-studied problems: the k-server problem for b=1 and the Reordering Buffer Management Problem (RBM) for k=1. In this paper we consider the online version of GRBM on a uniform metric. We show how to obtain a competitive ratio of O(log k (log k + log log b)) for this problem. Our result is a drastic improvement in the dependency on b over the previous best bound of O(√b log k), and is asymptotically optimal for constant k, because Ω(log k + log log b) is a lower bound for GRBM on uniform metrics.
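The mechanics of the GRBM setting (a lookahead window of b items, k movable servers, unit cost per move on a uniform metric) can be illustrated with a deliberately naive greedy simulation. This sketches the problem only, under assumed list-based interfaces; the paper's O(log k (log k + log log b))-competitive strategy is far more involved than this greedy rule.

```python
def greedy_grbm_uniform(items, k, b):
    """Simulate GRBM on a uniform metric with a naive greedy strategy.

    At each step the first b unprocessed items are visible. We prefer an
    item already colocated with some server (cost 0); otherwise we move
    server 0 to the first visible item (cost 1, since on a uniform metric
    every move between distinct points costs the same).
    """
    servers = [None] * k       # current server locations (None = unplaced)
    remaining = list(items)
    cost = 0
    while remaining:
        window = remaining[:b]           # the b accessible items
        for item in window:              # free processing if colocated
            if item in servers:
                remaining.remove(item)
                break
        else:                            # no colocated item: pay one move
            item = window[0]
            servers[0] = item
            cost += 1
            remaining.remove(item)
    return cost
```

With b=1 this degenerates to the k-server problem's request order, and with k=1 to classic reordering buffer management, mirroring the two special cases the abstract names.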
HARQ Buffer Management: An Information-Theoretic View
A key practical constraint on the design of Hybrid automatic repeat request
(HARQ) schemes is the size of the on-chip buffer that is available at the
receiver to store previously received packets. In fact, in modern wireless
standards such as LTE and LTE-A, the HARQ buffer size is one of the main
drivers of the modem area and power consumption. This has recently highlighted
the importance of HARQ buffer management, that is, of the use of buffer-aware
transmission schemes and of advanced compression policies for the storage of
received data. This work investigates HARQ buffer management by leveraging
information-theoretic achievability arguments based on random coding.
Specifically, standard HARQ schemes, namely Type-I, Chase Combining and
Incremental Redundancy, are first studied under the assumption of a
finite-capacity HARQ buffer by considering both coded modulation, via Gaussian
signaling, and Bit Interleaved Coded Modulation (BICM). The analysis sheds
light on the impact of different compression strategies, namely the
conventional compression of log-likelihood ratios and the direct digitization
of baseband signals, on the throughput. Then, coding strategies based on
layered modulation and optimized coding blocklength are investigated,
highlighting the benefits of HARQ buffer-aware transmission schemes. The
optimization of baseband compression for multiple-antenna links is also
studied, demonstrating the optimality of a transform coding approach.
Comment: submitted to IEEE International Symposium on Information Theory
(ISIT) 2015. 29 pages, 12 figures, submitted to journal publication.
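The "conventional compression of log-likelihood ratios" the abstract contrasts with direct baseband digitization amounts to storing each soft value in a few bits. A minimal sketch, assuming a uniform quantizer with a fixed clipping range (an illustrative choice, not the paper's scheme):

```python
def quantize_llrs(llrs, bits, clip=8.0):
    """Uniformly quantize log-likelihood ratios for HARQ buffer storage.

    Each LLR is saturated to [-clip, clip] and mapped to one of 2**bits
    levels, so the buffer holds `bits` bits per soft value instead of a
    full-precision float. Parameters are illustrative assumptions.
    """
    levels = 2 ** bits
    step = 2 * clip / (levels - 1)
    out = []
    for x in llrs:
        x = max(-clip, min(clip, x))      # saturate to the clipping range
        idx = round((x + clip) / step)    # quantizer index in [0, levels-1]
        out.append(idx * step - clip)     # reconstructed (dequantized) LLR
    return out
```

The trade-off the paper analyzes is visible here: fewer bits shrink the on-chip buffer but coarsen the stored soft information, reducing the combining gain on retransmissions.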
Buffer Management System
The patent application included in this report describes a buffer management system for use in a packet network supporting general multipoint communication. In multipoint connections with multiple transmitters, it is not possible to enforce bandwidth usage strictly by monitoring individual transmitters. We describe a method, together with a practical implementation, that monitors the bandwidth use of the various connections within the network and, in the event of overload, protects the connections that are operating within their bandwidth allotment.
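The per-connection monitoring the abstract describes can be sketched with a token-bucket meter, one per connection, so that on buffer overload packets from connections exceeding their allotment are preferred for discard. Token-bucket metering is an assumed illustration here, not the patent's own mechanism, and the class interface is hypothetical.

```python
class TokenBucket:
    """Per-connection bandwidth meter (illustrative sketch).

    `rate` is the connection's allotted bandwidth in bytes per time unit,
    `burst` the tolerated burst size. Packets that conform are within the
    allotment and should be protected under overload; non-conforming
    packets are candidates for discard.
    """

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst     # start with a full bucket
        self.last = 0.0

    def conforms(self, now, packet_bytes):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True         # within allotment: protect under overload
        return False            # over allotment: candidate for discard
```

This captures the abstract's key point: enforcement keys off measured per-connection usage against the allotment, not off any single transmitter, which is what makes it workable for multipoint connections with multiple senders.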