Improving the Performance and Endurance of Persistent Memory with Loose-Ordering Consistency
Persistent memory provides high-performance data persistence at main memory.
Memory writes need to be performed in strict order to satisfy storage
consistency requirements and enable correct recovery from system crashes.
Unfortunately, adhering to such a strict order significantly degrades system
performance and persistent memory endurance. This paper introduces a new
mechanism, Loose-Ordering Consistency (LOC), that satisfies these ordering
requirements with significantly lower performance and endurance loss. LOC
consists of two key techniques. First, Eager Commit eliminates the need to
write a persistent commit record within a transaction: the necessary metadata
is stored statically with the blocks of data written to memory, so that the
status of all committed transactions can be determined during recovery.
Second, Speculative Persistence relaxes the write
ordering between transactions by allowing writes to be speculatively written to
persistent memory. A speculative write is made visible to software only after
its associated transaction commits. To enable this, our mechanism supports
tracking of committed transaction IDs and multi-versioning in the CPU cache. Our
evaluations show that LOC reduces the average performance overhead of memory
persistence from 66.9% to 34.9% and the memory write traffic overhead from
17.1% to 3.4% on a variety of workloads.
Comment: This paper has been accepted by IEEE Transactions on Parallel and Distributed Systems.
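To make the ordering overhead concrete, below is a minimal C sketch of the strict-ordering baseline that LOC relaxes. It is not the paper's hardware mechanism, and all function and variable names are hypothetical: data blocks are flushed and fenced before a separate commit record is persisted, which is exactly the extra persistent write and serialization that Eager Commit removes by tagging data blocks with transaction metadata instead.

```c
/* Minimal sketch of the strict-ordering baseline that LOC relaxes.
 * All names (pm_tx_*, persist_block) are hypothetical; the real LOC
 * mechanism moves the commit metadata into per-block tags. */
#include <emmintrin.h>   /* _mm_clflush, _mm_sfence */
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 64

/* Flush one cache-line-sized block and order it before any later write
 * (this flush+fence is the ordering cost the paper targets). */
static void persist_block(void *dst, const void *src) {
    memcpy(dst, src, BLOCK_SIZE);
    _mm_clflush(dst);
    _mm_sfence();
}

/* Strict-ordering commit: all data blocks must reach persistence before
 * the commit record does, serializing writes within the transaction. */
static void pm_tx_commit_strict(void *data_area, const void *new_data,
                                size_t nblocks, uint64_t *commit_record,
                                uint64_t tx_id) {
    for (size_t i = 0; i < nblocks; i++)
        persist_block((char *)data_area + i * BLOCK_SIZE,
                      (const char *)new_data + i * BLOCK_SIZE);
    *commit_record = tx_id;       /* the extra persistent write that    */
    _mm_clflush(commit_record);   /* Eager Commit eliminates by storing */
    _mm_sfence();                 /* metadata with the data blocks      */
}

int main(void) {
    static char pm_data[4 * BLOCK_SIZE];   /* stand-in for a PM region */
    static uint64_t pm_commit_record;
    char new_data[4 * BLOCK_SIZE] = {0};
    pm_tx_commit_strict(pm_data, new_data, 4, &pm_commit_record, 1);
    return 0;
}
```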
Implications of non-volatile memory as primary storage for database management systems
Traditional Database Management System (DBMS) software relies on hard disks for storing relational data. Hard disks are cheap, persistent, and offer huge storage capacities, but their data retrieval latency is very high. To hide this latency, DRAM is used as intermediate storage. DRAM is significantly faster than disk, but is deployed in smaller capacities due to cost and power constraints, and lacks the persistence that disks provide. Non-Volatile Memory (NVM) is an emerging storage class technology that promises the best of both worlds: it can offer large storage capacities, due to better scaling and cost metrics than DRAM, and is non-volatile (persistent) like hard disks, while its data retrieval time is much lower than that of hard disks and it is byte-addressable like DRAM. In this paper, we explore the implications of employing NVM as primary storage for a DBMS; that is, we investigate the modifications that need to be applied to a traditional relational DBMS to take advantage of NVM features. As a case study, we have modified the storage engine (SE) of PostgreSQL to make efficient use of NVM hardware. We detail the necessary changes and challenges such modifications entail and evaluate them using a comprehensive emulation platform. Results indicate that our modified SE reduces query execution time by up to 40% and 14.4% when compared to disk and NVM storage, with average reductions of 20.5% and 4.5%, respectively.
The research leading to these results has received funding from the European Union's 7th Framework Programme under grant agreement number 318633, the Ministry of Science and Technology of Spain under contract TIN2015-65316-P, and a HiPEAC collaboration grant awarded to Naveed Ul Mustafa.
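As a rough illustration of the direction such storage engine modifications take, the sketch below is not the paper's modified PostgreSQL code, and the file path and access pattern are made up: it maps an NVM-backed DAX file so that relational data can be read in place, byte-addressably, instead of being staged block-by-block through a DRAM buffer pool.

```c
/* Hypothetical sketch: read a relation directly from an NVM-backed
 * DAX file via mmap, bypassing an intermediate DRAM buffer pool. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/mnt/pmem/relation.dat", O_RDONLY);  /* path is made up */
    if (fd < 0) { perror("open"); return 1; }

    off_t len = lseek(fd, 0, SEEK_END);
    /* On a DAX-mounted filesystem this mapping points directly at NVM,
     * so accesses dereference persistent memory rather than DRAM copies. */
    const char *rel = mmap(NULL, (size_t)len, PROT_READ, MAP_SHARED, fd, 0);
    if (rel == MAP_FAILED) { perror("mmap"); return 1; }

    /* A scan now touches NVM in place instead of issuing block reads
     * through the buffer manager. */
    long checksum = 0;
    for (off_t i = 0; i < len; i++) checksum += rel[i];
    printf("scanned %ld bytes, checksum %ld\n", (long)len, checksum);

    munmap((void *)rel, (size_t)len);
    close(fd);
    return 0;
}
```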
MTrainS: Improving DLRM training efficiency using heterogeneous memories
Recommendation models are very large, requiring terabytes (TB) of memory
during training. In pursuit of better quality, the model size and complexity
grow over time, which requires additional training data to avoid overfitting.
This model growth demands a large number of resources in data centers. Hence,
training efficiency is becoming considerably more important to keep the data
center power demand manageable. In Deep Learning Recommendation Models (DLRM),
sparse features capturing categorical inputs through embedding tables are the
major contributors to model size and require high memory bandwidth. In this
paper, we study the bandwidth requirement and locality of embedding tables in
real-world deployed models. We observe that the bandwidth requirement is not
uniform across different tables and that embedding tables show high temporal
locality. We then design MTrainS, which leverages heterogeneous memory,
including byte and block addressable Storage Class Memory for DLRM
hierarchically. MTrainS allows for higher memory capacity per node and
increases training efficiency by lowering the need to scale out to multiple
hosts in memory capacity bound use cases. By optimizing the platform memory
hierarchy, we reduce the number of nodes for training by 4-8X, saving power and
cost of training while meeting our target training performance.
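The placement idea can be pictured with a small sketch. The code below is a hypothetical illustration rather than MTrainS itself: embedding tables are ranked by observed bandwidth demand and assigned to DRAM, byte-addressable SCM, or block-addressable SCM, with thresholds chosen arbitrarily for the example.

```c
/* Rough sketch of bandwidth-driven table placement across heterogeneous
 * memory tiers. Struct layout, thresholds, and table data are made up. */
#include <stdio.h>
#include <stdlib.h>

typedef enum { TIER_DRAM, TIER_SCM_BYTE, TIER_SCM_BLOCK } tier_t;

typedef struct {
    const char *name;
    double gbps;      /* observed bandwidth demand during training */
    tier_t tier;
} emb_table_t;

static int by_bandwidth_desc(const void *a, const void *b) {
    double d = ((const emb_table_t *)b)->gbps - ((const emb_table_t *)a)->gbps;
    return (d > 0) - (d < 0);
}

int main(void) {
    emb_table_t tables[] = {
        {"user_id", 42.0, TIER_DRAM}, {"item_id", 18.5, TIER_DRAM},
        {"geo",      2.1, TIER_DRAM}, {"device",   0.3, TIER_DRAM},
    };
    size_t n = sizeof tables / sizeof tables[0];
    qsort(tables, n, sizeof tables[0], by_bandwidth_desc);

    for (size_t i = 0; i < n; i++) {
        /* Hot tables stay in DRAM; warm ones go to byte-addressable SCM;
         * cold ones to block-addressable SCM (thresholds are arbitrary). */
        if (tables[i].gbps > 10.0)      tables[i].tier = TIER_DRAM;
        else if (tables[i].gbps > 1.0)  tables[i].tier = TIER_SCM_BYTE;
        else                            tables[i].tier = TIER_SCM_BLOCK;
        printf("%-8s -> tier %d\n", tables[i].name, tables[i].tier);
    }
    return 0;
}
```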
Demystifying the Performance of HPC Scientific Applications on NVM-based Memory Systems
The emergence of high-density byte-addressable non-volatile memory (NVM) is
promising to accelerate data- and compute-intensive applications. Current NVM
technologies have lower performance than DRAM and, thus, are often paired with
DRAM in a heterogeneous main memory. Recently, byte-addressable NVM hardware
has become available. This work provides a timely evaluation of representative HPC
applications from the "Seven Dwarfs" on NVM-based main memory. Our results
quantify the effectiveness of DRAM-cached-NVM for accelerating HPC applications
and enabling large problems beyond the DRAM capacity. On uncached-NVM, HPC
applications exhibit three tiers of performance sensitivity, i.e., insensitive,
scaled, and bottlenecked. We identify write throttling and concurrency control
as the priorities in optimizing applications. We highlight that concurrency
change may have a diverging effect on read and write accesses in applications.
Based on these findings, we explore two optimization approaches. First, we
provide a prediction model that uses datasets from a small set of
configurations to estimate performance at various concurrency and data sizes to
avoid exhaustive search in the configuration space. Second, we demonstrate that
write-aware data placement on uncached-NVM could achieve x performance
improvement with a 60% reduction in DRAM usage.
Comment: 34th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2020)
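As an illustration of what write-aware placement means in practice, the following sketch keeps write-intensive arrays in DRAM and places read-mostly arrays on NVM; the allocators, ratios, and array names are hypothetical stand-ins, not the paper's tooling.

```c
/* Illustrative sketch of write-aware placement on uncached NVM: arrays
 * with high write intensity stay in DRAM, read-mostly arrays go to NVM,
 * trading DRAM capacity against NVM's weaker write throughput. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *name;
    double write_ratio;   /* fraction of accesses that are writes */
    size_t bytes;
} array_desc_t;

/* Stand-ins: a real system would back nvm_alloc with a DAX mapping or a
 * tier-aware allocator; here both just use the heap for illustration. */
static void *dram_alloc(size_t n) { return malloc(n); }
static void *nvm_alloc(size_t n)  { return malloc(n); }

int main(void) {
    array_desc_t arrays[] = {
        {"residual",   0.45, 1u << 20},  /* write-heavy: keep in DRAM */
        {"stencil_in", 0.02, 1u << 24},  /* read-mostly: place on NVM */
    };
    for (size_t i = 0; i < 2; i++) {
        void *p = arrays[i].write_ratio > 0.10
                      ? dram_alloc(arrays[i].bytes)
                      : nvm_alloc(arrays[i].bytes);
        printf("%-10s -> %s\n", arrays[i].name,
               arrays[i].write_ratio > 0.10 ? "DRAM" : "NVM");
        free(p);
    }
    return 0;
}
```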